Search Results

Search found 43856 results on 1755 pages for 'header files'.

  • Build Open JDK 7 on Mac OSX (TOTD #172)

    - by arungupta
    The complete requirements, pre-requisites, and steps to build the OpenJDK 7 port on Mac OS X are described here. The steps are very clearly explained and here are the exact ones I followed on my MacBook Pro running Mac OS X 10.7.2: Confirm the version of pre-installed Java as: > java -version java version "1.6.0_26" Java(TM) SE Runtime Environment (build 1.6.0_26-b03-383-11A511c) Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02-383, mixed mode) Download and install Mercurial from mercurial.berkwood.com (zip bundle for 10.7 is here). It gets installed in the /usr/local/bin directory. Get the source code as (commands highlighted in bold): hg clone http://hg.openjdk.java.net/macosx-port/macosx-port destination directory: macosx-port requesting all changes adding changesets adding manifests adding file changes added 437 changesets with 364 changes to 33 files updating to branch default 31 files updated, 0 files merged, 0 files removed, 0 files unresolved cd macosx-port chmod 7555 get_source.sh ./get_source.sh # Repos:  corba jaxp jaxws langtools jdk hotspot Starting on corba Starting on jaxp Starting on jaxws Starting on langtools Starting on jdk Starting on hotspot # hg clone http://hg.openjdk.java.net/macosx-port/macosx-port/corba corba requesting all changes adding changesets adding manifests adding file changes added 396 changesets with 3275 changes to 1379 files . . . # exit code 0 # cd ./corba && hg pull -u pulling from http://hg.openjdk.java.net/macosx-port/macosx-port/corba searching for changes no changes found # exit code 0 # cd ./jaxp && hg pull -u pulling from http://hg.openjdk.java.net/macosx-port/macosx-port/jaxp searching for changes no changes found # exit code 0 Install Xcode from the App Store. Include /Developer/usr/bin in PATH. Note: JDK 1.6.0_26 came pre-installed on my laptop and I installed Xcode after that. The compilation went fine and there was no need to re-install the Java for Mac OS X as mentioned in the original steps. Build the code as: make ALLOW_DOWNLOADS=true SA_APPLE_BOOT_JAVA=true ALWAYS_PASS_TEST_GAMMA=true ALT_BOOTDIR=`/usr/libexec/java_home -v 1.6` HOTSPOT_BUILD_JOBS=`sysctl -n hw.ncpu` The final output is shown as: >>>Finished making images @ Sat Nov 19 00:59:04 WET 2011 ... ############################################################################# Leaving jdk for target(s) sanity all docs images ############################################################################# Build time 00:17:42 jdk for target(s) sanity all docs images ############################################################################# Build times ########## Target all_product_build Start 2011-11-19 00:32:40 End 2011-11-19 00:59:04 00:01:46 corba 00:04:07 hotspot 00:00:51 jaxp 00:01:21 jaxws 00:17:42 jdk 00:00:37 langtools 00:26:24 TOTAL ######################### Change the directory and verify the version: > cd build/macosx-universal/j2sdk-image/1.7.0.jdk/Contents/Home/bin > ./java -version openjdk version "1.7.0-internal" OpenJDK Runtime Environment (build 1.7.0-internal-arungup_2011_11_19_00_32-b00) OpenJDK 64-Bit Server VM (build 21.0-b17, mixed mode) Now go fix some bugs, file new bugs, or discuss at the macosx-port-dev mailing list.

    Read the article

  • nTop RRD file architecture

    - by Seanny123
    I have a gig of nTop RRD files and I would like to start graphing them with rrdtool (but not with nTop, since I'm hoping to do this with a separate backup of the database as a workaround to the impossibility of limiting the RRD files by size), but I don't know how the files are structured. I've tried reading the RRD documentation from SourceForge and the nTop FAQ, but I'm not finding the information I need. Does anyone know of any documentation I should be looking at, or how the files are structured? Here https://dl.dropbox.com/u/669437/file%20structure.png is a screenshot of the file structure. At first I thought it was organized by IP address (so the rrd files for address 1.1.2.3 would be stored in folder 1-1-2-3, or even in reverse order), but that doesn't seem to be the case. It isn't organized by MAC address either, although some hosts are saved that way. Any help would be appreciated.
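
    As a starting point, rrdtool itself can describe how each file is structured, which sidesteps the directory-layout question for graphing. A minimal sketch (the file name and the "bytes" data-source name are illustrative guesses, not taken from an actual nTop database):

        # List the data sources and archives a given RRD actually contains
        rrdtool info 192-168-0-1.rrd | grep -E 'ds\[|rra\['

        # Once a DS name (e.g. "bytes") is known, graph the last day of it
        rrdtool graph traffic.png --start -1d \
            DEF:in=192-168-0-1.rrd:bytes:AVERAGE \
            LINE1:in#0000FF:"bytes"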

    Read the article

  • "You need to confirm this operation" message when trying to delete a file

    - by Richard West
    I'm trying to delete a few files that I created, and have full permissions on, on a Windows 2008 system. The files are within a folder that I created, so they are not system files of any kind. The message box that pops up when I try to delete the file is titled "Destination Folder Access Denied", and the message is "you need to confirm this operation", with a continue, skip or cancel button. I disabled UAC and rebooted to see if this would make the message go away -- it did not. However, with UAC disabled I am able to click on continue and the files are deleted. With UAC enabled I had to provide elevated credentials before the files would delete. What causes this behaviour and how can I remove it?
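
    If the immediate goal is just to make the files deletable without the prompt, one generic approach (a sketch only; the path is illustrative and this is not a confirmed fix for the prompt itself) is to retake ownership and regrant full control from an elevated command prompt:

        takeown /F C:\path\to\folder /R /D Y
        icacls C:\path\to\folder /grant Users:F /T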

    Read the article

  • Flexible cloud file storage for a web.py app?

    - by benwad
    I'm creating a web app using web.py (although I may later rewrite it for Tornado) which involves a lot of file manipulation. One example, the app will have a git-style 'commit' operation, in which some files are sent to the server and placed in a new folder along with the unchanged files from the last version. This will involve copying the old folder to the new folder, replacing/adding/deleting the files in the commit to the new folder, then deleting all unchanged files in the old folder (as they are now in the new folder). I've decided on Heroku for the app hosting environment, and I am currently looking at cloud storage options that are built with these kinds of operations in mind. I was thinking of Amazon S3, however I'm not sure if that lets you carry out these kinds of file operations in-place. I was thinking I may have to load these files into the server's RAM and then re-insert them into the bucket, costing me a fortune. I was also thinking of Progstr Filer (http://filer.progstr.com/index.html) but that seems to only integrate with Rails apps. Can anyone help with this? Basically I want file operations to be as cheap as possible.
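
    For what it's worth, S3 can copy objects server-side, so a "commit" need not pull file bytes through the web dyno at all. A minimal sketch using boto3 (which postdates this question; the bucket and key names are hypothetical):

        import boto3

        s3 = boto3.client("s3")

        # The copy happens entirely inside S3; the object bytes never
        # transit the application server.
        s3.copy_object(
            Bucket="my-app-files",
            Key="commits/v2/report.txt",
            CopySource={"Bucket": "my-app-files", "Key": "commits/v1/report.txt"},
        )

        # Delete the old version once the new folder is complete.
        s3.delete_object(Bucket="my-app-files", Key="commits/v1/report.txt")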

    Read the article

  • Using Oracle Database's 11gR2 New ASM Features During ASM Migration

    Oracle Database 11gR2 offers several new Automatic Storage Management features for managing both Oracle database files as well as files stored within its new ASM Clustered File System. This article illustrates how to upgrade an Oracle database quickly and efficiently from version 11gR1 to 11gR2 and then migrate all of its database files so they're resident within ASM-managed storage.
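
    One common shape for the migration step (a sketch only, not necessarily the article's exact sequence; the +DATA disk group name is illustrative) is an RMAN image copy followed by a switch:

        RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
        RMAN> SHUTDOWN IMMEDIATE;
        RMAN> STARTUP MOUNT;
        RMAN> SWITCH DATABASE TO COPY;
        RMAN> RECOVER DATABASE;
        RMAN> ALTER DATABASE OPEN;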

    Read the article

  • Rackspace copy script failure message "java.net.SocketTimeoutException: Read timed out"

    - by user53864
    I am using Rackspace for an Ubuntu cloud server. Every day a script (I guess the script is from Rackspace) executes on the cloud servers, copies the backup file to Rackspace Cloud Files, and sends a mail saying the files were copied; I've scheduled the script on the cloud servers. I don't know much about the script and I guess it is based on Cruise (as I could see build.xml, some jar files ...). Every day the files are copied from the cloud servers to Rackspace, but sometimes, I don't know why, the files are copied yet an error failure message is sent anyway, or sometimes the files are not copied and an error failure message like the one below is sent. Error while backing up on Station1 on 03/03/2011 04:50 AM and reason for error is java.net.SocketTimeoutException: Read timed out Anybody using Rackspace? Anybody have a fix for this?

    Read the article

  • Autoscaling in a modern world… Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don’t find the setup too practical.  Using a local console application to host the Autoscaler and rules files is probably the (IMHO) least likely architecture.  Far more common would be hosting in a service on premise (if you want to have the Autoscaler local) or, most likely, hosting it in an Azure role of its own.  I chose to go the Azure route. First step was to get the rules.xml and the services.xml files into the cloud.  I tend to be a “one step at a time” sort of guy, so running the console application with the rules sitting in an Azure-hosted set of blobs seemed to be the logical first step.  Here are the steps: 1) Create a container in the storage account you wish to use.  Name does not matter, you will get a chance to set the container name (as well as the file names) in the app.config 2) Copy the two files from where you created them to your container.  I used the same files I had locally.  I made the container public to eliminate security issues, but in the final application, a bit of security needs to be applied (one problem at a time).  The content type was set to text/xml.  I found one reference claiming the importance of this step, and it makes sense. 3) Adjust the app.config to set the location of the files.  This will let you set all the storage account and key information needed to reach into the cloud from your console application.  The sections of your app.config will look like this: <rulesStores> <add name="Blob Rules Store" type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" blobContainerName="[ContainerName]" blobName="rules.xml" storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]" monitoringRate="00:00:30" certificateThumbprint="" certificateStoreLocation="LocalMachine" checkCertificateValidity="false" /> </rulesStores> <serviceInformationStores> <add name="Blob Service Information Store" type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" blobContainerName="[ContainerName]" blobName="services.xml" storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]" monitoringRate="00:00:30" certificateThumbprint="" certificateStoreLocation="LocalMachine" checkCertificateValidity="false" /> </serviceInformationStores> Once I had the files up in the sky, I renamed the local copies, just to make myself feel better about the application using the correct set of rules and services.  Deploy the web role to the cloud.  Once it is up and running, start the console application.  You should find the application scales up and down in response to the buttons on the web site.  Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information from diagnostics into storage, and a set of discussions about certs and how they play a role.

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files if it matters.
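
    For reference, the usual shape of that rewrite guard (a sketch of the common pattern, not the poster's actual configuration; the dispatcher path is invented) only hands a request to Django when nothing on disk matches, and a separate !-l test exists specifically for symlinks:

        RewriteEngine On
        # Serve anything that exists as a regular file or symlink directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteRule ^(.*)$ /path/to/django.fcgi/$1 [QSA,L]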

    Read the article

  • Framed Office Web Apps SharePoint 2010

    - by webbes
    Unfortunately the X-Frame-Options header that is added by the Office Web Apps service prevents Internet Explorer from rendering Office documents in an IFrame! To solve this we've created a very simple HttpModule that checks for the header and changes the value from "DENY" to "SAMEORIGIN". This post simply shows the code for such a module that enables previewing of documents with Office Web Apps inside an IFrame.
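
    The excerpt cuts off before the code, but such a module would look roughly like this (a sketch, not the author's original source; the class name is invented, and it assumes registration in web.config and the IIS integrated pipeline):

        using System;
        using System.Web;

        public class XFrameOptionsModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                // Rewrite the header just before response headers are sent.
                context.PreSendRequestHeaders += (sender, e) =>
                {
                    var response = ((HttpApplication)sender).Response;
                    if (string.Equals(response.Headers["X-Frame-Options"], "DENY",
                                      StringComparison.OrdinalIgnoreCase))
                    {
                        response.Headers["X-Frame-Options"] = "SAMEORIGIN";
                    }
                };
            }

            public void Dispose() { }
        }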

    Read the article

  • Recover that Photo, Picture or File You Deleted Accidentally

    - by The Geek
    Have you ever accidentally deleted a photo on your camera, computer, USB drive, or anywhere else? What you might not know is that you can usually restore those pictures—even from your camera’s memory stick. Windows tries to prevent you from making a big mistake by providing the Recycle Bin, where deleted files hang around for a while—but unfortunately it doesn’t work for external USB drives, USB flash drives, memory sticks, or mapped drives. The great news is that this technique also works if you accidentally deleted the photo… from the camera itself. That’s what happened to me, and prompted writing this article. Restore that File or Photo using Recuva The first piece of software that you’ll want to try is called Recuva, and it’s extremely easy to use—just make sure when you are installing it, that you don’t accidentally install that stupid Yahoo! toolbar that nobody wants. Now that you’ve installed the software, and avoided an awful toolbar installation, launch the Recuva wizard and let’s start through the process of recovering those pictures you shouldn’t have deleted. The first step on the wizard page will let you tell Recuva to only search for a specific type of file, which can save a lot of time while searching, and make it easier to find what you are looking for. Next you’ll need to specify where the file was, which will obviously be up to wherever you deleted it from. Since I deleted mine from my camera’s SD card, that’s where I’m looking for it. The next page will ask you whether you want to do a Deep Scan. My recommendation is to not select this for the first scan, because usually the quick scan can find it. You can always go back and run a deep scan a second time. And now, you’ll see all of the pictures deleted from your drive, memory stick, SD card, or wherever you searched. Looks like what happened in Vegas didn’t stay in Vegas after all… If there are a really large number of results, and you know exactly when the file was created or modified, you can switch to the advanced view, where you can sort by the last modified time. This can help speed up the process quite a bit, so you don’t have to look through quite as many files. At this point, you can right-click on any filename, and choose to Recover it, and then save the files elsewhere on your drive. Awesome! Restore that File or Photo using DiskDigger If you don’t have any luck with Recuva, you can always try out DiskDigger, another excellent piece of software. I’ve tested both of these applications very thoroughly, and found that neither of them will always find the same files, so it’s best to have both of them in your toolkit. Note that DiskDigger doesn’t require installation, making it a really great tool to throw on your PC repair Flash drive. Start off by choosing the drive you want to recover from…   Now you can choose whether to do a deep scan, or a really deep scan. Just like with Recuva, you’ll probably want to select the first one first. I’ve also had much better luck with the regular scan, rather than the “dig deeper” one. If you do choose the “dig deeper” one, you’ll be able to select exactly which types of files you are looking for, though again, you should use the regular scan first. Once you’ve come up with the results, you can click on the items on the left-hand side, and see a preview on the right.  You can select one or more files, and choose to restore them. It’s pretty simple! Download DiskDigger from dmitrybrant.com Download Recuva from piriform.com Good luck recovering your deleted files! 
And keep in mind, DiskDigger is totally free donationware from a single, helpful guy… so if his software helps you recover a photo you never thought you’d see again, you might want to think about throwing him a dollar or two.

    Read the article

  • A DirectoryCatalog class for Silverlight MEF (Managed Extensibility Framework)

    - by Dixin
    In the MEF (Managed Extensibility Framework) for .NET, there are useful ComposablePartCatalog implementations in System.ComponentModel.Composition.dll, like: System.ComponentModel.Composition.Hosting.AggregateCatalog System.ComponentModel.Composition.Hosting.AssemblyCatalog System.ComponentModel.Composition.Hosting.DirectoryCatalog System.ComponentModel.Composition.Hosting.TypeCatalog In Silverlight, there is an extra System.ComponentModel.Composition.Hosting.DeploymentCatalog. As a wrapper of AssemblyCatalog, it can load all assemblies in a XAP file on the web server side. Unfortunately, in Silverlight there is no DirectoryCatalog to load a folder. Background There are scenarios where a Silverlight application may need to load all XAP files in a folder on the web server side, for example: If the Silverlight application is extensible and supports plug-ins, there would be a /ClientBin/Plugins/ folder on the web server, and each plug-in would be an individual XAP file in the folder. In this scenario, after the application is loaded and started up, it would like to load all XAP files in the /ClientBin/Plugins/ folder. If the application supports themes, there would be a /ClientBin/Themes/ folder, and each theme would be an individual XAP file too. The application would also need to load all XAP files in /ClientBin/Themes/. It is useful if we have a DirectoryCatalog: DirectoryCatalog catalog = new DirectoryCatalog("/Plugins"); catalog.DownloadCompleted += (sender, e) => { }; catalog.DownloadAsync(); Obviously, the implementation of DirectoryCatalog is easy. It is just a collection of DeploymentCatalog instances. Retrieve file list from a directory Of course, to retrieve the file list from a web folder, the folder’s “Directory Browsing” feature must be enabled: So when the folder is requested, it responds with a list of its files and folders: This is nothing but a simple HTML page: <html> <head> <title>localhost - /Folder/</title> </head> <body> <h1>localhost - /Folder/</h1> <hr> <pre> <a href="/">[To Parent Directory]</a><br> <br> 1/3/2011 7:22 PM 185 <a href="/Folder/File.txt">File.txt</a><br> 1/3/2011 7:22 PM &lt;dir&gt; <a href="/Folder/Folder/">Folder</a><br> </pre> <hr> </body> </html> For the ASP.NET Development Server of Visual Studio, directory browsing is enabled by default: The HTML <Body> is almost the same: <body bgcolor="white"> <h2><i>Directory Listing -- /ClientBin/</i></h2> <hr width="100%" size="1" color="silver"> <pre> <a href="/">[To Parent Directory]</a> Thursday, January 27, 2011 11:51 PM 282,538 <a href="Test.xap">Test.xap</a> Tuesday, January 04, 2011 02:06 AM &lt;dir&gt; <a href="TestFolder/">TestFolder</a> </pre> <hr width="100%" size="1" color="silver"> <b>Version Information:</b>&nbsp;ASP.NET Development Server 10.0.0.0 </body> The only difference is, IIS’s links start with a slash, but here the links do not. Here one way to get the file list is to read the href attributes of the links: [Pure] private IEnumerable<Uri> GetFilesFromDirectory(string html) { Contract.Requires(html != null); Contract.Ensures(Contract.Result<IEnumerable<Uri>>() != null); return new Regex( "<a href=\"(?<uriRelative>[^\"]*)\">[^<]*</a>", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant) .Matches(html) .OfType<Match>() .Where(match => match.Success) .Select(match => match.Groups["uriRelative"].Value) .Where(uriRelative => uriRelative.EndsWith(".xap", StringComparison.Ordinal)) .Select(uriRelative => { Uri baseUri = this.Uri.IsAbsoluteUri ? 
    this.Uri : new Uri(Application.Current.Host.Source, this.Uri); uriRelative = uriRelative.StartsWith("/", StringComparison.Ordinal) ? uriRelative : (baseUri.LocalPath.EndsWith("/", StringComparison.Ordinal) ? baseUri.LocalPath + uriRelative : baseUri.LocalPath + "/" + uriRelative); return new Uri(baseUri, uriRelative); }); } Please notice the folders’ links end with a slash. They are filtered out by the second Where() query. The above method can find files’ URIs from the specified IIS folder, or from the ASP.NET Development Server folder while debugging. To support other formats of file list, a constructor is needed that accepts a customized method: /// <summary> /// Initializes a new instance of the <see cref="T:System.ComponentModel.Composition.Hosting.DirectoryCatalog" /> class with <see cref="T:System.ComponentModel.Composition.Primitives.ComposablePartDefinition" /> objects based on all the XAP files in the specified directory URI. /// </summary> /// <param name="uri"> /// URI to the directory to scan for XAPs to add to the catalog. /// The URI must be absolute, or relative to <see cref="P:System.Windows.Interop.SilverlightHost.Source" />. /// </param> /// <param name="getFilesFromDirectory"> /// The method to find files' URIs in the specified directory. /// </param> public DirectoryCatalog(Uri uri, Func<string, IEnumerable<Uri>> getFilesFromDirectory) { Contract.Requires(uri != null); this._uri = uri; this._getFilesFromDirectory = getFilesFromDirectory ?? this.GetFilesFromDirectory; this._webClient = new Lazy<WebClient>(() => new WebClient()); // Initializes other members. } When the getFilesFromDirectory parameter is null, the above GetFilesFromDirectory() method will be used as the default. Download the directory’s XAP file list Now a public method can be created to start the downloading: /// <summary> /// Begins downloading the XAP files in the directory. /// </summary> public void DownloadAsync() { this.ThrowIfDisposed(); if (Interlocked.CompareExchange(ref this._state, State.DownloadStarted, State.Created) == 0) { this._webClient.Value.OpenReadCompleted += this.HandleOpenReadCompleted; this._webClient.Value.OpenReadAsync(this.Uri, this); } else { this.MutateStateOrThrow(State.DownloadCompleted, State.Initialized); this.OnDownloadCompleted(new AsyncCompletedEventArgs(null, false, this)); } } Here the HandleOpenReadCompleted() method is invoked when the file list HTML is downloaded. Download all XAP files After retrieving all files’ URIs, the next thing becomes even easier. 
    HandleOpenReadCompleted() just uses the built-in DeploymentCatalog to download the XAPs and aggregates them into one AggregateCatalog: private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { using (StreamReader reader = new StreamReader(e.Result)) { string html = reader.ReadToEnd(); IEnumerable<Uri> uris = this._getFilesFromDirectory(html); Contract.Assume(uris != null); IEnumerable<DeploymentCatalog> deploymentCatalogs = uris.Select(uri => new DeploymentCatalog(uri)); deploymentCatalogs.ForEach( deploymentCatalog => { this._aggregateCatalog.Catalogs.Add(deploymentCatalog); deploymentCatalog.DownloadCompleted += this.HandleDownloadCompleted; }); deploymentCatalogs.ForEach(deploymentCatalog => deploymentCatalog.DownloadAsync()); } } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } // Exception handling. } In HandleDownloadCompleted(), if all XAPs are downloaded without exception, the OnDownloadCompleted() callback method will be invoked: private void HandleDownloadCompleted(object sender, AsyncCompletedEventArgs e) { if (Interlocked.Increment(ref this._downloaded) == this._aggregateCatalog.Catalogs.Count) { this.OnDownloadCompleted(e); } } Exception handling This DirectoryCatalog works only if the directory browsing feature is enabled. It is important to inform the caller when the directory cannot be browsed for XAP downloading: private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { // No exception thrown when browsing directory. Downloads the listed XAPs. } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } WebException webException = error as WebException; if (webException != null) { HttpWebResponse webResponse = webException.Response as HttpWebResponse; if (webResponse != null) { // Internally, WebClient uses WebRequest.Create() to create the WebRequest object. Here we do the same thing. WebRequest request = WebRequest.Create(Application.Current.Host.Source); Contract.Assume(request != null); if (request.CreatorInstance == WebRequestCreator.ClientHttp && // Silverlight is in client HTTP handling, all HTTP status codes are supported. webResponse.StatusCode == HttpStatusCode.Forbidden) { // When directory browsing is disabled, the HTTP status code is 403 (forbidden). error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_ClientHttp, webException); } else if (request.CreatorInstance == WebRequestCreator.BrowserHttp && // Silverlight is in browser HTTP handling, only 200 and 404 are supported. webResponse.StatusCode == HttpStatusCode.NotFound) { // When directory browsing is disabled, the HTTP status code is 404 (not found). 
    error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_BrowserHttp, webException); } } } this.OnDownloadCompleted(new AsyncCompletedEventArgs(error, cancelled, this)); } Please notice a Silverlight 3+ application can work either in client HTTP handling or browser HTTP handling. One difference is: in browser HTTP handling, only HTTP status codes 200 (OK) and 404 (standing in for anything not OK, including 500, 403, etc.) are supported; in client HTTP handling, all HTTP status codes are supported. So in the above code, exceptions in the 2 modes are handled differently. Conclusion Here is what the whole DirectoryCatalog looks like: Please click here to download the source code; a simple unit test is included. This is a rough implementation. And, for convenience, some design and coding simply follow the built-in AggregateCatalog and DeploymentCatalog classes. Please feel free to modify the code, and please kindly tell me if any issue is found.

    Read the article

  • Google Analytics setting cookies on static content despite being on entirely separate domain

    - by Donald Jenkins
    I recently decided to comply with the YSlow recommendation that static content is hosted on a cookieless domain. As I already use the root of my domain (donaldjenkins.com) to host my website—on which Google Analytics sets a few cookies—that meant I had to move the CNAME URL for the CDN serving the static files from cdn.donaldjenkins.com to an entirely separate, dedicated domain. I purchased cdn.dj (yes, it's a real Djibouti domain name), hosted the files on the root (which contains nothing else, other than a robots.txt file) and set a CNAME of e.cdn.dj for the CDN. This setup works, but I was rather surprised to find that YSlow was still flagging the static files for not being cookie-free: here's a screenshot: The cdn.dj domain was new, and was never used for anything other than hosting these static files. Running httpfox on the site shows the _utma and _utmz Google Analytics cookies are being set on the static files listed above—despite their being hosted on an entirely separate, dedicated domain. Here's my Google Analytics code: //Google Analytics tracking code var _gaq=[['_setAccount','UA-5245947-5'],['_trackPageview']]; (function(d,t){var g=d.createElement(t),s=d.getElementsByTagName(t)[0]; g.src=('https:'==location.protocol?'//ssl':'//www')+'.google-analytics.com/ga.js'; s.parentNode.insertBefore(g,s)}(document,'script')); // [END] Google Analytics tracking code I'm not obsessing about this issue—I know it's not really affecting server performance—but I'd like to just understand what is causing it not to go away...

    Read the article

  • directory listing on Mac OS X

    - by user27150
    I dumped a bunch of files (music and otherwise) onto my shiny new MacBook, and since I'm more comfortable with Linux than Mac (at this point) I tend to use the terminal. I did an ls -al on the files I'd transferred, and some had an "@" at the end of the permissions string, and some did not. Something like: drwxrwxr--@ 93 user staff etc. drwxrwxr-- 107 user staff etc. The ones without "@" could be seen in Finder and accessed by other programs -- the "@" files and directories were invisible. Can anyone explain what the "@" means, and how to chmod (or whatever) so I can use these files? I assume it is some sort of system flag but I don't know how to unset it. chmod 777 had no effect and I already own the files. Thanks
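
    For reference (an answer sketch, not part of the original question): the trailing @ means the file carries extended attributes, and Finder invisibility typically lives in one of those attributes rather than in the permission bits, which is why chmod has no effect. Inspect first to see what is actually set; the file names below are illustrative:

        ls -l@ ~/Music                           # shows each file's extended attributes
        xattr -l somefile                        # lists attribute names and values
        xattr -d com.apple.FinderInfo somefile   # drops the attribute that can carry the invisible flag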

    Read the article

  • determining if .htaccess is working

    - by Toc
    Following some guides on the web, I have created the following .htaccess for my WordPress installation: # protect the htaccess file <files .htaccess> order allow,deny deny from all </files> # protect wpconfig.php <files wp-config.php> order allow,deny deny from all </files> plus chmod 600 wp-config.php and chmod 644 .htaccess. What is the simplest way to test whether it is working properly? If needed, I can create some other files to verify it. I just want to be sure.
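
    One quick check (a sketch; substitute your own domain) is to request the protected files directly and confirm the server answers 403:

        curl -I http://example.com/wp-config.php   # expect HTTP/1.1 403 Forbidden
        curl -I http://example.com/.htaccess       # expect HTTP/1.1 403 Forbidden
        curl -I http://example.com/                # expect HTTP/1.1 200 OK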

    Read the article

  • SQL SERVER – Database in RESTORING State for Long Time

    - by Pinal Dave
    A very interesting question I received the other day. “Our database has been in the restoring state for a long time. We have already restored all the necessary files there. After restoring the files we are expecting that the database will be in operational mode, however, it is continuously in the restoring mode. Any suggestion?” The question is very common. I sent the user follow-up emails to understand what was actually going on. I realized that after restoring their bak files and log files, their database was in the restoring state because they had not restored the latest log file with the RECOVERY option. As they had completed the whole database restore sequence (bak and logs in order), what they really needed was to recover the database from the norecovery state. A user can restore log files as long as the database is in norecovery mode. Once the database is recovered, it is operational and normal database activity can continue, but no further logs can be restored: the log chain after the database is recovered is meaningless. This is the reason why the database has to be in the norecovery state while it is being restored. There are three different ways to recover the database. 1) Recover the database manually with the following command. RESTORE DATABASE database_name WITH RECOVERY 2) Recover the database with the last log file. RESTORE LOG database_name FROM backup_device WITH RECOVERY 3) Recover the database when the bak is restored. RESTORE DATABASE database_name FROM backup_device WITH RECOVERY To understand how the backup restore timeline works, read Backup Timeline and Understanding of Database Restore Process in Full Recovery Model. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SFTP: file symlinks in a jailed (chrooted) directory

    - by Kevin Duke
    I'm trying to set up sftp so that a few trusted people can access/edit/create some files. I have jailed a user into their home directory (/home/name) but have run into a problem. I want them to also be able to access other parts of the VPS, because it is also a game server, webhost, etc., and I want them to have full control of files outside their jailed directory. I tried making a symlink (ln -s) to the desired directory but it does not work, as expected. I tried cp -rl on the files I wanted to give access to, and it worked -- they can edit the files in their directory and the changes appear in the copies stored outside the jail. BUT they cannot create new files (they can, but the new files don't appear outside the jail). I know I'm probably not doing this the "right way" but what can I do to do what I want?
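
    The usual answer here (a sketch, not from the original thread; the paths are illustrative) is a bind mount rather than symlinks or hard-linked copies, since a bind mount exposes the live outside directory inside the chroot, new files included:

        mount --bind /srv/gameserver /home/name/gameserver

        # To make it persistent across reboots, add a line like this to /etc/fstab:
        # /srv/gameserver  /home/name/gameserver  none  bind  0  0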

    Read the article

  • Crypto-Analysis of keylogger logs and config file. Possible?

    - by lost.
    Is there any way encryption on an unidentified file can be broken (files in question: the config file and log files from the Ardamax keylogger)? These files date back all the way to 2008. I searched everywhere: nothing on Slashdot, nothing on Google. Ardamax Keyviewer? Should I just write to Ardamax? I am at a loss of what to do. I feel compromised. Has anyone managed to decrypt files with cryptanalysis? More information -- There are log files in the folder and a configuration file, "akv.cfg". Is it possible to decrypt the files and maybe get the attacker's email address used to receive the keylogger logs? I've checked ardamax.com. They have a built-in log viewer. But it's unavailable for download. If superuser isn't the proper place to ask, do you know where I might get help?

    Read the article

  • How to backup and restore Horde manually

    - by Thomas
    How can I back up and restore my Horde SQL database? The database is located in /var/lib/mysql/horde. There are many *.frm files and one db.opt. My server is broken, so I want to reinstall it. Can I copy these files to a USB stick, then restore the entire server without Horde, and then simply copy the files back into the same directory? Or do I have to do something like "mysqldump" to delete and reinstall the database? Thank you, Thomas
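
    A logical dump is generally safer than copying raw .frm files between installs (a sketch; it assumes MySQL root credentials and that the database is simply named horde):

        # On the old server (or before wiping it):
        mysqldump -u root -p horde > horde-backup.sql

        # On the rebuilt server:
        mysql -u root -p -e "CREATE DATABASE horde"
        mysql -u root -p horde < horde-backup.sql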

    Read the article

  • how to restrict access to all .txt files in apache except robots.txt?

    - by user3162764
    I am configuring apache2 on Debian and would like to allow only robots.txt to be accessed by search engines, while other .txt files are restricted. I tried to add the following to .htaccess but no luck: <Files robots.txt> Order Allow,Deny Allow from All </Files> <Files *.txt> Order Deny,Allow Deny from All </Files> Can anyone help or give me some hints? I am a newcomer to apache, thanks a lot.
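
    One likely fix (an untested sketch): Apache merges <Files> sections in the order they appear, with later sections overriding earlier ones, so the wildcard deny should come first and the robots.txt exception second:

        <Files "*.txt">
            Order Allow,Deny
            Deny from all
        </Files>
        <Files "robots.txt">
            Order Deny,Allow
            Allow from all
        </Files>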

    Read the article

  • IIS 7.5 401.3 Access Denied

    - by Jeffrey
    I am having this weird issue with IIS 7.5 on Windows 2008 R2 x64. I created a site in IIS and manually created a test file index.html, and everything worked. When I try to do a deployment, I copy all the files from my local PC to the IIS server, try to access index.html (this is the properly deployed file) and get a 401.3 access denied error. I then manually recreate index.html, copy the content into this newly created file, and the page is accessible again... I just can't figure this out. So the issue is that IIS 7.5 can't serve files that have been copied from other PCs. I tried to reset/apply permission settings on the copied folders/files but nothing has worked. Please help. Thanks! By the way, the files that I copied are just some html cutups, i.e. generic html, css and image files, nothing special.
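
    One thing worth trying (a sketch; the site path is illustrative, and it assumes the copied files arrived with broken ACLs rather than inheriting the web root's) is to force the copied tree to re-inherit permissions and regrant read access to the IIS worker group:

        icacls C:\inetpub\wwwroot\mysite /reset /T
        icacls C:\inetpub\wwwroot\mysite /grant "IIS_IUSRS:(OI)(CI)RX" /T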

    Read the article

  • How to share media stored in an attached drive using Windows Media Player?

    - by David
    We've got a Windows 7 PC with a Windows Media Player music library that includes both files on the internal hard drive and files on a USB-attached hard drive. When we browse this library from another Windows Media Player (on another Windows 7 machine) we see only the files residing on the library host's internal drive. The files residing on the attached drive don't show up at all, yet on the host they appear undistinguished within the library. Is there a configuration change we can make to cause the attached files to be shared properly? We've turned on read-sharing for "Everyone" on the USB drive, but that hasn't helped. Also it might be worth noting that this issue behaves the same way for us if the client machine is a Playstation 3.

    Read the article

  • Ajax Control Toolkit May 2012 Release

    - by Stephen.Walther
    I’m happy to announce the May 2012 release of the Ajax Control Toolkit. This newest release of the Ajax Control Toolkit includes a new file upload control which displays file upload progress. We’ve also added several significant enhancements to the existing HtmlEditorExtender control such as support for uploading images and Source View. You can download and start using the newest version of the Ajax Control Toolkit by entering the following command in the Library Package Manager console in Visual Studio: Install-Package AjaxControlToolkit Alternatively, you can download the latest version of the Ajax Control Toolkit from CodePlex: http://AjaxControlToolkit.CodePlex.com The New Ajax File Upload Control The most requested new feature for the Ajax Control Toolkit (according to the CodePlex Issue Tracker) has been support for file upload with progress. We worked hard over the last few months to create an entirely new file upload control which displays upload progress. Here is a sample which illustrates how you can use the new AjaxFileUpload control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="01_FileUpload.aspx.cs" Inherits="WebApplication1._01_FileUpload" %> <html> <head runat="server"> <title>Simple File Upload</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager runat="server" /> <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" runat="server" /> </div> </form> </body> </html> The page above includes a ToolkitScriptManager control. This control is required to use any of the controls in the Ajax Control Toolkit because this control is responsible for loading all of the scripts required by a control. The page also contains an AjaxFileUpload control. The UploadComplete event is handled in the code-behind for the page: namespace WebApplication1 { public partial class _01_FileUpload : System.Web.UI.Page { protected void ajaxUpload1_OnUploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e) { // Generate file path string filePath = "~/Images/" + e.FileName; // Save upload file to the file system ajaxUpload1.SaveAs(MapPath(filePath)); } } } The UploadComplete handler saves each uploaded file by calling the AjaxFileUpload control’s SaveAs() method with a full file path. Here’s a video which illustrates the process of uploading a file: Warning: in order to write to the Images folder on a production IIS server, you need Write permissions on the Images folder. You need to provide permissions for the IIS Application Pool account to write to the Images folder. To learn more, see: http://learn.iis.net/page.aspx/624/application-pool-identities/ Showing File Upload Progress The new AjaxFileUpload control takes advantage of HTML5 upload progress events (described in the XMLHttpRequest Level 2 standard). This standard is supported by Firefox 8+, Chrome 16+, Safari 5+, and Internet Explorer 10+. In other words, the standard is supported by the most recent versions of all browsers except for Internet Explorer which will support the standard with the release of Internet Explorer 10. The AjaxFileUpload control works with all browsers, even browsers which do not support the new XMLHttpRequest Level 2 standard. If you use the AjaxFileUpload control with a downlevel browser – such as Internet Explorer 9 — then you get a simple throbber image during a file upload instead of a progress indicator. 
Here’s how you specify a throbber image when declaring the AjaxFileUpload control: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="02_FileUpload.aspx.cs" Inherits="WebApplication1._02_FileUpload" %> <html> <head id="Head1" runat="server"> <title>File Upload with Throbber</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="MyThrobber" runat="server" /> <asp:Image id="MyThrobber" ImageUrl="ajax-loader.gif" Style="display:None" runat="server" /> </div> </form> </body> </html> Notice that the page above includes an image with the Id MyThrobber. This image is displayed while files are being uploaded. I use the website http://AjaxLoad.info to generate animated busy wait images. Drag-And-Drop File Upload If you are using an uplevel browser then you can drag-and-drop the files which you want to upload onto the AjaxFileUpload control. The following video illustrates how drag-and-drop works: Remember that drag-and-drop will not work on Internet Explorer 9 or older. Accepting Multiple Files By default, the AjaxFileUpload control enables you to upload multiple files at a time. When you open the file dialog, use the CTRL or SHIFT key to select multiple files. If you want to restrict the number of files that can be uploaded then use the MaximumNumberOfFiles property like this: <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="throbber" MaximumNumberOfFiles="1" runat="server" /> In the code above, the maximum number of files which can be uploaded is restricted to a single file. Restricting Uploaded File Types You might want to allow only certain types of files to be uploaded. For example, you might want to accept only image uploads. In that case, you can use the AllowedFileTypes property to provide a list of allowed file types like this: <ajaxToolkit:AjaxFileUpload id="ajaxUpload1" OnUploadComplete="ajaxUpload1_OnUploadComplete" ThrobberID="throbber" AllowedFileTypes="jpg,jpeg,gif,png" runat="server" /> The code above prevents any files except jpeg, gif, and png files from being uploaded. Enhancements to the HTMLEditorExtender Over the past months, we spent a considerable amount of time making bug fixes and feature enhancements to the existing HtmlEditorExtender control. I want to focus on two of the most significant enhancements that we made to the control: support for Source View and support for uploading images. Adding Source View Support to the HtmlEditorExtender When you click the Source View tab, the HtmlEditorExtender changes modes and displays the HTML source of the contents contained in the TextBox being extended. You can use Source View to make fine-grained changes to HTML before submitting the HTML to the server. For reasons of backwards compatibility, the Source View tab is disabled by default. 
To enable Source View, you need to declare your HtmlEditorExtender with the DisplaySourceTab property like this: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="05_SourceView.aspx.cs" Inherits="WebApplication1._05_SourceView" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head id="Head1" runat="server"> <title>HtmlEditorExtender with Source View</title> </head> <body> <form id="form1" runat="server"> <div> <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" /> <asp:TextBox id="txtComments" TextMode="MultiLine" Columns="60" Rows="10" Runat="server" /> <ajaxToolkit:HtmlEditorExtender id="HEE1" TargetControlID="txtComments" DisplaySourceTab="true" runat="server" /> </div> </form> </body> </html> The page above includes a ToolkitScriptManager, TextBox, and HtmlEditorExtender control. The HtmlEditorExtender extends the TextBox so that it supports rich text editing. Notice that the HtmlEditorExtender includes a DisplaySourceTab property. This property causes a button to appear at the bottom of the HtmlEditorExtender which enables you to switch to Source View: Note: when using the HtmlEditorExtender, we recommend that you set the DOCTYPE for the document. Otherwise, you can encounter weird formatting issues. Accepting Image Uploads We also enhanced the HtmlEditorExtender to support image uploads (another very highly requested feature at CodePlex). The following video illustrates the experience of adding an image to the editor: Once again, for backwards compatibility reasons, support for image uploads is disabled by default. Here’s how you can declare the HtmlEditorExtender so that it supports image uploads: <ajaxToolkit:HtmlEditorExtender id="MyHtmlEditorExtender" TargetControlID="txtComments" OnImageUploadComplete="MyHtmlEditorExtender_ImageUploadComplete" DisplaySourceTab="true" runat="server" > <Toolbar> <ajaxToolkit:Bold /> <ajaxToolkit:Italic /> <ajaxToolkit:Underline /> <ajaxToolkit:InsertImage /> </Toolbar> </ajaxToolkit:HtmlEditorExtender> There are two things that you should notice about the code above. First, notice that an InsertImage toolbar button is added to the HtmlEditorExtender toolbar. This HtmlEditorExtender will render toolbar buttons for bold, italic, underline, and insert image. Second, notice that the HtmlEditorExtender includes an event handler for the ImageUploadComplete event. The code for this event handler is below: using System.Web.UI; using AjaxControlToolkit; namespace WebApplication1 { public partial class _06_ImageUpload : System.Web.UI.Page { protected void MyHtmlEditorExtender_ImageUploadComplete(object sender, AjaxFileUploadEventArgs e) { // Generate file path string filePath = "~/Images/" + e.FileName; // Save uploaded file to the file system var ajaxFileUpload = (AjaxFileUpload)sender; ajaxFileUpload.SaveAs(MapPath(filePath)); // Update client with saved image path e.PostedUrl = Page.ResolveUrl(filePath); } } } Within the ImageUploadComplete event handler, you need to do two things: 1) Save the uploaded image (for example, to the file system, a database, or Azure storage) 2) Provide the URL to the saved image so the image can be displayed within the HtmlEditorExtender In the code above, the uploaded image is saved to the ~/Images folder. The path of the saved image is returned to the client by setting the AjaxFileUploadEventArgs PostedUrl property. Not surprisingly, under the covers, the HtmlEditorExtender uses the AjaxFileUpload. 
You can get a direct reference to the AjaxFileUpload control used by an HtmlEditorExtender by using the following code: void Page_Load() { var ajaxFileUpload = MyHtmlEditorExtender.AjaxFileUpload; ajaxFileUpload.AllowedFileTypes = "jpg,jpeg"; } The code above illustrates how you can restrict the types of images that can be uploaded to the HtmlEditorExtender. This code prevents anything but jpeg images from being uploaded. Summary This was the most difficult release of the Ajax Control Toolkit to date. We iterated through several designs for the AjaxFileUpload control – with each iteration, the goal was to make the AjaxFileUpload control easier for developers to use. My hope is that we were able to create a control which Web Forms developers will find very intuitive. I want to thank the developers on the Superexpert.com team for their hard work on this release.

    Read the article

  • Apache2 - mod_expire and mod_rewrite not working in httpd.conf - serving content from tomcat

    - by Ankit Agrawal
    I am using an apache2 server running on Debian which forwards all the http requests to tomcat installed on the same machine. I have two files under my /etc/apache2/ folder: apache2.conf and httpd.conf. I modified the httpd.conf file to look like the following. # forward all http request on port 80 to tomcat ProxyPass / ajp://127.0.0.1:8009/ ProxyPassReverse / ajp://127.0.0.1:8009/ # gzip text content AddOutputFilterByType DEFLATE text/plain AddOutputFilterByType DEFLATE text/html AddOutputFilterByType DEFLATE text/xml AddOutputFilterByType DEFLATE text/css AddOutputFilterByType DEFLATE text/javascript AddOutputFilterByType DEFLATE application/xml AddOutputFilterByType DEFLATE application/xhtml+xml AddOutputFilterByType DEFLATE application/rss+xml AddOutputFilterByType DEFLATE application/javascript AddOutputFilterByType DEFLATE application/x-javascript DeflateCompressionLevel 9 BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \bMSIE !no-gzip !gzip-only-text/html # Turn on Expires and mark all static content to expire in a week # unset last modified and ETag ExpiresActive On ExpiresDefault A0 <FilesMatch "\.(jpg|jpeg|png|gif|js|css|ico)$"> ExpiresDefault A604800 Header unset Last-Modified Header unset ETag FileETag None Header append Cache-Control "max-age=604800, public" </FilesMatch> RewriteEngine On # rewrite all www.example.com/content/XXX-01.js and YYY-01.css files to XXX.js and YYY.css RewriteRule ^content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4 # remove all query parameters from URL after we are done with it RewriteCond %{THE_REQUEST} ^GET\ /.*\;.*\ HTTP/ RewriteCond %{QUERY_STRING} !^$ RewriteRule .* http://example.com%{REQUEST_URI}? [R=301,L] # rewrite all www.example.com to example.com RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC] RewriteRule ^(.*)$ http://example.com/$1 [R=301,L] I want to achieve the following. forward all traffic to tomcat GZIP all the text content. Put a 1 week expiry header on all static files and unset the ETag/Last-Modified headers. rewrite all js and css files to a certain format. remove all the query parameters from URLs forward all www.example.com to example.com The problem is only 1 and 2 are working. I tried a lot with many combinations but the expires and rewrite rules (3-6) do not work at all. I also tried moving these rules to apache2.conf and .htaccess files but it didn't work either. It does not give any error but these rules are simply ignored. The expires and rewrite modules are ENABLED. Please let me know what I should do to fix this. 1. Do I need to add something else in the httpd.conf file (like Options +FollowSymLinks) or something else? 2. Do I need to add something in the apache2.conf file? 3. Do I need to move these rules to a .htaccess file? If yes, what should I write in that file and where should I keep it? In the /etc/apache2/ folder or the /var/www/ folder? 4. Any other info to make this work? Thanks, Ankit
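
    One thing to check (an untested sketch against this configuration): ProxyPass / hands every request, static files included, to Tomcat, so Apache never serves those files locally and the local expires/rewrite rules may never apply to them. The usual pattern is to exclude the static path from the proxy; exclusions must appear before the catch-all rule:

        # Let Apache serve /content itself; everything else still goes to Tomcat.
        ProxyPass /content !
        ProxyPass / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/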

    Read the article
