Search Results

Search found 75614 results on 3025 pages for 'file location'.


  • Source code not matching uploaded HTML file

    - by benhowdle89
    I'm not sure if this is the right place to ask, but I'm having a hugely frustrating problem with Coda and my website (I'm not sure which one is causing the issue). I'm using Coda to make changes to my website; Coda uses built-in FTP to save changes to your web page, so when you hit Save it uploads the new file. I've been using Coda for months and never had a problem until now. I am making changes in the HTML of my index.php and hitting Save; it successfully uploads the file, but no changes are reflected in the source code in ANY browser. I even logged into cPanel on my website (i.e. www.example.com:2082) and looked at the file - the changes have been made successfully. But the source code of the actual web page in the browser shows no changes. I have tried adding which made no difference. Interestingly, when I make changes to style.css the changes are instant. I have emptied the cache on all of my browsers but I'm still having the issue. Does this sound like a Coda problem, or has anyone heard of such a thing?

    Read the article

  • Kubuntu 11.10 very slow during file I/O

    - by dko
    After updating to Kubuntu 11.10, my file I/O performance has steadily gotten worse. It is to the point where I'm getting 1 MB/s read/write speeds on the drive. If I download something, the whole machine becomes unresponsive, at times for up to 30 seconds; this usually causes a timeout and the download stops. Even while extracting archive files the computer is unusable, on top of the terrible read/write speeds. It isn't the drive, as I have Windows installed as well and have no issues with the drive when I boot into it. I did not have this issue on Kubuntu 11.04 and am thinking of downgrading; however, I'd much rather help the Ubuntu community by working through these issues. I'm starting to suspect that the new Linux kernel just isn't handling file I/O well. During file I/O my system usage does pick up, but it is not at 100% CPU. My system is as follows: Samsung 2 TB hard disk drive, AMD Phenom II X6 1055, 4 GB RAM (only one in use according to the system monitor), ATI 5850 HD.

    Read the article

  • File Activation in Windows RT

    - by jdanforth
    The code sample for file activation on MSDN is lacking some code, so a simple way to pass the clicked file to your MainPage could be:

        protected override void OnFileActivated(FileActivatedEventArgs args)
        {
            var page = new Frame();
            page.Navigate(typeof(MainPage));
            Window.Current.Content = page;

            var p = page.Content as MainPage;
            if (p != null) p.FileEvent = args;
            Window.Current.Activate();
        }

    And in MainPage:

        public MainPage()
        {
            InitializeComponent();
            Loaded += MainPageLoaded;
        }

        void MainPageLoaded(object sender, RoutedEventArgs e)
        {
            if (FileEvent != null && FileEvent.Files.Count > 0)
            {
                //… do something with the file
            }
        }

    Read the article

  • Batch program not running correctly in Windows 7

    - by Jennifer Heidelberger
    I am currently trying to figure out how to get a batch file to run correctly in Windows 7. I have looked on the Internet and have not had much success finding useful information on the issue I am encountering. BATCH FILE: the batch file opens a window that allows students to test against a TDSM server created by ETS eCBT (the test is a CLEP exam). The batch file is supposed to open the workstation for students to use; it looks like it loads, but the Welcome/Login screen never appears as it should. WSK_LOAD.BAT:

        @echo off
        rem--------------------------------------------------------------------
        rem !!! DO NOT REMOVE OR MODIFY THIS FILE !!!
        rem !!! THIS FILE IS USED BY THE eCBT SYSTEM !!!
        rem--------------------------------------------------------------------
        SET ECBT_DEFAULT_SERVER_NAME=WR-TESTING1
        SET ECBT_BATCH_HOME=C:\ETSBATCH
        SET ECBT_HOME=\\WR-TESTING1\tdms
        set ECBT_LOGFILE=%ECBT_BATCH_HOME%\wsk.log
        SET ECBT_CLIENT_VERSION=4.0
        rem----------------------------------------------------------------------
        if exist %ECBT_HOME%\client\bin\wks.bat goto avail
        echo Cannot access %ECBT_HOME%!
        echo Attempting to open the share …
        echo If you see the share window, please close it to proceed …
        rem------------------------------------------------------------------------
        :avail
        %ECBT_HOME%\client\bin\wks.bat

    I have tried everything I can think of: Run as Administrator, moving the files to run from the hard drive, making sure all files and folders associated with the program were shared with the users and computers, having Windows 7 run its compatibility check (which says it does not contain an .exe file to run), and re-writing the file. I know it is connecting to the TDMS server, as I can see the workstation on the server. The only thing it does not do is produce the login boxes needed to log in to the testing server; the window itself opens as it should. Any and all help is appreciated, Jennifer

    Read the article

  • Response.TransmitFile problem

    - by geoff
    I have the following code delivering a file to users when they click a download link. For security purposes I can't just link directly to the file, so this was set up to decode the URL and transmit the file. It has been working fine for a while, but recently I started having problems where the file starts downloading but there's no indication of how large it is. Because of this, the download doesn't stop when it should. The file is about 99 MB, but when I download it the browser just keeps downloading well beyond 100 MB. I don't know what it's downloading, but if I don't cancel it, it doesn't stop. So my question is: is there either an alternative to TransmitFile, or a way to make sure the size of the file is also sent so that the download stops at the right point? I don't want to use WriteFile because I don't want to load the entire file into memory, since it's so large. Thanks. Here is the code:

        string filename = Path.GetFileName(url);
        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.TransmitFile(context.Server.MapPath(url));
        context.Response.Flush();

    Read the article

  • Unit Testing XML independent of physical XML file

    - by RAbraham
    Hi, my question is: in JUnit, how do I set up XML data for my system under test (SUT) without making the SUT read from an XML file physically stored on the file system? Background: I am given an XML file which contains rules for the creation of an invoice. My job is to convert these rules from XML to Java objects. E.g., the tag below in my XML file indicates that after a period of 30 days the transaction cannot be invoiced:

        <ExpirationDay>30</ExpirationDay>

    This converts to a Java class, say ExpirationDateInvoicingRule. I have a class InvoiceConfiguration which should take the XML file and create the *InvoicingRule objects. I am thinking of using StAX to parse the XML document within InvoiceConfiguration. Problem: I want to unit test InvoiceConfiguration, but I don't want InvoiceConfiguration to read from an XML file physically on the file system. I want my unit test to be independent of any physically stored XML file; I want to create an XML representation in memory. But a StAX parser only takes a FileReader (or I can play with the File object).
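    A minimal sketch of the in-memory approach, assuming InvoiceConfiguration can be given a Reader (or an XMLStreamReader) rather than a file path; the XML content and rule names below are only illustrative:

        import java.io.StringReader;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class InvoiceConfigurationTest {
            public static void main(String[] args) throws Exception {
                // XML kept entirely in memory - no file on disk is involved
                String xml = "<InvoicingRules><ExpirationDay>30</ExpirationDay></InvoicingRules>";

                // StAX accepts any Reader, not just a FileReader
                XMLStreamReader reader = XMLInputFactory.newInstance()
                        .createXMLStreamReader(new StringReader(xml));

                while (reader.hasNext()) {
                    if (reader.next() == XMLStreamConstants.START_ELEMENT
                            && "ExpirationDay".equals(reader.getLocalName())) {
                        // getElementText() returns the text content of the current element
                        System.out.println("ExpirationDay rule value: " + reader.getElementText());
                    }
                }
                reader.close();
            }
        }

    If InvoiceConfiguration currently takes a file path, one option is to add an overload or factory method that accepts a Reader, so production code passes a FileReader while the test passes a StringReader.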

    Read the article

  • Eclipse execute a file in jar

    - by hara
    Hi, I'm developing an Eclipse product. In my plugin I have added an executable file (.exe) which I need to execute. When I launch the product from Eclipse, the file is referenced using its path on my file system and everything works. When I export the product, the plugin is packaged in a jar file which contains the executable too, and the file is then referenced by a path like: $path_to_the_plugin_jar!/$path_to_exe_file_inside_the_jar. Using this path to launch a new Process throws an exception. How can I refer to a file inside a jar? How can I execute the file? Can I extract the file when I export the product with Eclipse? Thanks
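    A common workaround is to copy the resource out of the jar to a temporary file and launch that copy; a rough sketch, where the resource path and executable name are placeholders for whatever the plugin actually bundles:

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;

        public class ExeLauncher {
            public static void main(String[] args) throws Exception {
                // Load the executable as a classpath resource from inside the jar
                try (InputStream in = ExeLauncher.class.getResourceAsStream("/tools/mytool.exe")) {
                    if (in == null) {
                        throw new IllegalStateException("Resource not found inside the jar");
                    }
                    // Copy it to a real file on disk so the OS can execute it
                    Path tmp = Files.createTempFile("mytool", ".exe");
                    Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                    tmp.toFile().setExecutable(true);

                    // Launch the extracted copy
                    Process p = new ProcessBuilder(tmp.toString()).inheritIO().start();
                    p.waitFor();
                }
            }
        }

    In an Eclipse plugin specifically, FileLocator.toFileURL() on a bundle entry is another way to obtain a real file path, but the copy-to-temp approach above works anywhere on the classpath.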

    Read the article

  • What is the root directory, or how do I set the directory, in DotNetZip?

    - by Chris
    Where does DotNetZip get its root directory for saving? None of the Save examples show the directory. My goal is to recurse a folder and its subfolders; in each folder I want to zip all the files into one zip and delete the source files.

        private void CopyFolder(string srcPath, string dstPath)
        {
            if (!Directory.Exists(dstPath)) Directory.CreateDirectory(dstPath);
            string[] files = Directory.GetFiles(srcPath);
            string msg;
            string zipFileName;
            using (ZipFile z = new ZipFile(Path.Combine(srcPath, String.Format("Archive{0:yyyyMMdd}.zip", DateTime.Now))))
            {
                z.ReadProgress += new EventHandler<ReadProgressEventArgs>(z_ReadProgress);
                foreach (string file in files)
                {
                    FileInfo fi = new FileInfo(file);
                    AddLog(String.Format("Adding {0}", file));
                    z.AddFile(file);
                }
                //z.Save(Path.Combine(srcPath, String.Format("Archive{0:yyyyMMdd}.zip", DateTime.Now)));
                z.Save();
                if (deleteSource)
                {
                    foreach (string file in files)
                    {
                        File.Delete(file);
                    }
                }
                zipFileName = z.Name;
            }
            if (!compressOnly)
                File.Copy(Path.Combine(srcPath, zipFileName), Path.Combine(dstPath, Path.GetFileName(zipFileName)));
            string[] folders = Directory.GetDirectories(sourcePath);
            foreach (string folder in folders)
            {
                string name = Path.GetFileName(folder);
                string dest = Path.Combine(dstPath, name);
                Console.WriteLine(ln);
                log.Add(ln);
                msg = String.Format("{3}{4}Start Copy: {0}{4}Directory: {1}{4}To: {2}", DateTime.Now.ToString("G"), name, dest, ln, Environment.NewLine);
                AddLog(msg);
                if (recurseFolders) CopyFolder(folder, dest);
                msg = String.Format("Copied Directory: {0}{4}To: {1}\nAt: {2}{3}", folder, dest, DateTime.Now.ToString("G"), Environment.NewLine);
                AddLog(msg);
            }
        }

    Read the article

  • Why can't I upload three files with FileUpload?

    - by Valter Henrique
    Hi everyone, I'm trying to upload three images to my server. It works, but it always uploads only the last file selected by the user, not all three. Here's my code:

        protected void doPost(HttpServletRequest request, HttpServletResponse response) {
            boolean multipart = ServletFileUpload.isMultipartContent(request);
            if (multipart) {
                DiskFileItemFactory fileItemFactory = new DiskFileItemFactory();
                fileItemFactory.setSizeThreshold(5 * 1024 * 1024); // 5 MB
                fileItemFactory.setRepository(tmpDir);
                ServletFileUpload uploadHandler = new ServletFileUpload(fileItemFactory);
                try {
                    List items = uploadHandler.parseRequest(request);
                    Iterator itr = items.iterator();
                    while (itr.hasNext()) {
                        FileItem item = (FileItem) itr.next();
                        File file = new File(dir, generateNewName());
                        item.write(file);
                    }
                } catch (FileUploadException ex) {
                } catch (Exception ex) {
                }
            }
        }

    UPDATE:

        <html>
        <head>
        <title>Upload</title>
        </head>
        <body>
        <form action="Upload" method="post" enctype="multipart/form-data">
        <input type="file" name="file1" /> <br />
        <input type="file" name="file2" /> <br />
        <input type="file" name="file3" /> <br />
        <input type="submit" value="Enviar" />
        </form>
        </body>
        </html>

    Best regards, Valter Henrique.
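    One thing worth ruling out, purely a guess since generateNewName() isn't shown: if it can return the same value for several items in one request, each write overwrites the previous file and only the last upload survives. A sketch of a drop-in helper for the same loop (assumes the imports already used above, and uses a collision-resistant name per item while skipping plain form fields):

        // Hypothetical replacement for the body of the upload loop above
        private void saveUploads(List<FileItem> items, File dir) throws Exception {
            for (FileItem item : items) {
                if (item.isFormField()) {
                    continue; // skip ordinary form fields, only handle file parts
                }
                // Derive a unique target name per item: a UUID plus the original extension
                String original = item.getName();
                String ext = (original != null && original.contains("."))
                        ? original.substring(original.lastIndexOf('.')) : "";
                File target = new File(dir, java.util.UUID.randomUUID().toString() + ext);
                item.write(target);
            }
        }

    If all three files show up with unique names, the original problem was name collisions rather than the upload itself.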

    Read the article

  • Read entire file in Scala?

    - by Brendan OConnor
    What's a simple and canonical way to read an entire file into memory in Scala? (Ideally, with control over character encoding.) The best I can come up with is:

        scala.io.Source.fromPath("file.txt").getLines.reduceLeft(_+_)

    or am I supposed to use one of Java's god-awful idioms, the best of which (without using an external library) seems to be:

        import java.util.Scanner
        import java.io.File
        new Scanner(new File("file.txt")).useDelimiter("\\Z").next()

    From reading mailing-list discussions, it's not clear to me that scala.io.Source is even supposed to be the canonical I/O library. I don't understand what its intended purpose is, exactly. ... I'd like something dead-simple and easy to remember. For example, in these languages it's very hard to forget the idiom ...

        Ruby:   open("file.txt").read
        Ruby:   File.read("file.txt")
        Python: open("file.txt").read()

    Read the article

  • C# CF: file encryption/decryption on the fly

    - by nuttynibbles
    Hi, I've seen many articles on encrypting/decrypting a file, and typically one button is used to choose the file to encrypt and another button to decrypt it. I've also seen applications like TrueCrypt, among others, which do file encryption on the fly, transparently: when an encrypted file is opened it is automatically decrypted and played/opened, and when the file is closed it is automatically encrypted again. Some have said that the only way to detect a file being opened is through a file system filter. Is there another way to do this in the C# Compact Framework?

    Read the article

  • [iPhone] Objective-C: how to write objects that do not conform to NSCoding to a file

    - by Noah
    Hi, I am using an Objective-C class, a subclass of NSObject, that cannot be modified. I have an instance of this class that I wish to write to a file, so it can be retrieved and reinstated later. The object does not conform to NSCoding. To sum up, I need to save an instance of a class to a file that can be retrieved later, without using any of the NSCoding methods such as NSKeyedArchiver and encodeWithCoder:. Using them returns this: NSInvalidArgumentException ...encodeWithCoder:] unrecognized selector sent to instance... Is there any other way I can store this object for later use? Thank you

    Read the article

  • NSSavePanel selecting part of file name

    - by regulus6633
    How do I set which part of the file name is selected in an NSSavePanel? I only want the file name selected, not the file extension. Here's what I noticed: if I call setAllowedFileTypes: on the save panel, then only the file name is selected and not the extension. However, if I don't set the allowed file types, the extension is selected along with the file name. I don't want to use setAllowedFileTypes:, but I still want to control the selection so that the file extension is not selected. Can that be done?

    Read the article

  • cutting a text file into multiple parts in emacs

    - by Gaurish Telang
    Hi, I am using the GNU Emacs 23 editor. I have a huge text file containing about 10,000 lines which I want to chop into multiple files. Using the mouse to select the required text and paste it into another file is really painful, and prone to errors too. If I want to divide the text file by line numbers into, say, 4 files, where the first file is lines 1-2500, the second file lines 2500-5000, the third file lines 5000-7500, and the fourth file lines 7500-10000, how do I do this? At the very least, is there an efficient way to copy large regions of the file just by specifying line numbers?
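    For what it's worth, the same split can also be scripted outside the editor. A rough sketch in Java; the input file name is a placeholder and the boundaries from the question are adjusted so the ranges don't overlap:

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        public class SplitByLines {
            public static void main(String[] args) throws IOException {
                List<String> lines = Files.readAllLines(Paths.get("big.txt"));
                int[][] ranges = { {1, 2500}, {2501, 5000}, {5001, 7500}, {7501, 10000} };
                for (int i = 0; i < ranges.length; i++) {
                    int from = ranges[i][0];
                    int to = Math.min(ranges[i][1], lines.size());
                    try (PrintWriter out = new PrintWriter("part" + (i + 1) + ".txt")) {
                        // line numbers in the question are 1-based, list indices are 0-based
                        for (int n = from; n <= to; n++) {
                            out.println(lines.get(n - 1));
                        }
                    }
                }
            }
        }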

    Read the article

  • php not well formed?

    - by Idealflip
    Hi everyone, when my site loads it stops halfway because of a specific piece of PHP code. When I comment out that PHP code, the whole page loads properly (input boxes, buttons, etc.). This is the code that causes the issue:

        <?php
        // if the add location button is clicked, add the location; the whole form will not be submitted
        if ($_REQUEST['command'] == 'Add') {
            if ($_POST['companyLocation'] == "") {
                $errmsg = "Please enter a location1";
            } elseif ($_POST['companySize'] == "") {
                $errmsg = "Please enter a size for the location";
            } else {
                $location = Location::instance();
                $testing = $location->doesExist($_POST['companyLocation']);
                if ($testing == true) {
                    $errmsg = "Location already exists";
                } else {
                    $objLocation = new Obj_Location();
                    $objLocation->set_Name($_POST['companyLocation']);
                    $objLocation->set_Size($_POST['companySize']);
                    $location->addLocation($objLocation);
                    $location->saveInstance();
                }
            }
        }

        // this is the part that breaks! when I comment it out, the page loads properly.
        $location = Location::instance();
        $location->deleteItem($_GET["item"]);
        $location->saveInstance();
        $location->listItems();
        ?>

    Read the article

  • How can I determine whether an XML serializer can serialize a file?

    - by ldsenow
    Hi, how can I determine whether an XML serializer can serialize a file that is in a shared location and monitored by a file monitor? The file monitor has a list of parsers which are used to parse files in the shared folder. Once a file gets dropped into the folder, the file monitor asks the registered parsers whether any of them can handle the file; if one can, the monitor moves the file out of the shared folder and assigns the task to that parser. Since some of the files are quite big, I need each parser's check to be quick. How can I determine that my XML parser can handle the file without loading the whole file into memory?
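    One common way to keep the check cheap is to read only as far as the document's root element and decide from its name, rather than deserializing the whole file. A sketch of that idea using Java StAX (the asker's serializer may be on a different stack, and the expected root name here is just an example):

        import java.io.FileInputStream;
        import java.io.InputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class RootElementProbe {
            // Returns true if the file's root element matches the name this parser expects
            static boolean canHandle(String path, String expectedRoot) {
                try (InputStream in = new FileInputStream(path)) {
                    XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(in);
                    try {
                        while (r.hasNext()) {
                            if (r.next() == XMLStreamConstants.START_ELEMENT) {
                                // The first start element is the document root; stop here
                                return expectedRoot.equals(r.getLocalName());
                            }
                        }
                        return false;
                    } finally {
                        r.close();
                    }
                } catch (Exception e) {
                    return false; // not well-formed XML or unreadable: this parser can't handle it
                }
            }

            public static void main(String[] args) {
                System.out.println(canHandle("dropped-file.xml", "Invoice"));
            }
        }

    Because the stream is abandoned after the first start tag, the cost of the check stays roughly constant regardless of file size.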

    Read the article

  • Is there a more efficient way to do this?

    - by garethdn
    I'm hoping there is a better way to do the following. I'm creating a jigsaw-type application and this is the code I'm currently using:

        -(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            // location of current touch
            CGPoint location = [touch locationInView:self.view];
            if ([touch view] == img1) {
                [self animateFirstTouch:img1 withLocation:location];
            } else if ([touch view] == img2) {
                [self animateFirstTouch:img2 withLocation:location];
            } else if ([touch view] == img3) {
                [self animateFirstTouch:img3 withLocation:location];
            } else if ([touch view] == img4) {
                [self animateFirstTouch:img4 withLocation:location];
            } else if {
                ......
                ......
            } else if ([touch view] == img40) {
                [self animateFirstTouch:img40 withLocation:location];
                return;
            }
        }

    I'm hoping there is a better, more efficient way to do this than naming every image; I'm thinking something like: if the touch's view is a UIImageView, then perform the task. The same goes for touchesEnded:

        -(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            // location of current touch
            CGPoint location = [touch locationInView:self.view];
            if ([touch view] == image1) {
                [self animateReleaseTouch:image1 withLocation:location];
            } else if ([touch view] == image2) {
                [self animateReleaseTouch:image2 withLocation:location];
            } else if ([touch view] == image3) {
                [self animateReleaseTouch:image3 withLocation:location];
            } else if ([touch view] == image4) {
                [self animateReleaseTouch:image4 withLocation:location];
            } else if {
                ......
                ......
            } else if ([touch view] == image40) {
                [self animateReleaseTouch:image40 withLocation:location];
            }
            return;
        }

    Any help please?

    Read the article

  • GNU Makefile: multiple outputs from single rule + preventing intermediate files from being deleted

    - by makesaurus
    This is sort of a continuation of an earlier question. The problem is that there is a rule generating multiple outputs from a single input, and the command is time-consuming, so we would prefer to avoid recomputation. Now there is an additional twist: we want to keep files from being deleted as intermediate files, and the rules involve wildcards to allow for parameters. The solution suggested was to set up the following rules:

        file-a.out: program file.in
            ./program file.in file-a.out file-b.out file-c.out
        file-b.out: file-a.out
            @
        file-c.out: file-b.out
            @

    Then calling make file-c.out creates all of them, and we avoid issues with running make in parallel with the -j switch. All fine so far. The problem is the following: because the above solution sets up a chain in the DAG, make treats it differently; file-a.out and file-b.out are considered intermediate files, and by default they are deleted as unnecessary as soon as file-c.out is ready. A way of avoiding that was mentioned somewhere here, and consists of adding file-a.out and file-b.out as dependencies of a target .SECONDARY, which keeps them from being deleted. Unfortunately, this does not solve my case because my rules use wildcard patterns; specifically, my rules look more like this:

        file-a-%.out: program file.in
            ./program $* file.in file-a-$*.out file-b-$*.out file-c-$*.out
        file-b-%.out: file-a-%.out
            @
        file-c-%.out: file-b-%.out
            @

    so one can pass a parameter that gets included in the file name, for example by running make file-c-12.out. The solution the make documentation suggests is to add these as implicit rules to the list of dependencies of .PRECIOUS, thus keeping the files from being deleted. The solution with .PRECIOUS works, but it also prevents these files from being deleted when a rule fails and the files are incomplete. Is there any other way to make this work?

    Read the article

  • Resource File Links

    - by EzaBlade
    When you create a Silverlight Business Application you get a Silverlight application and a Web application. In the Web/Resources folder of the Silverlight app there are links to the files in the Resources folder of the Web app. These links behave exactly like the files they link to, in that they are hierarchical, with the Resource.Designer.cs file shown as a code-behind file for the Resource.resx file. When I try to link to a resource file in this way, I only get the .resx file unless I link to the .Designer.cs file separately; however, in that case the Designer.cs file is shown as a standard code file and is not related to the .resx file. Does anyone know how to do this linking correctly?

    Read the article

  • Is modifying a file without writing a new file possible in C++?

    - by Phenom
    Let's say I have a text file that is 100 lines long and I want to change only what is on the 50th line. One way to do it is to open the file for input and open a new file for output: use a loop to read the first half of the file line by line and write it to the second file, then write the line I want to change, then write out the second half with another loop, and finally rename the new file to overwrite the original. Is there another way besides this - a way to modify the contents in the middle of a file without touching the rest of the file and without writing everything out again? If there is, what's the code to do it?
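    The underlying capability is random access: you can seek to a byte offset and overwrite bytes in place, but only when the replacement has exactly the same length, since bytes cannot be inserted into the middle of a file. A sketch of the idea, written in Java with RandomAccessFile (in C++ the analogous tool would be a read/write std::fstream with seekp); the file name and replacement text are made up:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class OverwriteInPlace {
            public static void main(String[] args) throws IOException {
                try (RandomAccessFile raf = new RandomAccessFile("data.txt", "rw")) {
                    // Skip the first 49 lines to find where line 50 starts
                    for (int i = 0; i < 49; i++) {
                        if (raf.readLine() == null) return; // file shorter than expected
                    }
                    long offset = raf.getFilePointer();
                    String oldLine = raf.readLine();

                    // Overwriting works only if the new text is the same length as the old
                    String newLine = "REPLACED!";
                    if (oldLine != null && newLine.length() == oldLine.length()) {
                        raf.seek(offset);
                        raf.writeBytes(newLine);
                    } else {
                        System.out.println("Lengths differ: rewriting the file is unavoidable");
                    }
                }
            }
        }

    If the new line is shorter or longer than the old one, there is no way around shifting everything after it, which is effectively the rewrite-the-file approach from the question.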

    Read the article

  • problem using fprintf

    - by shiran bar
    I'm trying to print to a text file numerous variables yet it doesn't work. I checked and verified that i write it in the correct syntax. I also checked the return value and it's positive therefore i know it did write to the file, however when i open the file it's empty. I would be happy for some help. This is the code: I initiate DynsaleDayPtr in the main: FILE* DynsaleDayPtr = CreateTextFiles("sale_day.txt"); Create function: FILE* CreateTextFiles (char* fileName) { FILE* saleFilePtr=NULL; if((saleFilePtr=fopen(fileName,"a+"))==NULL) printf("File couldn't be opened\n"); return saleFilePtr; } The call to the function TextAddSale is done from a function that is called in the main: TextAddSale(DynSaleDayPtr,dynNumOfRecords); Bool TextAddSale (FILE* DynsaleDayPtr, int* dynNumOfRecords) { char id[6]; char name [50]; char priceChar[20]; char* tmp = NULL; int price=-1; DynamicRecord * newRec=NULL; scanf("%s%s%s",id,name,priceChar); newRec = (DynamicRecord *)malloc(sizeof(DynamicRecord)); if (newRec == NULL) return False; tmp = (char*)malloc(strlen(name)+1); if (tmp == NULL) { free (newRec); return False; } strcpy(tmp,name); newRec->productName = tmp; strcpy(newRec->productId, id); newRec->productPrice=atoi (priceChar); if (fprintf(DynsaleDayPtr,"%d %s %s %d", strlen(newRec->productName), newRec->productId, newRec->productName, newRec->productPrice)>0) { *dynNumOfRecords=(*dynNumOfRecords)+1; return True; } } thanks!

    Read the article

  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: i've posted this same question on stackoverflow, before realising it's probabaly better suited to superuser, but it kind of depends on the answer: If it turns out to be a problem in my nginx config, it's definitely superuser. If it turns out to be a problem in my Rails config (or code) then it's arguably stackoverflow. I'm adding some https pages to my rails site. In order to test it locally, i'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where i can actually go to the https pages, and they load. Except, the javascript and css files all fail to load: looking in the Network tab in chrome web tools, i can see that it is trying to load them via an https url. Eg, one of the non-working file urls is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216 I have these set up (or at least think i do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files. If i click on this in the Network tab, it takes me to the above url, which redirects to the http version. So, the redirect seems to be working in some sense, but not when they're loaded by an https page. Like i say, i thought i had this covered in the second try_files directive in my config below, but maybe not. Can anyone see what i'm doing wrong? thanks, Max Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block: server { listen 443 ssl; keepalive_timeout 70; ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt; ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; # ensure that we serve css, js, other statics when requested # as SSL, but if the files don't exist (i.e. 
any non /basket controller) # then redirect to the non-https version location / { try_files $uri @non-ssl-redirect; } # securely serve everything under /basket (/basket/checkout etc) # we need general too, because of the email/username checking location ~ ^/(basket|general|cmw/account/check_username_availability) { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running # other rewrite tests try_files $uri @rails-ssl; expires 1h; } location @non-ssl-redirect { return 301 http://$host$request_uri; } location @rails-ssl { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } } #upstream elrs { # server 127.0.0.1:3000; #} server { listen 80; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; access_log /home/max/work/charanga/elearn_container/elearn/log/access.log; error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug; client_max_body_size 50M; index index.html index.htm; # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6 (i.e. those *without* SV1 in their user-agent string) gzip on; gzip_http_version 1.1; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html # make sure gzip does not lose large gzipped js or css files # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl gzip_buffers 16 8k; # Disable gzip for certain browsers. #gzip_disable "MSIE [1-6].(?!.*SV1)"; gzip_disable "MSIE [1-6]"; # blank gif like it's 1995 location = /images/blank.gif { empty_gif; } # don't serve files beginning with dots location ~ /\. 
{ access_log off; log_not_found off; deny all; } # we don't care if these are missing location = /robots.txt { log_not_found off; } location = /favicon.ico { log_not_found off; } location ~ affiliate.xml { log_not_found off; } location ~ copyright.xml { log_not_found off; } # convert urls with multiple slashes to a single / if ($request ~ /+ ) { rewrite ^(/)+(.*) /$2 break; } # X-Accel-Redirect # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead location /zips { internal; root /var/www/apps/e_learning_resource/shared/assets; } location /tmp { internal; root /; } location /mnt{ root /; } # resource library thumbnails should be served as usual location ~ ^/resource_library/.*/*thumbnail.jpg$ { if (!-f $request_filename) { rewrite ^(.*)$ /images/no-thumb.png break; } expires 1m; } # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here location ~ "lesson viewer.dcr" { rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break; } # we need this rule so we don't serve the older lessonviewer when the rule below is matched location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf { rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break; } location ~ v6lessonViewer.swf { rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break; } location ~ lessonViewer.swf { rewrite ^(.*)$ /assets/players/lessonViewer.swf break; } location ~ lgn111.dat { empty_gif; } # try to get autocomplete school names from memcache first, then # fallback to rails when we can't location /schools/autocomplete { set $memcached_key $uri?q=$arg_q; memcached_pass 127.0.0.1:11211; default_type text/html; error_page 404 =200 @rails; # 404 not really! Hand off to rails } location / { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running other rewrite tests try_files $uri @rails; expires 1h; } location @rails { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } }

    Read the article

  • "ERROR:Could not find java.nio.file.Paths" when using Oracle JDK 1.7

    - by Ankit
    I want to try out some features rolled out in Oracle's new JDK 1.7. I followed the post:- Oracle JDK 1.7 but the post doesn't seem to help. I was trying to fetch out the structure for java.nio.file.Paths class file but got the following error:- buffer@ankit:~$ javap java.nio.file.Paths ERROR:Could not find java.nio.file.Paths However i can easily get the information about class structures till JAVA SE 1.6, here is an example:- buffer@ankit:~$ javap java.lang.Object Compiled from "Object.java" public class java.lang.Object{ public java.lang.Object(); public final native java.lang.Class getClass(); public native int hashCode(); public boolean equals(java.lang.Object); protected native java.lang.Object clone() throws java.lang.CloneNotSupportedException; public java.lang.String toString(); public final native void notify(); public final native void notifyAll(); public final native void wait(long) throws java.lang.InterruptedException; public final void wait(long, int) throws java.lang.InterruptedException; public final void wait() throws java.lang.InterruptedException; protected void finalize() throws java.lang.Throwable; static {}; } Running java -version gives the following result:- buffer@ankit:~$ java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode) SYSTEM INFORMATION buffer@ankit:~$ sudo update-alternatives --config java [sudo] password for buffer: There are 4 choices for the alternative java (providing /usr/bin/java). Selection Path Priority Status ------------------------------------------------------------ 0 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java 1061 auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java 1061 manual mode 2 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java 1051 manual mode 3 /usr/lib/jvm/jdk1.7.0_09/ 1 manual mode * 4 /usr/lib/jvm/jdk1.7.0_09/bin/java 1 manual mode buffer@ankit:~$ sudo update-alternatives --config javac There are 2 choices for the alternative javac (providing /usr/bin/javac). Selection Path Priority Status ------------------------------------------------------------ 0 /usr/lib/jvm/java-6-openjdk-amd64/bin/javac 1061 auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/bin/javac 1061 manual mode * 2 /usr/lib/jvm/jdk1.7.0_09/bin/javac 1 manual mode buffer@ankit:~$ sudo update-alternatives --config javaws There are 3 choices for the alternative javaws (providing /usr/bin/javaws). 
Selection Path Priority Status ------------------------------------------------------------ 0 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws 1061 auto mode 1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws 1061 manual mode 2 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws 1060 manual mode * 3 /usr/lib/jvm/jdk1.7.0_09/bin/javaws 1 manual mode The directory structure of /usr/lib/jvm/ is as follows:- buffer@ankit:~$ ls -l /usr/lib/jvm/ total 24 lrwxrwxrwx 1 root root 24 Dec 2 2011 default-java -> java-1.6.0-openjdk-amd64 drwxr-xr-x 4 root root 4096 Nov 8 16:24 java-1.5.0-gcj-4.6 lrwxrwxrwx 1 root root 24 Dec 2 2011 java-1.6.0-openjdk -> java-1.6.0-openjdk-amd64 lrwxrwxrwx 1 root root 20 Oct 25 00:01 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64 lrwxrwxrwx 1 root root 20 Oct 25 06:59 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64 lrwxrwxrwx 1 root root 24 Dec 2 2011 java-6-openjdk -> java-1.6.0-openjdk-amd64 drwxr-xr-x 7 root root 4096 Nov 8 16:24 java-6-openjdk-amd64 drwxr-xr-x 3 root root 4096 Nov 8 16:24 java-6-openjdk-common drwxr-xr-x 5 root root 4096 Nov 8 05:48 java-7-openjdk-amd64 drwxr-xr-x 3 root root 4096 Nov 8 05:48 java-7-openjdk-common drwxr-xr-x 8 buffer buffer 4096 Sep 25 09:08 jdk1.7.0_09 Any help would be highly appreciated.
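    Note that the update-alternatives output above covers java, javac and javaws but not javap, so /usr/bin/javap may still resolve to the OpenJDK 6 copy, which knows nothing about java.nio.file.Paths. A quick way to confirm that the configured javac and java really are 1.7 is to compile and run a class that only exists since Java 7 (the file name and path below are arbitrary):

        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class Jdk7Check {
            public static void main(String[] args) {
                // Paths/Path were introduced in Java 7, so this only compiles and runs on 1.7+
                Path p = Paths.get("/tmp", "example.txt");
                System.out.println("java.nio.file.Paths is available: " + p.toAbsolutePath());
                System.out.println("java.version = " + System.getProperty("java.version"));
            }
        }

    If this compiles and runs, the JDK 7 class library is in place and only the javap binary needs to be pointed at the new JDK, for example by registering it with the alternatives system or invoking it via its full path under /usr/lib/jvm/jdk1.7.0_09/bin.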

    Read the article

  • How can I filter images and use FileSystemView icons in my JTree?

    - by HoLeX
    First of all sorry about my english. So, i have some problems with my JTree because i want to filter specific types of images and also i want to use icons of FileSystemView class. Can you help me? I will appreciate so much. Here is my code: import java.awt.BorderLayout; import java.io.File; import java.util.Iterator; import java.util.Vector; import javax.swing.JPanel; import javax.swing.JTree; import javax.swing.event.TreeModelEvent; import javax.swing.event.TreeModelListener; import javax.swing.event.TreeSelectionEvent; import javax.swing.event.TreeSelectionListener; import javax.swing.tree.TreeModel; import javax.swing.tree.TreePath; public class ArbolDirectorio extends JPanel { private JTree fileTree; private FileSystemModel fileSystemModel; public ArbolDirectorio(String directory) { this.setLayout(new BorderLayout()); this.fileSystemModel = new FileSystemModel(new File(directory)); this.fileTree = new JTree(fileSystemModel); this.fileTree.setEditable(true); this.fileTree.addTreeSelectionListener(new TreeSelectionListener() { @Override public void valueChanged(TreeSelectionEvent event) { File file = (File) fileTree.getLastSelectedPathComponent(); System.out.println(getFileDetails(file)); } }); this.add(fileTree, BorderLayout.CENTER); } private String getFileDetails(File file) { if (file == null) { return ""; } StringBuffer buffer = new StringBuffer(); buffer.append("Name: " + file.getName() + "\n"); buffer.append("Path: " + file.getPath() + "\n"); return buffer.toString(); } } class FileSystemModel implements TreeModel { private File root; private Vector listeners = new Vector(); public FileSystemModel(File rootDirectory) { root = rootDirectory; } @Override public Object getRoot() { return root; } @Override public Object getChild(Object parent, int index) { File directory = (File) parent; String[] children = directory.list(); return new TreeFile(directory, children[index]); } @Override public int getChildCount(Object parent) { File file = (File) parent; if (file.isDirectory()) { String[] fileList = file.list(); if (fileList != null) { return file.list().length; } } return 0; } @Override public boolean isLeaf(Object node) { File file = (File) node; return file.isFile(); } @Override public int getIndexOfChild(Object parent, Object child) { File directory = (File) parent; File file = (File) child; String[] children = directory.list(); for (int i = 0; i < children.length; i++) { if (file.getName().equals(children[i])) { return i; } } return -1; } @Override public void valueForPathChanged(TreePath path, Object value) { File oldFile = (File) path.getLastPathComponent(); String fileParentPath = oldFile.getParent(); String newFileName = (String) value; File targetFile = new File(fileParentPath, newFileName); oldFile.renameTo(targetFile); File parent = new File(fileParentPath); int[] changedChildrenIndices = { getIndexOfChild(parent, targetFile) }; Object[] changedChildren = { targetFile }; fireTreeNodesChanged(path.getParentPath(), changedChildrenIndices, changedChildren); } private void fireTreeNodesChanged(TreePath parentPath, int[] indices, Object[] children) { TreeModelEvent event = new TreeModelEvent(this, parentPath, indices, children); Iterator iterator = listeners.iterator(); TreeModelListener listener = null; while (iterator.hasNext()) { listener = (TreeModelListener) iterator.next(); listener.treeNodesChanged(event); } } @Override public void addTreeModelListener(TreeModelListener listener) { listeners.add(listener); } @Override public void 
removeTreeModelListener(TreeModelListener listener) { listeners.remove(listener); } private class TreeFile extends File { public TreeFile(File parent, String child) { super(parent, child); } @Override public String toString() { return getName(); } } }

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL is stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format. PE files .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable, had to execute native code to start the CLR to read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It still is part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other datastructures in the file, which I will generally be glossing over in this initial post. Structure of a PE file Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset. PE headers As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly. I've highlighted the relevant bytes that I will refer to in this post: Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram. The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80). 
Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on. The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable, followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll. Next up is the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields comes the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is is the highlighted Subsystem field, and specifies if this is a GUI or console app - 0x20 for a GUI app, 0x30 for a console app. Data directories & section headers After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address table are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories comes the section headers; one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections: .text: base RVA 0x2000, file offset 0x200 .rsrc: base RVA 0x4000, file offset 0xa00 .reloc: base RVA 0x6000, file offset 0x1000 The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page in the right-click file properties page, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub. What about the CLR? As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (eg. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly. Conclusion To summarize, the important data in the first 512 bytes of a file is: DOS header. This contains a pointer to the PE signature. DOS stub, which we'll be looking at in a later post. PE signature PE file header (aka COFF header). This specifies whether the file is an exe or a dll. PE standard fields. This specifies whether the file is AnyCPU/32bit or 64bit. PE NT-specific fields. This specifies what type of app this is, if it is an app. Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section. Section headers. These are used to map between RVA and file offset. 
The important one is .text, which is where all the CLR data is stored. In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
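    As a concrete illustration of the offsets described above, here is a rough sketch that walks them by hand: the PE-signature pointer stored at 0x3C in the DOS header, the dll bit (0x2000) in the last two bytes of the COFF header, and the 0x10b/0x20b magic in the PE standard fields. The sketch happens to be in Java; any language with random-access, little-endian file reads would do:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class PeSniffer {
            // Reads a little-endian unsigned value of the given width from the current position
            private static long readLE(RandomAccessFile f, int bytes) throws IOException {
                long value = 0;
                for (int i = 0; i < bytes; i++) {
                    value |= (long) (f.read() & 0xFF) << (8 * i);
                }
                return value;
            }

            public static void main(String[] args) throws IOException {
                try (RandomAccessFile f = new RandomAccessFile(args[0], "r")) {
                    // DOS header: 'MZ' magic, then the PE signature offset at 0x3C
                    if (readLE(f, 2) != 0x5A4D) throw new IOException("Not an MZ executable");
                    f.seek(0x3C);
                    long peOffset = readLE(f, 4);

                    // PE signature 'PE\0\0'
                    f.seek(peOffset);
                    if (readLE(f, 4) != 0x00004550) throw new IOException("PE signature not found");

                    // COFF header: Characteristics is the last 2 bytes of the 20-byte header
                    f.seek(peOffset + 4 + 18);
                    long characteristics = readLE(f, 2);
                    boolean isDll = (characteristics & 0x2000) != 0;

                    // PE standard fields: magic 0x10b = PE32 (x86/AnyCPU), 0x20b = PE32+ (x64)
                    f.seek(peOffset + 4 + 20);
                    long magic = readLE(f, 2);

                    System.out.println(isDll ? "dll" : "exe");
                    System.out.println(magic == 0x20B ? "PE32+ (x64)" : "PE32 (x86/AnyCPU)");
                }
            }
        }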

    Read the article
