Search Results

Search found 37883 results on 1516 pages for 'sparse files'.


  • Apache htaccess results in files being downloaded instead of displayed

    - by chrissik
    So I had this "beautiful" website that did exactly what I wanted it to do. Then I shut down my PC, rebooted, and... the pages just download now instead of being displayed. I re-installed XAMPP, launched Apache again, and was able to identify the .htaccess file as the cause of the problem:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{QUERY_STRING} !^desktop
        RewriteCond %{HTTP_USER_AGENT} "android|blackberry|googlebot-mobile|iemobile|iphone|ipod|#opera mobile|palmos|webos" [NC]
        RewriteRule ^/?$ /mobile/index [L,R=302]
        RewriteRule ^/?$ /de/index [R]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(.*)$ $1.html

    Here is the problem, I guess:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(.*)$ $1.html

    This should make it possible to use /de/index instead of /de/index.html - but somehow it causes the page to download if I open localhost/de/index (with localhost/de/index.html it works fine...). I'm using HTML sites with SSI elements on an Apache web server. The only other file that differs from the out-of-the-box ones is httpd.conf, where I enabled SSI:

        AddType text/html .shtml
        AddHandler server-parsed .shtml
        AddHandler server-parsed .html
        AddHandler server-parsed .htm
        Options Indexes FollowSymLinks Includes
        AddOutputFilter INCLUDES .shtml
        Options +Includes

    So I hope there is somebody among you who can help me with this annoying problem, as I'm quite desperate. For some reason, even without the problematic lines, Chrome keeps downloading the files (even if I delete the .htaccess file), while IE and Opera display the pages. Edit: Now Opera also wants to download files (whether index.html or index is requested).

    Read the article

  • How do I load tmx files with Slick2d?

    - by mbreen
    I just started using Slick2D and learned how simple it is to load in a tilemap and display it. I tried at least a dozen different tmx files from numerous examples to see if it was the actual file that was corrupted. Every time I get this error:

        Exception in thread "main" java.lang.RuntimeException: Resource not found: data/maps/desert.tmx
            at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
            at org.newdawn.slick.tiled.TiledMap.<init>(TiledMap.java:101)
            at game.Game.init(Game.java:17)
            at game.Tunneler.initStatesList(Tunneler.java:37)
            at org.newdawn.slick.state.StateBasedGame.init(StateBasedGame.java:164)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:390)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at game.Tunneler.main(Tunneler.java:29)

    Here is my Game class:

        package game;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;
        import org.newdawn.slick.tiled.TiledMap;

        public class Game extends BasicGameState {
            private int stateID = -1;
            private TiledMap map = null;

            public Game(int stateID) {
                this.stateID = stateID;
            }

            public void init(GameContainer container, StateBasedGame game) throws SlickException {
                map = new TiledMap("data/maps/desert.tmx", "maps"); // ERROR
            }

            public void render(GameContainer container, StateBasedGame game, Graphics g) throws SlickException {
                //map.render(0,0);
            }

            public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException {
            }

            public int getID() { return stateID; }
        }

    I've tried to see if anyone else has had similar problems but haven't turned up anything. I am able to load other files, so I don't believe it's a compiler issue. My menu class can load images and display them just fine. Also, the file path is correct. Please let me know if you have any pointers that might help me sort this out.

    Read the article

  • pdflatex reads .eps files saved in OS/X, but not in Ubuntu

    - by David B Borenstein
    Sorry if this is a stupid question; I'm a newbie. I am preparing a manuscript in LaTeX. The journal (Physical Biology, an IOP publication) requires that figures be saved in .eps format, so I am trying to do that. However, I cannot get my LaTeX file to build when I have generated the .eps files on my Ubuntu computer. If I save the images on my Mac, the file builds just fine. So far, I have tried saving images in ImageJ, FIJI and Inkscape. The same problem occurs in all three. When using Kile, I get the following error:

        /usr/share/texmf-texlive/tex/latex/oberdiek/epstopdf-base.sty:0: Shell escape feature is not enabled.

    In TeXworks, the error is different, but still there:

        Package pdftex.def Error: File `./figures4/figure4a-eps-converted-to.pdf' not found.

    Now, if I fire up Inkscape, FIJI or ImageJ on OS X, everything works fine. The Mac also can't build with the Ubuntu-saved images. The images generated on the Ubuntu machine open fine using Document Viewer. I am building the same LaTeX file on both computers, with the exact same results. The header of my LaTeX file is:

        \documentclass[12pt]{iopart}
        \usepackage{graphicx}
        \usepackage{epstopdf}
        \usepackage{parskip}
        \usepackage{color}
        \usepackage{iopams}

    And then the code for the figure is:

        \begin{figure}
        \center{\includegraphics[width=4in]{./figures4/figure4a.eps}}
        \footnotesize{\caption{\label{fig:4a} (4a) lorem ipsum dolor sic amet.}}
        \end{figure}

    I'd be happy to send an example of both .eps files. Again, sorry if this is a dumb question. I tried everything I could think of before posting here. Thanks, David

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it was handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open-source preferred, since the final product is to be FOSS, too.
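    For a rough sense of the write load those figures imply, here is a quick back-of-the-envelope calculation. It is only an illustration of the numbers quoted above, and it assumes the peak rate is sustained for the full minute and that the ~10 updates are spread over each transaction's lifetime rather than arriving at the peak:

        # Scale implied by the numbers above (assumption: peak rate sustained for a full minute)
        orders_per_minute = 150_000
        new_files_per_second = orders_per_minute / 60      # ~2,500 brand-new XML files per second at peak
        updates_per_lifecycle = 10                         # each file is rewritten about 10 times overall
        versions_per_order = 1 + updates_per_lifecycle     # initial version plus its updates
        print(new_files_per_second, versions_per_order)    # 2500.0 11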

    Read the article

  • Search inside Xournal files (.xoj)

    - by Javad Sadeqzadeh
    I'm a big fan of Evernote; I use it regularly. However, it has a 60MB storage limit (although text files are not going to occupy much space, the limitation concern still remains). Today, I installed Xournal, which has great features like annotating, nice backgrounds, freehand shapes and notes, saving in PDF format, and many more. But the big problem is that, as far as I've noticed, there is no built-in feature for searching inside the notes (created using Xournal with the .xoj suffix). I used the Catfish File Search application (which creates bash commands for full-text search), but it couldn't help either. Is there any way to search inside a .xoj file at all? If so, Xournal could be a suitable alternative to Evernote, if you put your .xoj files on a cloud (which certainly offers you much more storage space than 60MB). If not, is there any other convenient app similar to Evernote, but with a higher storage limit or without a limit? Somebody suggested the Zim desktop wiki app, which looks great, but I'm not sure if I could copy and paste everything there (a mixture of photos and tables and text with various formats and highlights), like what I do with Evernote. And a very useful tool I use is Evernote Web Clipper (browser extension). Of course, having a desktop client like Everpad is a plus, but not an absolute need. PS: I use Pocket, so please don't suggest that (it only preserves links (which might change over time), not the actual text). I also use Google Drive or Docs; I don't like those for this purpose either - too slow, no browser extension and no desktop client. Thank you so much in advance.
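    Not a built-in Xournal feature, but a minimal sketch of the idea, assuming .xoj files are gzip-compressed XML (which is how Xournal saves them) and that typed notes appear as plain text inside that XML:

        #!/usr/bin/env python3
        """Search for a string inside Xournal .xoj files (gzip-compressed XML)."""
        import gzip
        import sys
        from pathlib import Path

        def search_xoj(folder, needle):
            """Print every .xoj file under `folder` whose XML contains `needle` (case-insensitive)."""
            for path in Path(folder).rglob("*.xoj"):
                try:
                    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
                        content = f.read()   # .xoj is gzipped XML; typed text notes are stored as plain text in it
                except OSError:
                    continue                 # not gzip-compressed or unreadable; skip it
                if needle.lower() in content.lower():
                    print(path)

        if __name__ == "__main__":
            search_xoj(sys.argv[1], sys.argv[2])

    Hand-drawn strokes have no text to match, so something like this can only find typed text notes.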

    Read the article

  • Removing specific part of filename (what's after the second dash) for all files in a folder

    - by Bodo
    I use the command line utility youtube-dl to download videos from YouTube and make mp3s from them with avconv. I'm doing this under Ubuntu 14.04 and am very happy with it. The utility downloads the files and saves them with the following name scheme:

        TITLE(artist-track)-ID.mp3

    So an actual filename looks like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3

    Some other file names in the folder look like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3
        Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3
        Stromae - Papaoutai-oiKj0Z_Xnjc.mp3

    At first, this was no problem. It didn't bother me while listening to my music in Rhythmbox. But when moving to a phone or other devices it is pretty confusing to see such a long name, and some players, like the Samsung ones, treat that last part of the name (the ID after the second dash) as the album or something. I'd like to create a bash script that removes what's after the second dash in the name for all files, turning this:

        Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3

    into this:

        Martin Garrix - Animals (Official Video).mp3

    Is it also possible to instruct youtube-dl to exclude the ID from now on? I am currently downloading with the command:

        youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 URL
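    A minimal sketch of the renaming step (in Python rather than bash, and assuming the trailing ID is always an 11-character YouTube ID, so it is safer to strip that suffix than to count dashes - titles like "Martin Garrix - Animals" contain a dash of their own):

        #!/usr/bin/env python3
        """Strip the trailing YouTube ID from files named 'Title-<11-char id>.mp3'."""
        import re
        from pathlib import Path

        # An 11-character ID (letters, digits, '_' or '-') immediately before the .mp3 extension
        ID_SUFFIX = re.compile(r"-[A-Za-z0-9_-]{11}(?=\.mp3$)")

        def strip_ids(folder):
            for path in Path(folder).glob("*.mp3"):
                new_name = ID_SUFFIX.sub("", path.name)
                # Only rename when something changed and the target name is free
                if new_name != path.name and not (path.parent / new_name).exists():
                    path.rename(path.parent / new_name)

        if __name__ == "__main__":
            strip_ids(".")

    For the second question, youtube-dl's output template can name files without the ID in the first place, e.g. adding -o '%(title)s.%(ext)s' to the command above.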

    Read the article

  • Lost files after installing Ubuntu

    - by Joshua Rosato
    I installed Ubuntu on my laptop over Windows; I had 2 partitions on one hard disk. It seems like my second partition is gone with all my files. How can I recover the old files? They weren't on the same partition as Windows. I read that the partition has probably just not been mounted, so I ran sudo fdisk -l to find all the partitions and then ran sudo mount; however, I can't tell from the results of sudo mount what is not mounted, and I am also unsure how to mount it once I find the unmounted partition.

    sudo fdisk -l - Results

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002c6dc

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   486322175   243160064   83  Linux
        /dev/sda2       486324222   488396799     1036289    5  Extended
        /dev/sda5       486324224   488396799     1036288   82  Linux swap / Solaris

    sudo mount - Results

        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/cgroup type tmpfs (rw)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
        none on /sys/fs/pstore type pstore (rw)
        systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
        gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=joshy1)

    Read the article

  • Web browser downloads only open target folders - cannot open files

    - by Pavlos G.
    After installing the xubuntu packages in order to check out Xfce, I reverted back to GNOME 2. During the first login, I noticed that Thunar was now selected as the default file manager. The Preferred Applications menu is also missing now, so I could not set Nautilus as the default. I removed all the xubuntu packages (including Thunar) and then, when I tried to open a folder, I was asked to select the default file manager - that's how I got Nautilus back. The next problem I'm now facing has to do with files downloaded from web browsers: the Open and Open containing folder options produce exactly the same result. If I double-click on a file, it'll just open the containing folder, instead of opening the file with its associated application (e.g. LibreOffice Writer for .doc, .odt; SMPlayer for .avi, .wmv, etc). The problem happens both in Firefox and Chrome. Through Nautilus, all files open correctly. Up until now I've tried the following:

    - Delete/recreate mimeTypes.rdf in my FF profile
    - Create a new profile in FF
    - Delete/recreate ~/.local/share/applications/mimeapps.list
    - Already checked this similar article

    None of them worked. Any ideas on the issue would be appreciated.

    Read the article

  • Keeping files that are often changed in sync between desktop and laptop

    - by N.N.
    I'm looking for a way to keep a desktop and a laptop in sync. What I want to keep in sync are some folders, mainly ~/Documents, that are changed often while I'm working on them. If it matters, I can connect to my desktop from anywhere via a URL, but my laptop is harder to access since it might be behind NAT and such. I have been looking at Ubuntu One, but it seems not to go well with working on documents written in LaTeX. If I work on a .tex file in the Ubuntu One directory and compile it (with pdflatex) every now and then (as often as every 10 seconds when working), it will create several new files, including a PDF, which are uploaded to Ubuntu One; this seems wasteful, since it causes continuous uploading while working on .tex files. I also usually keep .tex documents version controlled with git, and then every commit (which can also happen frequently) will cause an upload (through changes in ./.git), so again it happens continuously while working. Another example is editing images that are saved often. What I think would be best is for sync to happen every ten minutes or at the end of every working session (but there might be some other way to handle this?).
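    One common way to get the "sync at the end of a working session" behaviour is a plain rsync push to the always-reachable desktop. A minimal sketch, assuming SSH access to the desktop (the hostname and paths are placeholders) and that .git churn and LaTeX build artifacts should stay out of the transfer:

        #!/usr/bin/env python3
        """Push ~/Documents to the desktop over SSH with rsync (sketch; host and paths are placeholders)."""
        import subprocess

        def sync_documents():
            subprocess.run(
                [
                    "rsync", "-az",
                    "--exclude", ".git/",   # version-control metadata changes constantly
                    "--exclude", "*.aux",   # LaTeX build artifacts
                    "--exclude", "*.log",
                    "--exclude", "*.pdf",   # regenerated on every compile
                    "/home/me/Documents/",
                    "desktop.example.org:Documents/",
                ],
                check=True,
            )

        if __name__ == "__main__":
            sync_documents()

    Run by hand after a session, or from cron every ten minutes, it only transfers the files that actually changed.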

    Read the article

  • Any good reason open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across some platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line ending sequence. Applications would have to filter that noise themselves. Older Mac versions used only CR as the line ending in text mode, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing by itself instead of using text mode. Implementing newline interpretation in the parser might also remove some of the overhead of text mode, where buffers need to be rewritten (and possibly resized) before being returned to the application, which may be less efficient than doing the same work in the application itself. So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
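    The translation is easy to observe from any language that exposes both modes; a minimal sketch using Python's file modes (not the C standard library, but the same text/binary distinction):

        #!/usr/bin/env python3
        """Show what 'text mode' translation does to line endings, using Python's file modes."""
        import tempfile, os

        payload = b"line one\r\nline two\r\n"   # CRLF endings, as a Windows tool might write them

        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:          # binary mode write: bytes pass through untouched
            f.write(payload)

        with open(path, "rb") as f:             # binary mode read: the CR bytes are still there
            print(f.read())                     # b'line one\r\nline two\r\n'

        with open(path, "r") as f:              # text mode read: universal newlines fold CR LF into '\n'
            print(repr(f.read()))               # 'line one\nline two\n'

        os.remove(path)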

    Read the article

  • FTP gives me a error when uploading and deleting files [on hold]

    - by AR Games
    Here's the error I get when trying to delete files:

        Command: DELE index.html
        Response: 550 Delete operation failed.

    Here's the error I get when trying to upload files:

        Command: OPTS UTF8 ON
        Response: 200 Always in UTF8 mode.
        Status: Connected
        Status: Starting upload of C:\wamp\www\.DS_Store
        Command: CWD /var/www/html
        Response: 250 Directory successfully changed.
        Command: TYPE A
        Response: 200 Switching to ASCII mode.
        Command: PASV
        Response: 227 Entering Passive Mode (76,185,76,101,78,222).
        Command: STOR .DS_Store
        Response: 553 Could not create file.
        Error: Critical file transfer error
        Status: Retrieving directory listing...
        Command: TYPE I
        Response: 200 Switching to Binary mode.
        Command: PASV
        Response: 227 Entering Passive Mode (76,185,76,101,23,94).
        Command: LIST
        Response: 150 Here comes the directory listing.
        Response: 226 Directory send OK.
        Status: Directory listing successful
        Response: 421 Timeout.
        Error: Connection closed by server
        Status: Disconnected from server

    I'm running Windows and using the FileZilla FTP client.

    Read the article

  • Convert uploaded video files to mp4 using PHP [closed]

    - by Subin
    I created a PHP video uploading script. I need to convert these files to mp4 for an HTML5 video player, using PHP, while uploading. How can I do that? Here is the PHP code:

        <?php
        if(isset($_POST['submit'])){
            $user=$_COOKIE['VisitorName'];
            include('config.php');
            session_start();
            $session_id='1'; //$session id
            $path = "/home/simsu/subins/videos/data/videos/";
            $valid_formats = array("wmv", "ogv", "mp4", "3gp", "ogg");
            if(isset($_POST) and $_SERVER['REQUEST_METHOD'] == "POST") {
                $name = $_FILES['uploadedfile']['name'];
                $size = $_FILES['uploadedfile']['size'];
                if(strlen($name)) {
                    list($txt, $ext) = explode(".", $name);
                    if(in_array($ext,$valid_formats)) {
                        if($size<(100024*100024)) {
                            $actual_image_name = $path.time().".mp4";
                            $tmp = $_FILES['uploadedfile']['tmp_name'];
                            $upurl="http://vtube.subins.com/files/video?vid=".time();
                            $title=$_POST['vn'];
                            mysql_query("INSERT INTO videos(title,user,url,vid,ext) VALUES ('$title', '$user','$upurl',NOW(),'$ext')");
                            echo '<br><h1>'.$_FILES['uploadedfile']['name'] . " uploaded.</h1>";
                        }
                        else echo "<br><h1>Video file size max 100 MB";
                    }
                    else echo "<br><h1>Invalid file format..";
                }
                else echo "<br><h1>Please select a video..!";
                exit;
            }
        }
        ?>
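    The conversion itself is usually delegated to an external encoder such as ffmpeg once the upload has finished, rather than done in PHP itself. A minimal sketch of that idea, shown in Python to keep the examples in this listing consistent (a PHP script would invoke the same command via exec() or shell_exec()); it assumes ffmpeg is installed, and the paths are placeholders:

        #!/usr/bin/env python3
        """Convert an uploaded video to an H.264/AAC mp4 by shelling out to ffmpeg (sketch; paths are placeholders)."""
        import subprocess

        def convert_to_mp4(src, dst):
            # -y overwrites an existing output file; libx264 + aac is the usual HTML5-friendly combination
            subprocess.run(
                ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-c:a", "aac", dst],
                check=True,
            )

        if __name__ == "__main__":
            convert_to_mp4("/tmp/upload.wmv", "/tmp/upload.mp4")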

    Read the article

  • TraceTune supports uploading Zip files

    - by Bill Graziano
    I’ve been using the online version of ClearTrace more and more lately.  When I get to a new client it’s just much easier to upload a trace file rather than install ClearTrace. That means I’ve finally been adding more features to it.  The two latest features are around ease of use. You can now upload a ZIP file that contains a trace file.  Trace files are already somewhat compressed.  Putting it in a ZIP file further compresses it by a factor of 8X or 9X in my testing. That means you can start with a 100MB trace and end up with a 10Mb-12MB ZIP file to upload.  I’m consistently able to get over 150,000 events in a 100MB ZIP file.  That gives me a pretty good look at a system. The second part of this is that files are now processed asynchronously.  After you upload a file you’ll be taken to a processing page that updates every few seconds with the number of rows processed.  It generally takes under a minute to process a 100MB trace file but I *hated* staring at a blank screen. Give TraceTune a try.  It’s getting easier to use every day.

    Read the article

  • accessing files on a shared folder via IIS

    - by Darkcat Studios
    I'm not sure if this suits Stack Exchange, Server Fault or here... so I'll go with here for a start. I'm having issues setting up a network share to be accessed by IIS; all I need to do is read/write files on the other server. We have 2 servers set up (both 2008 R2 & IIS 7.5): one is the WEB server, which is externally accessible and NOT part of the domain. We also have an Intranet server which has no internet connectivity and is part of the domain. These 2 servers can talk to each other happily; I have the SQL Server on the WEB server shared across to the intranet server so that the web content is editable from the intranet. I can share a folder on the web server (say, wwwroot/Images/) and connect to it from the intranet server, even have it as a mapped drive (but I know that's not going to work for IIS to access it), so there seems not to be a connectivity issue. I can also set up a virtual folder in IIS on the Intranet server - this is where it gets annoying - I can't connect using pass-through authentication because there is no suitable user on the web server (which is not on the domain). If I set up a user on the web server, e.g. Intranet_USR, and give it appropriate rights to the folder, files and share, I can connect, but can only view folder contents in IIS, not read the files, although that user has read privileges!! Any help much appreciated!

    Read the article

  • Files in /home deleted

    - by long-time user....2006
    In the most specific, unemotional terms:

    - Reinstalled the OS, using 11.10 (1 month after release, to skip the initial issues that usually crop up).
    - Configured the system to my specifications (just ways of organizing config files, etc).
    - Logged out.
    - Logged back in after an hour or so... to find my home directory obliterated, with just a few skeleton files existing.
    - Thought "oh well, try again" (this has happened before with an install, for reasons I've never been able to pinpoint, usually around install time with some sort of update, but it's never been a major recurring issue).
    - The same thing happened.
    - I thought something was awry, so I reinstalled again (another 20 minutes, meh).
    - Set up the system, arranged the home directory a bit differently, thinking maybe I had trodden on something I shouldn't have.
    - Logged out, came back --- the same thing. Most of the directories I added were deleted (e.g. .xmonad, which links to xmonad.hs in my portable config directory).

    tl;dr: every change I make in my home directory gets deleted.

    The emotional part: UNACCEPTABLE. I need to be able to configure my system the way I want, not get punched in the face every time I make a change. I'll willingly fill in details as needed; this was just a start to see if anyone can help. I've found no trace of this issue in a search.

    Read the article

  • Re-installing Ubuntu without losing files, how to?

    - by moraleida
    Some time back I bought a second PC to serve as my backup machine, but I've never managed to set it up the way I would like. Now I want to start over, but I've messed so much with its disks that I'm kinda afraid to lose something on the way, thus this question. Right now, I have a 1TB disk partitioned like this (as per GParted):

    - /dev/sda1 (ext4) 346.12GB - Is almost full and has an old install of Ubuntu 11.10. It no longer boots, ever since I installed Windows 7 on sda3. Everything that matters to me is tucked into /var/www/; all the rest can just go.
    - /dev/sda2 (ext4) 196.45GB - Has an old install of 12.04 and nothing important; it's pretty much empty and also doesn't boot.
    - /dev/sda3 (ntfs) 377.97GB - Is my boot partition with Windows 7 and some important files; I'd like to keep it untouched.
    - /dev/sda4 (extended) 10.97GB - Was created when I first installed Ubuntu, I think.

    In my ideal world, I'd like to safely reinstall Ubuntu from the 12.04 live USB and merge sda1 and sda2 without losing any files. Is that possible? How?

    Read the article

  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx [This post is about an experiment and some imagination.]

    Since Windows XP (the earliest OS I have tried) there has been a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too, it was not removed), you just select the file, right-click > Send to, and you can send the file to many places. My menu shows me Skype because I have it installed; that confirms we can add our own app there to make it easier for the user to send a file into our app.

    Nowadays many people use the cloud or an online site to store files. With HTML5 drag and drop, you need to have the site open, on the page that handles file upload. You select everything, drag and drop, and after the drop the file is uploaded to the server and the site shows it in a list (if no error happens). But this kind of upload is seriously not worth it, because I have to open the site every time I do the operation.

    Through this post I want to describe a feature that could make this better. The API is simply called the SASS FILE UPLOAD API. With this API, when you browse a site and land on a file-upload page, the page can tell you that it also supports the SASS FILE API, and ask you to enable it for a better experience.

    How this works: the API is activated on two conditions. 1. The feature is disabled by default on a site (or you can change that if it isn't). 2. The API allows a specific site to upload files. Uploads may have rules, for example a minimum or maximum size for the uploaded file, or which formats the site allows you to upload. On a resume site you might only be allowed to use .doc (according to the site's code).

    How the browser recognizes that a site offers the SASS service: the HTML source of the site contains a meta tag similar to this:

        <meta name="sass-upload-api" path="/upload.json"/>

    Remember that upload.json is a file that defines the values of several settings:

        {
          "cookie_name": "ck_file",
          "maximum_allowed_perday": 24,
          "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
          "method": [
            { "get":    "file/get",    "routing": "/file/get/{fileName}" },
            { "post":   "file/post",   "routing": "/file/post/{fileName}" },
            { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
            { "put":    "file/put",    "routing": "/file/put/{fileName}" },
            { "all":    "file/all",    "routing": "/file/all/{fileName}" }
          ]
        }

    cookie_name is simply a cookie which should be stored in the browser and is defined in the JSON. We define cookie_name so it can easily be shared with the service in the Windows system. This cookie will be accessible to the service, so it is safe from a security standpoint; other cookies will not be shared.

    The cookie will be used to post, put and get from this location. The "all" location simply returns the whole list of files, as a treeview-like JSON showing the directories on the server.

    For example, take example.com: if you have activated the API with this site, you will see a "Send to" option in explorer.exe. When you send a file, a window opens asking which folder on the server you want to send the file to. The window will also describe the limits and how much you can upload. The site never needs to be open. When you upload the file, it is transferred over the FTP protocol, which is better for performance.

    How this API makes things faster: suppose you want to ask a question and post an image. You just send it ahead of time and have it ready; when you open stackoverflow.com, the site only has to ask which already-uploaded file you want to attach to the question you are asking. A second use is for people who use cloud apps: there is no need for drag and drop any more, and no need to open the site either.

    This is still at the experimental stage. I will update this post when I make some progress on this API.

    Read the article

  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, then a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object.

    The XML Data

    An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes, namely ProductId, ProductName, IntroductionDate and Price.

        <Products>
          <Product ProductId="1"
                   ProductName="Haystack Code Generator for .NET"
                   IntroductionDate="07/01/2010" Price="799" />
          <Product ProductId="2"
                   ProductName="ASP.Net Jumpstart Samples"
                   IntroductionDate="05/24/2005" Price="0" />
          ...
          ...
        </Products>

    The Product Class

    Just as you create an Entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with properties for each of the attributes in each element of product data. The following code listing shows the Product class.

        public class Product : CommonBase
        {
          public const string XmlFile = @"Xml/Product.xml";

          private string _ProductName;
          private int _ProductId;
          private DateTime _IntroductionDate;
          private decimal _Price;

          public string ProductName
          {
            get { return _ProductName; }
            set {
              if (_ProductName != value) {
                _ProductName = value;
                RaisePropertyChanged("ProductName");
              }
            }
          }

          public int ProductId
          {
            get { return _ProductId; }
            set {
              if (_ProductId != value) {
                _ProductId = value;
                RaisePropertyChanged("ProductId");
              }
            }
          }

          public DateTime IntroductionDate
          {
            get { return _IntroductionDate; }
            set {
              if (_IntroductionDate != value) {
                _IntroductionDate = value;
                RaisePropertyChanged("IntroductionDate");
              }
            }
          }

          public decimal Price
          {
            get { return _Price; }
            set {
              if (_Price != value) {
                _Price = value;
                RaisePropertyChanged("Price");
              }
            }
          }
        }

    NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged event in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post.

    Reading Data

    When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the "Product" Descendants in the XML file. The "select" portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing each attribute name to the Attribute() method and retrieving the data from the "Value" property. The Value property will return a null if there is no data, or will return the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class.

        private void LoadProducts()
        {
          XElement xElem = null;

          try
          {
            xElem = XElement.Load(Product.XmlFile);

            // The following will NOT work if you have missing attributes
            var products =
                from elem in xElem.Descendants("Product")
                orderby elem.Attribute("ProductName").Value
                select new Product
                {
                  ProductId = Convert.ToInt32(elem.Attribute("ProductId").Value),
                  ProductName = Convert.ToString(elem.Attribute("ProductName").Value),
                  IntroductionDate = Convert.ToDateTime(elem.Attribute("IntroductionDate").Value),
                  Price = Convert.ToDecimal(elem.Attribute("Price").Value)
                };

            lstData.DataContext = products;
          }
          catch (Exception ex)
          {
            MessageBox.Show(ex.Message);
          }
        }

    This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in real handy.

    Using Extension Methods

    Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null-checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file.

        private void LoadProducts()
        {
          var xElem = XElement.Load(Product.XmlFile);

          var products =
              from elem in xElem.Descendants("Product")
              orderby elem.Attribute("ProductName").Value
              select new Product
              {
                ProductId = elem.Attribute("ProductId").GetAsInteger(),
                ProductName = elem.Attribute("ProductName").GetAsString(),
                IntroductionDate = elem.Attribute("IntroductionDate").GetAsDateTime(),
                Price = elem.Attribute("Price").GetAsDecimal()
              };

          lstData.DataContext = products;
        }

    Writing Extension Methods

    To create an extension method you will create a class with any name you like. In the code listing below is a class named XmlExtensionMethods. This listing just shows a couple of the available methods, such as GetAsString and GetAsInteger. These methods are just like any other method you would write, except when you pass in the parameter you prefix the type with the keyword "this". This lets the compiler know that it should add this method to the class specified in the parameter.

        public static class XmlExtensionMethods
        {
          public static string GetAsString(this XAttribute attr)
          {
            string ret = string.Empty;

            if (attr != null && !string.IsNullOrEmpty(attr.Value))
            {
              ret = attr.Value;
            }

            return ret;
          }

          public static int GetAsInteger(this XAttribute attr)
          {
            int ret = 0;
            int value = 0;

            if (attr != null && !string.IsNullOrEmpty(attr.Value))
            {
              if (int.TryParse(attr.Value, out value))
                ret = value;
            }

            return ret;
          }

          ...
          ...
        }

    Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, then a default value will be returned, such as an empty string or a 0 for a numeric value.

    Summary

    Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data. You will probably want to create more extension methods to handle XElement objects as well, for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value if a null value is detected, or any other parameters you wish.

    NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose "Tips & Tricks", then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down.

    Good luck with your coding,
    Paul D. Sheriff

    Read the article

  • Missing line number in stack trace eventhough the PDB files are included

    - by Farzad
    This is driving me nuts. I have a web service implemented with C# using VS 2008. I publish it on IIS. I have modified the release build so the PDB files are copied along with the DLLs into the target directory on inetpub. Also, the web.config file has debug=true. Then I call a web service method that throws an exception. The stack trace does not contain the line numbers. I have no idea what I am missing here; any ideas?

    Additional info: if I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (PDB and DLL) that the VS built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the PDB files!

    Update: when I publish to IIS, all the PDB files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files", under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no PDB files. This does not happen if I run the project using the VS built-in web server. And if I copy the PDB files manually to the temp folder, I can see the line numbers. Any idea why the PDB files are not copied to the temp folder? BTW, when I attach to the worker process I can see that it says Symbols loaded!

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers:

    - 1 central Linux server, a VPS
    - 8 satellite Linux servers, "crappy shared hostings"

    I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server, and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; no need to try to find "new" files in the satellite servers). There are a couple of catches, though:

    - I can't have any custom software on the satellite servers, or do strange command-line things that'll auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything.
    - I need to send the files over FTP.
    - I also need to have, on my central server, a list of the files that are available in each of the satellite servers, to make sure they are ready before I send traffic to them.

    If I were to do this manually, the steps would be:

    - get the list of files in a satellite server
    - compare to my own, and send the files that are missing
    - get the list of files again, and store it in my central database

    I'd like to know what tools are out there that can alleviate as much of this as possible, first the syncing, and then the "getting the list of files available in the other server". I'm going to be doing everything from PHP; I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
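    Not a tool recommendation, but the manual flow above maps directly onto plain FTP commands; a minimal sketch using Python's ftplib (PHP's ftp_connect/ftp_nlist/ftp_put functions follow the same shape - host, credentials and paths here are placeholders):

        #!/usr/bin/env python3
        """Push missing files from the central server to one satellite over FTP (sketch; credentials are placeholders)."""
        import os
        from ftplib import FTP

        def sync_to_satellite(local_dir, host, user, password, remote_dir):
            ftp = FTP(host)
            ftp.login(user, password)
            ftp.cwd(remote_dir)

            remote_files = set(ftp.nlst())                    # step 1: what the satellite already has
            local_files = set(os.listdir(local_dir))          # assumes a flat directory of files

            for name in sorted(local_files - remote_files):   # step 2: send only what is missing
                with open(os.path.join(local_dir, name), "rb") as f:
                    ftp.storbinary("STOR " + name, f)

            available = set(ftp.nlst())                       # step 3: re-list, ready to store centrally
            ftp.quit()
            return available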

    Read the article

  • Swt file dialog too much files selected?

    - by InsertNickHere
    Hi there, the SWT file dialog will give me an empty result array if I select too many files (approx. 2,500 files). The listing shows you how I use this dialog. If I select too many sound files, the syso will show 0. Debugging tells me that the files array is empty in this case. Is there any way to get this to work?

        FileDialog fileDialog = new FileDialog(mainView.getShell(), SWT.MULTI);
        fileDialog.setText("Choose sound files");
        fileDialog.setFilterExtensions(new String[] { new String("*.wav") });
        Vector<String> result = new Vector<String>();
        fileDialog.open();
        String[] files = fileDialog.getFileNames();
        for (int i = 0, n = files.length; i < n; i++) {
            if (!files[i].contains(".wav")) {
                System.out.println(files[i]);
            }
            StringBuffer stringBuffer = new StringBuffer();
            stringBuffer.append(fileDialog.getFilterPath());
            if (stringBuffer.charAt(stringBuffer.length() - 1) != File.separatorChar) {
                stringBuffer.append(File.separatorChar);
            }
            stringBuffer.append(files[i]);
            stringBuffer.append("");
            String finalName = stringBuffer.toString();
            if (!finalName.contains(".wav")) {
                System.out.println(finalName);
            }
            result.add(finalName);
        }
        System.out.println(result.size());

    Read the article

  • LuaEdit can't find module when Lua files all in the same folder

    - by joverboard
    I downloaded LuaEdit to use as an IDE and debug tool; however, I'm having trouble using it for even the simplest things. I've created a solution with 2 files in it, all of which are stored in the same folder. My files are as follows:

        --startup.lua
        require("foo")
        test("Testing", "testing", "one, two, three")

        --foo.lua
        foo = {}
        print("In foo.lua")
        function test(a,b,c)
            print(a,b,c)
        end

    This works fine in my C++ application when accessed through some embedding code; however, when I attempt to use the same code in LuaEdit, it crashes on line 3, require("foo"), with an error stating:

        module 'foo' not found:
            no field package.preload['foo']
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo\init.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo\init.lua'
            no file '.\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.dll'
            no file 'C:\Program Files (x86)\LuaEdit 2010\loadall.dll'
            no file '.\battle.dll'

    I have also tried creating these files prior to adding them to a solution and still get the same error. Is there some setting I'm missing? It would be great to have an IDE/debugger, but it's useless to me if it can't run linked functions.

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd, with a few files on it that I would very much like to recover:

    - some Word documents
    - some LaTeX documents (text files) and pictures (png, jpg, eps files)
    - some other text documents and Visual Studio project files

    I had backed them up (not the LaTeX ones, though) using svn, but have not committed lately, and would lose a lot of work if I can't recover. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using its intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed here, as TestDisk's wiki does not contain this error message afaik. I don't know if the hdd is going to fail or if some program has caused the filesystem to become corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (e.g. is it ok to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed ok, but not 100%.)

    Read the article

  • Subversion and Quickbooks Files

    - by Jorge Fernandez
    I currently have a large problem on one of the file servers I manage for an accounting firm. QuickBooks has a tendency to create multiple copies of the same file over and over to prevent data loss. This is a good thing when you handle just a few files, but at an accounting firm it becomes a problem. Some of the older clients have 5-10 files in their respective folders, each with a different cut-off date. Because of user error, some of these files aren't labeled properly with their correct cut-off dates. This is where Subversion came to mind: using the revision system would allow one file to be the master and carry all of its revisions. Has anyone ever tried this with QuickBooks files? I've only used SVN with application code, where each file is much smaller. How does SVN stand up with larger files like 10-25MB? I'm not exactly sure how SVN handles revisions - does it keep a duplicate of the files and thus duplicate the disk space needed?

    Read the article

  • How to Protect Sensitive (HIPAA) SQL Server Standard Data and Log Files

    - by Quesi
    I am dealing with electronic personal health information (ePHI or PHI), and HIPAA regulations require that only authorized users can access ePHI. Column-level encryption may be of value for some of the data, but I need the ability to do LIKE searches on some of the PHI fields, such as name. Transparent Data Encryption (TDE) is a feature of SQL Server 2008 for encrypting database and log files. As I understand it, this prevents someone who gains access to the MDF, LDF, or backup files from being able to do anything with them, because they are encrypted. TDE is only available in the Enterprise and Developer editions of SQL Server, and Enterprise is cost-prohibitive for my particular scenario. How can I get similar protection on SQL Server Standard? Is there a way to encrypt the database and backup files (is there a third-party tool)? Or, just as good, is there a way to prevent the files from being used if the disk were attached to another machine (Linux or Windows)? Administrator access to the files from the same machine is fine; I just want to prevent any issues if the disk were removed and hooked up to another machine. What are some of the solutions for this that are out there?

    Read the article
