Search Results

Search found 13584 results on 544 pages for 'digital root'.


  • Signable, streamable, "readable" archive format?

    - by alexvoda
    Is there any archive format that offers the following:
    - Digitally signable with a certificate from a trusted source like Verisign - to prevent changes to the file (I am not referring to read-only; if the file is changed, it should no longer validate as signed, telling the user this is not the original file)
    - Streamable - able to be opened even if not all of the content has been transferred (and not strictly linearly)
    - "Readable" - able to read the data without extracting to a temporary folder (AFAIK if you open a file in a zip archive it is extracted first, and this stays true even for zip-based formats like OOXML; this is not what I want)
    - Portable - support on at least Windows, Linux and Mac OS X is a must, or at least future support
    - Free of patents
    - Open source - preferably under a license that allows commercial use (the GPL is a share-alike license, so its copyleft terms apply; BSD, on the other hand, is permissive)
    Note: though it may come in handy eventually, I cannot think right now of a scenario that would require both point 1 and point 2 simultaneously. So let's relax it to: being able to check the signature only once the whole file has been downloaded. I am not interested in compression or in support on legacy systems. Does any existing archive format fit this description (tar evolutions like DAR and pax come to mind)? If there is, are there programming libraries available for the above-mentioned OSs? If not, would it be hard to create such a thing?
    EDIT: clarified point 5
    EDIT 2: added a note to clarify points 1 and 2
    P.S.: This is my first question on StackOverflow

    Read the article

  • Digitally sign MS Office (Word, Excel, etc..) and PDF files on the server

    - by Sébastien Nussbaumer
    I need to digitally sign MS Office and PDF files that are stored on a server. I really mean a digital signature that is integrated in the document, according to each specific file format. This is the process I had in mind:
    1. Create a hash of the file's content
    2. Send the hash to a custom-written Java applet in the browser
    3. The user encrypts the hash with his/her private key (on a USB token via PKCS#11, for example), thus effectively signing the file
    4. The applet sends the signature back to the server
    5. On the server, incorporate the signature into the file (MS Office and PDF formats can carry one without changing the file's content, probably by just setting some metadata field)
    What is cool is that you never have to download and upload the complete file again. What is even cooler, the customer doesn't need Office or a PDF writer to sign the files. Steps 2, 3 and 4 are OK for me; my company bought all the Java technology I need for that on a previous project I worked on. Problem: I can't seem to find any documentation/examples to do steps 1 and 5 for Office files. Are my Google skills failing me this time? Do you have any pointers to documentation or examples for doing that with MS Office files? The underlying technology isn't that important to me: I can use Java, .NET, COM - any working technology is OK! Note: I'm 95% sure I can nail steps 1 and 5 for PDF files using iText. Thanks
    Edit: If I can't do this with hashes and must download the complete file to the client, that's also possible. But then I still need the documentation for signing Office files... in Java this time (from an applet).
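
    A minimal sketch of the generic part of step 1 - hashing a file's content with the JDK's built-in MessageDigest (the class and path are placeholders). One caveat worth hedging: Office and PDF signatures are computed over format-specific byte ranges rather than the raw file bytes, which is exactly the part steps 1 and 5 need format documentation for.

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class FileHasher {
            // Streams the file through the digest in chunks, so large
            // documents never have to fit in memory at once.
            public static byte[] hashFile(String path)
                    throws IOException, NoSuchAlgorithmException {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                FileInputStream in = new FileInputStream(path);
                try {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        digest.update(buffer, 0, read);
                    }
                } finally {
                    in.close();
                }
                return digest.digest();
            }
        }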

    Read the article

  • What's up with LDoms: Part 2 - Creating a first, simple guest

    - by Stefan Hinker
    Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain. We saw how resources were put aside for guest systems and what infrastructure we need for them. With that, we are now ready to create a first, very simple guest domain. In this first example, we'll keep things very simple. Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO, as well as security. For now, let's start with this very simple guest. It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port. CPU and RAM are easy. The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain. This is very much like plugging a cable into a computer system on one end and a network switch on the other. For the boot disk, we'll need two things: a physical piece of storage to hold the data - this is called the backend device in LDoms speak - and a mapping between that storage and the guest domain, giving it access to that virtual disk. For this example, we'll use a ZFS volume for the backend. We'll discuss what other options there are for this and how to choose the right one in a later article. Here we go:
        root@sun # ldm create mars
        root@sun # ldm set-vcpu 8 mars
        root@sun # ldm set-mau 1 mars
        root@sun # ldm set-memory 8g mars
        root@sun # zfs create rpool/guests
        root@sun # zfs create -V 32g rpool/guests/mars.bootdisk
        root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk mars.root@primary-vds
        root@sun # ldm add-vdisk root mars.root@primary-vds mars
        root@sun # ldm add-vnet net0 switch-primary mars
    That's all, mars is now ready to power on. There are just three commands between us and the OK prompt of mars: we have to "bind" the domain, start it and connect to its console. Binding is the process where the hypervisor actually puts all the pieces that we've configured together. If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else). Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP. By default, the domain will then try to boot right away. If we don't want that, we can set "auto-boot?" to false. Finally, we'll use telnet to connect to the console of our newly created guest. The output of "ldm list" shows us what port has been assigned to mars. By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here.
        root@sun # ldm set-variable auto-boot\?=false mars
        root@sun # ldm bind mars
        root@sun # ldm start mars
        root@sun # ldm list
        NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
        primary  active  -n-cv-  UART  8     7680M   0.5%  1d 4h 30m
        mars     active  -t----  5000  8     8G      12%   1s
        root@sun # telnet localhost 5000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        ~Connecting to console "mars" in group "mars" ....
        Press ~? for control options ..
        {0} ok banner
        SPARC T3-4, No Keyboard
        Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
        OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131.
        Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50.
        {0} ok
    We're done, mars is ready to install Solaris, preferably using AI, of course ;-) But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here:
        {0} ok printenv auto-boot?
        auto-boot? = false
        {0} ok printenv boot-device
        boot-device = disk net
        {0} ok devalias
        root             /virtual-devices@100/channel-devices@200/disk@0
        net0             /virtual-devices@100/channel-devices@200/network@0
        net              /virtual-devices@100/channel-devices@200/network@0
        disk             /virtual-devices@100/channel-devices@200/disk@0
        virtual-console  /virtual-devices/console@1
        name             aliases
    We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked. Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started. The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image. The actual devices these aliases point to are shown with the command "devalias". Here, we have one line each for "disk" and "net". The device paths speak for themselves. Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device. These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk". Remember this, as it is very useful once you have several dozen disk devices... To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity. This should be enough to get you going. I will cover all the more advanced features and a little more theoretical background in several follow-on articles. For some background reading, I'd recommend the following links:
    - LDoms 2.2 Admin Guide: Setting up Guest Domains
    - Virtual Console Server: vntsd manpage - this includes the control sequences and commands available to control the console session.
    - OpenBoot 4.x command reference - all the things you can do at the ok prompt

    Read the article

  • Ruby on Rails app in root directory?

    - by Chris Leah
    Hey guys, new to Rails here. I've found out how to create an app through the shell, but creating an app with rails appname gives me a URL like http://url.com/appname/. I want my app to live at the root, if you understand me, so the URLs are just http://url.com/login/, /signup, /play and so on. Does anyone have any ideas how to do this, or why you can't or shouldn't? Anything really, thanks guys!
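
    The app name only shows up in the URL when the app is served from a sub-URI; deployed at a server's document root it disappears. A hedged sketch of the classic Apache + Passenger vhost that does this - the domain and paths are placeholders:

        <VirtualHost *:80>
            ServerName url.com
            # Point the document root at the app's public/ folder and
            # Passenger serves the app at /, so /login, /signup, /play
            # work without any app-name prefix.
            DocumentRoot /srv/appname/public
            <Directory /srv/appname/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>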

    Read the article

  • .NET XML serialization: how to specify an array's root element and child element names

    - by Jeremy
    Consider the following serializable classes:
        class Item { ... }
        class Items : List<Item> { ... }
        class MyClass
        {
            public string Name { get; set; }
            public Items MyItems { get; set; }
        }
    I want the serialized output to look like:
        <MyClass>
          <Name>string</Name>
          <ItemValues>
            <ItemValue></ItemValue>
            <ItemValue></ItemValue>
            <ItemValue></ItemValue>
          </ItemValues>
        </MyClass>
    Notice the element names ItemValues and ItemValue don't match the class names Items and Item. Assuming I can't change the Item or Items class, is there any way to specify the element names I want by modifying the MyClass class?
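
    One way that should work without touching Item or Items - a sketch using the System.Xml.Serialization array attributes on the property of MyClass:

        using System.Collections.Generic;
        using System.Xml.Serialization;

        public class Item { }
        public class Items : List<Item> { }

        public class MyClass
        {
            public string Name { get; set; }

            // XmlArray renames the wrapper element, XmlArrayItem renames
            // each child element; both apply to the property, so the
            // Item and Items classes stay unchanged.
            [XmlArray("ItemValues")]
            [XmlArrayItem("ItemValue")]
            public Items MyItems { get; set; }
        }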

    Read the article

  • getting base url of web site's root (absolute/relative url)

    - by uzay95
    I want to completely understand how to use relative and absolute URL addresses in static and dynamic files.
    ~  : the application root (resolved server-side in ASP.NET)
    .. : in a relative URL, indicates the parent directory
    .  : refers to the current directory
    /  : always replaces the entire pathname of the base URL
    // : always replaces everything from the hostname onwards
    These examples are easy when you are working without a virtual directory. But I am working in a virtual directory.
    Relative URI            Absolute URI
    about.html              http://WebReference.com/html/about.html
    tutorial1/              http://WebReference.com/html/tutorial1/
    tutorial1/2.html        http://WebReference.com/html/tutorial1/2.html
    /                       http://WebReference.com/
    //www.internet.com/     http://www.internet.com/
    /experts/               http://WebReference.com/experts/
    ../                     http://WebReference.com/
    ../experts/             http://WebReference.com/experts/
    ../../../               http://WebReference.com/
    ./                      http://WebReference.com/html/
    ./about.html            http://WebReference.com/html/about.html
    I want to simulate the site below, like my project, which runs in a virtual directory. These are my aspx and ascx folders:
    http://hostAddress:port/virtualDirectory/MainSite/ASPX/default.aspx
    http://hostAddress:port/virtualDirectory/MainSite/ASCX/UserCtrl/login.ascx
    http://hostAddress:port/virtualDirectory/AdminSite/ASPX/ASCX/default.aspx
    These are my JS files (which will be used both with the aspx and ascx files):
    http://hostAddress:port/virtualDirectory/MainSite/JavascriptFolder/jsFile.js
    http://hostAddress:port/virtualDirectory/AdminSite/JavascriptFolder/jsFile.js
    This is my static web page address (I want to show some pictures and run some JS functions inside):
    http://hostAddress:port/virtualDirectory/HTMLFiles/page.html
    This is my image folder:
    http://hostAddress:port/virtualDirectory/Images/PNG/arrow.png
    http://hostAddress:port/virtualDirectory/Images/GIF/arrow.png
    If I want to write an image file's link in my ASPX file, I should write:
        aspxImgCtrl.ImageUrl = Server.MapPath("~")+"/Images/GIF/arrow.png";
    But if I want to write the path hard-coded or from a JavaScript file, what kind of URL address should it be?
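
    One thing worth noting about that last line: Server.MapPath returns a physical disk path (C:\...), which a browser can't use in a URL. For server-side code, the "~" virtual-path syntax avoids hard-coding the virtual directory altogether - a short sketch using the control name from the question:

        // Server controls resolve "~" themselves:
        aspxImgCtrl.ImageUrl = "~/Images/GIF/arrow.png";
        // Elsewhere in server code:
        string url = Page.ResolveUrl("~/Images/GIF/arrow.png");

    Static HTML and .js files have no "~", so there the usual options are paths relative to the file itself (e.g. ../Images/GIF/arrow.png from HTMLFiles/page.html) or having a server page emit the application root into a JavaScript variable once.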

    Read the article

  • Heap Dump Root Classes

    - by Adnan Memon
    We have a production system going into an infinite loop of full GCs, and memory drops from 8 gigs to like 1 MB in just 2 minutes. After taking a heap dump, it tells me there is an array of java.lang.Object ([Ljava.lang.Object;) holding millions of java.lang.String objects with the same value, taking 99% of the heap. But it doesn't tell me which class is referencing this array so that I can fix it in the code. I took the heap dump using the jmap tool on JDK 6 and used JProfiler, NetBeans, SAP Memory Analyzer and IBM Memory Analyzer, but none of those tell me what is causing this huge array of objects... like what class references it or contains it. Do I have to take a different dump with different config in order to get that info? Or anything else that can help me find the culprit class - it would help a lot.
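
    Two things that often help here, hedged as general practice rather than a guaranteed fix: dump only live objects, and use a tool that can walk reference chains backwards. Eclipse MAT (the open-source successor of SAP Memory Analyzer) has a "Path To GC Roots" action and a dominator tree view for exactly this question. The PID below is a placeholder:

        # -dump:live forces a full GC first and keeps only reachable
        # objects, so the dump shows what is actually pinned in memory
        jmap -dump:live,format=b,file=heap.hprof <pid>

    Opening heap.hprof in MAT, right-clicking the large Object[] and choosing "Path To GC Roots" (excluding weak/soft references) should name the field and class holding it.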

    Read the article

  • Root Path Problem in asp.net?

    - by Surya sasidhar
    Hi, in my application there is a link button; when I click it, it redirects to another page. But when I click it on that page, it gives an error like this:
    Cannot use a leading .. to exit above the top directory.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Web.HttpException: Cannot use a leading .. to exit above the top directory.
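
    This error usually means a link contains a relative "../" that climbs above the application root once you are on a page at a different folder depth. An application-rooted path sidesteps it, since "~" is expanded server-side no matter which page the link sits on - a sketch with a placeholder page name:

        // "~" expands to the application root on the server, so the
        // same target works from any folder depth:
        Response.Redirect(ResolveUrl("~/SomePage.aspx"));

    Server controls take the same syntax directly, e.g. NavigateUrl="~/SomePage.aspx" on a HyperLink.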

    Read the article

  • Selenium RC test - IE gives 403 error on Tomcat app, Tomcat root OK

    - by Ed Daniel
    I'm new to Selenium RC, having previously used Selenium IDE and only run tests in Firefox. I'm trying to get a basic test to run using Selenium RC through Eclipse; my test works OK in Firefox, and in Safari now that I've killed the pop-up blocker, but IE8 causes a SeleniumException to be thrown, containing an "XHR ERROR" with a 403 response:
        com.thoughtworks.selenium.SeleniumException: XHR ERROR: URL = http://localhost:8080/pims Response_Code = 403 Error_Message = Forbidden
            at com.thoughtworks.selenium.HttpCommandProcessor.throwAssertionFailureExceptionOrError(HttpCommandProcessor.java:97)
            at com.thoughtworks.selenium.HttpCommandProcessor.doCommand(HttpCommandProcessor.java:91)
            at com.thoughtworks.selenium.DefaultSelenium.open(DefaultSelenium.java:335)
            at org.pimslims.seleniumtest.FirstTest.testNew(FirstTest.java:32)
            ...
    I can run a similar test against http://localhost:8080 and it's fine - I can make IE open that Tomcat default page and click a link. It's only if I try to open my application at http://localhost:8080/pims that I see this error - and only in IE. I can open that URL in IE by typing it into the address bar. I was convinced that there's some setting in IE that's causing this, but I've tried everything I can think of. http://localhost:8080 is in my Trusted Sites, and I've turned the security for that zone down to the minimum, allowed anything that looks related to popups, etc. If I try adding http://localhost:8080/pims/ to Trusted Sites, IE says it's already there. I've also messed around with proxy settings, to no avail, but may have missed something obvious. I've tried starting the test with *iexplore, *iehta, and *iexploreproxy - all behave the same. Is there something I've missed? For reference, here is my test case - this works as is, in Firefox, opening the PIMS application's index page and clicking a link:
        public class FirstTest extends SeleneseTestCase {
            @Override
            public void setUp() throws Exception {
                this.setUp("http://localhost:8080/", "*firefox");
            }
            public void testNew() throws Exception {
                final Selenium s = this.selenium;
                s.open("/pims");
                s.click("logInOutLink");
                s.waitForPageToLoad("30000");
            }
        }
    Any help is greatly appreciated!

    Read the article

  • Background color for Tk in Python

    - by olofom
    I'm writing a slideshow program with Tkinter, but I don't know how to change the background color to black instead of the standard light gray. How can this be done?
        import os, sys
        import Tkinter
        import Image, ImageTk
        import time

        root = Tkinter.Tk()
        w, h = root.winfo_screenwidth(), root.winfo_screenheight()
        root.overrideredirect(1)
        root.geometry("%dx%d+0+0" % (w, h))
        root.focus_set()
        root.bind("<Escape>", lambda e: e.widget.quit())
        # image_path and f are defined elsewhere in the program
        image = Image.open(image_path + f)
        tkpi = ImageTk.PhotoImage(image)
        label_image = Tkinter.Label(root, image=tkpi)
        label_image.place(x=0, y=0, width=w, height=h)
        root.mainloop(0)
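
    A minimal sketch of the fix: both the root window and any widget drawn on it take a background option, and both need it or the widget's default gray shows through (standalone example, Python 2 Tkinter as in the question):

        import Tkinter

        root = Tkinter.Tk()
        root.configure(background='black')   # the window itself

        # widgets placed on top should match, or their default gray
        # background shows around the image
        label = Tkinter.Label(root, text='slide goes here',
                              background='black', foreground='white')
        label.pack(expand=True, fill='both')
        root.mainloop()

    In the question's code, that would be root.configure(background='black') plus background='black' on label_image.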

    Read the article

  • objective-c Add to/Edit .plist file

    - by Dave
    Does writeToFile:atomically: add data to an existing .plist? Is it possible to modify a value in a .plist? Should the app be recompiled for the change to take effect? I have a custom .plist in my app. The structure is as below:
        <array>
            <dict>
                <key>Title</key>
                <string>MyTitle</string>
                <key>Measurement</key>
                <dict>
                    <key>prop1</key>
                    <real>28.86392</real>
                    <key>prop2</key>
                    <real>75.12451</real>
                </dict>
                <key>Distance</key>
                <dict>
                    <key>prop3</key>
                    <real>37.49229</real>
                    <key>prop4</key>
                    <real>58.64502</real>
                </dict>
            </dict>
        </array>
    The array tag holds multiple items with the same structure. I need to add items to the existing .plist. Is that possible? I can write a UIView to do just that, if so.
    EDIT - OK, I just tried writeToFile: hoping it would at least overwrite, if not append. Strangely, the following code selects different paths while reading and writing:
        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    The .plist file is in my project. I can read from it. When I write to it, it creates a .plist file and saves it in my /users/.../library/! Does this make sense to anyone?
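
    What is likely happening: a .plist added to the project ships inside the app bundle, which is read-only at runtime, while NSDocumentDirectory points at the writable Documents folder - two different locations, hence the different paths for reading and writing. The usual pattern is to copy the bundled file to Documents on first launch and read/write it there. A hedged sketch; the file name "data.plist" is a placeholder:

        // Copy the bundled plist into Documents once, then always
        // read and write the Documents copy.
        NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                              NSUserDomainMask, YES) objectAtIndex:0];
        NSString *target = [docs stringByAppendingPathComponent:@"data.plist"];
        NSFileManager *fm = [NSFileManager defaultManager];
        if (![fm fileExistsAtPath:target]) {
            NSString *bundled = [[NSBundle mainBundle] pathForResource:@"data"
                                                                ofType:@"plist"];
            [fm copyItemAtPath:bundled toPath:target error:NULL];
        }

        NSDictionary *newItem = [NSDictionary dictionaryWithObjectsAndKeys:
                                    @"MyTitle", @"Title", nil];
        NSMutableArray *items = [NSMutableArray arrayWithContentsOfFile:target];
        [items addObject:newItem];
        [items writeToFile:target atomically:YES];   // rewrites the whole file

    Note that writeToFile:atomically: always rewrites the whole file; it never appends in place.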

    Read the article

  • How should I change the root for a mod_rewrite URL when I work on localhost

    - by Rajasekar
    I am working on site maintenance. The site uses mod_rewrite, but I'm new to mod_rewrite. How should I change the URL so it works correctly on my localhost? Here's the code:
        # Enable mod_rewrite, start rewrite engine
        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^electricians4u.com.au [NC]
        RewriteRule ^(.*)$ http://www.electricians4u.com.au/$1 [R=301,NC]
        ErrorDocument 404 /error404.php
        # for searching
        RewriteRule ^([^/]*)-in-([^/]*)\.htm$ /search.php?searchby=$1&SearchString=$2&search.x=$3&search.y=$4&search=Find+Agent [NC]
        # for nav
        RewriteRule ^electricians-in-([^/]*)-([^/]*)$ /search.php?SearchString=$1&state=&page=$2 [NC]
        # index page
        RewriteRule ^find-electrician-(.*)$ /find_electrician_in.php?state=$1 [NC,L]
        # find page
        RewriteRule ^electrician-(.*)-(.*)$ /find_electrician_in.php?state=$1&bspname=$2 [NC,L]
        # find page
        RewriteRule ^electricians-in-([^/]*)\.htm$ /search.php?state=$1&bspname=$2&locality=$3 [NC]
    Please help. I know this is a silly question to ask, but I don't know any alternative.
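
    For what it's worth, the host-redirect block at the top shouldn't fire on localhost at all - its RewriteCond only matches the live domain. The usual localhost breakage with rules like these is the path: if the site sits in a subdirectory locally (http://localhost/mysite/), the patterns and the leading-slash targets no longer line up. A hedged sketch of the common fix, assuming a local folder name of "mysite":

        # in .htaccess: anchor the rules to the local subdirectory
        RewriteBase /mysite/
        # and make rule targets relative (no leading slash), e.g.:
        RewriteRule ^find-electrician-(.*)$ find_electrician_in.php?state=$1 [NC,L]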

    Read the article

  • Investigating the root cause behind SharePoint's "request not found in the TrackedRequests"

    - by Muhimbi
    We have a long-standing issue in our bug tracking system about the dreaded "ERROR: request not found in the TrackedRequests. We might be creating and closing webs on different threads." message in SharePoint's trace log. As we develop workflow software for the SharePoint market, we look into this issue from time to time to make sure it is not caused by our products. I have personally come to the conclusion that this is a problem in SharePoint, but perhaps someone else can prove me wrong. Here is what I know:
    - According to the hundreds of search results returned by Google on this topic, this issue appears to be mainly related to SharePoint workflows, both SharePoint Designer and Visual Studio based workflows.
    - Assuming ULS logging is set to Monitorable, the easiest way to reproduce this problem is to create a new SharePoint Designer workflow, attach it to a document library, set it to auto start on add/update, don't add any actions, save the workflow and upload a file to the document library.
    - The error is only visible in the SharePoint trace log; it does not appear to impact the execution of the workflow at hand.
    - I have verified that the problem occurs on 32 bit as well as 64 bit systems, Win2K3 and 2K8, WSS and MOSS, with SharePoint versions up to the December 2009 Cumulative Update (6524).
    - The problem does not occur when a workflow is started manually.
    - There are dozens of related posts on MSDN Forums, hundreds on Google, one on StackOverflow and none on SharePoint Overflow. There appears to be no answer.
    Does anyone have any idea what is going on, what is causing this, and whether we should worry or file this under 'Red Herrings'?

    Read the article

  • problem showing pictures stored outside web root folder

    - by David
    On a website users can upload pictures. For security reasons these are stored outside the webroot (public_html) folder. When I need to display a picture, I send the headers and have readfile read and output the picture data, like so:
        header("Pragma: public");
        header("Expires: 0"); // set expiration time
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header('Content-type: image/jpg');
        header('Content-Length: ' . $filesize);
        readfile($path_url . '/' . $photo);
    This works great, but the site is growing and this is starting to be a burden on the server. Question: is there a way to send the picture or picture data to the user without the server first having to load the picture (obviously with the picture still being stored outside the webroot folder)? Thanks! David
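
    One common way to take PHP out of the data path, assuming Apache with the mod_xsendfile module installed (nginx has the equivalent X-Accel-Redirect): the script performs the access check and sets a header, and the web server itself streams the file from outside the webroot.

        <?php
        // PHP only authorizes the request; Apache serves the bytes,
        // including Content-Length and range support.
        header('Content-Type: image/jpeg');
        header('X-Sendfile: ' . $path_url . '/' . $photo);
        exit; // no readfile() - mod_xsendfile supplies the body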

    Read the article

  • Perl "Day too big" - root cause

    - by azp74
    I have been helping someone debug some code where the error message was "Day too big". I know that this springs from localtime and the Y2038 bug (most Google results are people dealing with cookies expiring well into the future). We appear to have "fixed" the problem by using time to get the current date. However, given that none of our original dates should have hit the 2038 issue, I'm sceptical that we've actually fixed the problem... Are there other instances that anyone knows of where one would hit "Day too big"?
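
    For reference, the message itself is croaked by Time::Local when the target date overflows the platform's time_t, so any code path that round-trips a date through timelocal/timegm can raise it - not just far-future cookie expiries. A sketch that should reproduce it on a 32-bit Perl (on a 64-bit build it simply succeeds):

        use strict;
        use warnings;
        use Time::Local qw(timegm);

        # 140 is years-since-1900, i.e. 2040 - past the January 2038
        # rollover, so a 32-bit time_t dies with "Day too big - ..."
        my $t = timegm(0, 0, 0, 1, 0, 140);
        print "seconds since epoch: $t\n";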

    Read the article

  • How to include file outside document root?

    - by Brayn
    Hey, what I want to do is include 'file1.php' from 'domain1' in 'file2.php' on 'domain2' (both vhosts on the same server). So what I figured I should do is something like this:
    file2.php:
        require_once '/var/www/vhosts/domain1/httpdocs/file1.php';
    But this won't work, for reasons I can't truly grasp. So what I did was add the path to the include path:
    file2.php:
        set_include_path(get_include_path() . PATH_SEPARATOR . "/var/www/vhosts/domain1/httpdocs");
        require_once 'file1.php';
    So can you please give me some hints as to where I'm going wrong? Thanks
    UPDATE - Either way I get the following error message:
        Fatal error: require() [function.require]: Failed opening required '/var/www/vhosts/domain1/httpdocs/file1.php' (include_path='.:/php/includes:/usr/share/pear/') in /var/www/vhosts/domain2/httpdocs/file2.php on line 4
    Also I have tried this both with safe_mode On and Off.
    UPDATE 2: I've also changed the permissions to 777 on my test file and double-checked the paths to the include file in bash.
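
    On vhost layouts like /var/www/vhosts/... (a Plesk convention), a frequent culprit is PHP's open_basedir restriction, which confines each vhost to its own tree and blocks cross-vhost includes even with permissive file modes. A quick hedged check from file2.php:

        <?php
        // If this prints a path list that excludes domain1's tree,
        // open_basedir is what is rejecting the include.
        var_dump(ini_get('open_basedir'));
        var_dump(is_readable('/var/www/vhosts/domain1/httpdocs/file1.php'));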

    Read the article

  • Why are Mercurial subrepos behaving as unversioned files in Eclipse AND TortoiseHG

    - by noam
    I am trying to use the subrepo feature of Mercurial, using the Mercurial Eclipse plugin\TortoiseHG. These are the steps I took:
    1. Created an empty dir /root
    2. Cloned all repos that I want to be subrepos inside this folder (/root/sub1, /root/sub2)
    3. Created and added the .hgsub file in the root repo (/root/.hgsub) and put all the mappings of the subrepos in it
    4. Using TortoiseHG, right-clicked on /root and selected "create repository here"
    5. Again with TortoiseHG, selected all the files inside /root and added them to the root repo
    6. Committed the root repo
    7. Pushed the local root repo into an empty repo I have set up on Kiln
    Then I pulled the root repo in Eclipse, using import-mercurial. Now I see that all the subrepos appear as though they are unversioned (no "orange cylinder" icon next to their corresponding folders in the Eclipse file explorer). Furthermore, when I right-click one of the subrepos, I don't get all the hg commands in the "Team" menu that I usually get with root projects - no "pull", "push" etc. Also, when I made a change to a file in a subrepo and then "committed" the root project, it told me there were no changes found. I see the same behavior in TortoiseHG - when I browse files under /root, the files belonging directly to the root repo have a small icon (a V sign) on them marking that they are version controlled, while the subrepos' folders aren't marked as such. Am I doing something wrong, or is it a bug?
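
    For comparison, a minimal command-line version of the setup the Mercurial subrepo documentation describes - parent repository first, then the .hgsub mapping, then a commit (URLs are placeholders):

        # create the parent repo BEFORE adding subrepo content
        hg init root
        cd root
        hg clone http://example.com/sub1 sub1
        echo "sub1 = http://example.com/sub1" > .hgsub
        hg add .hgsub
        hg commit -m "add subrepo mapping"   # also records .hgsubstate

    GUI support for subrepos has historically lagged behind the command line, so running "hg status" and "hg commit" in a shell is a useful cross-check on what the plugins show.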

    Read the article

  • asp.net root paths

    - by dejavu
    I am getting this exception when trying to save a file:
        System.Web.HttpException: The SaveAs method is configured to require a rooted path, and the path '~/Thumbs/TestDoc2//small/ImageExtractStream.bmp' is not rooted.
            at System.Web.HttpPostedFile.SaveAs(String filename)
            at System.Web.HttpPostedFileWrapper.SaveAs(String filename)
            at PitchPortal.Core.Extensions.ThumbExtensions.SaveSmallThumb(Thumb image) in C:\Users\Bich Vu\Documents\Visual Studio 2008\Projects\PitchPortal\PitchPortal.Core\Extensions\ThumbExenstions.cs:line 23
    The code is below:
        public static void SaveSmallThumb(this Thumb image)
        {
            var logger = Microsoft.Practices.ServiceLocation.ServiceLocator.Current.GetInstance<ILoggingService>();
            string savedFileName = HttpContext.Current.Server.MapPath(Path.Combine(
                image.SmallThumbFolderPath,
                Path.GetFileName(image.PostedFile.FileName)));
            try
            {
                image.PostedFile.SaveAs(savedFileName);
            }
            catch (Exception ex)
            {
                logger.Log(ex.ToString());
            }
        }
    I can't see what is wrong here, any tips?
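
    One clue is in the exception text itself: the path reaching SaveAs still begins with "~/", so on the failing code path the virtual path was never mapped, even though the code shown does call MapPath (note also the stray double slash in "TestDoc2//small", which suggests an empty segment when SmallThumbFolderPath is built). A hedged, defensive rewrite - property names are the question's own:

        // Normalize the accidental empty segment, map the virtual
        // path to a rooted filesystem path, then append the file name.
        string virtualFolder = image.SmallThumbFolderPath.Replace("//", "/");
        string folder = HttpContext.Current.Server.MapPath(virtualFolder);
        string savedFileName = Path.Combine(folder,
            Path.GetFileName(image.PostedFile.FileName));
        image.PostedFile.SaveAs(savedFileName);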

    Read the article

  • Problem with ActionScript 3.0 button to URL and root movieclip

    - by aarontb
    Okay, so here's the problem. I'm creating a Flash site with each page being its own movieclip, and Scene 1 being the menu and other things that stay on the site. I've created a MovieClip called 'HowWorksScene'. The movieclip has 2 buttons that link out to different URLs; I'm sure that once one button's script works, the same script will work for the other. So here's the code I'm having problems with:
        stop();
        VidDemo_btn.addEventListener(MouseEvent.CLICK, video);
        function video(event:MouseEvent):void {
            var link:URLRequest = new URLRequest('www.youtube.com');
            navigateToURL(link);
        }
    Problem is that I cannot GET to that frame to even determine an error. The problem preventing me from getting to this point is a call function. In the "HomePage" movieclip, when the button is pressed to go to the next scene, "HomePage" fades out and flies left; the next frame is 1 frame but activates the next movieclip "HowWorksScene"... but without errors, it simply goes to frame 17 of "HomePage". I've tried doing:
        _root.gotoAndPlay(17);
    but get an undefined error. So, I guess my question is: what is the BEST way to direct from within a movieclip to a frame in the parent scene? I've even tried using:
        gotoAndPlay(17, "Scene 1");
    And that still did not work. Please let me know ASAP!
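
    Two ActionScript 3 details may be at play, sketched below: _root is ActionScript 2 syntax and no longer exists in AS3 (hence the undefined error), and a URLRequest without a scheme is resolved relative to the page hosting the SWF rather than as an external site.

        // AS3: root is typed DisplayObject, so cast it to MovieClip
        // before calling timeline methods from inside a child clip:
        MovieClip(root).gotoAndPlay(17);
        // or jump to a frame in a named scene on the main timeline:
        MovieClip(root).gotoAndPlay(1, "Scene 1");

        // give URLs an explicit scheme, or Flash treats them as relative:
        navigateToURL(new URLRequest("http://www.youtube.com"));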

    Read the article

  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to be booting and hosting its rootfs on an SSD disk. We are currently looking at using Intel X18-M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?
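
    Independent of the partitioning question, one mount-level option that generally helps on flash is noatime, which stops every read from generating a metadata write. A hedged /etc/fstab sketch - device names, layout and the ext4 choice are assumptions, not recommendations for the X18-M specifically:

        # noatime: reads no longer dirty inode metadata
        # separate /var and /var/log keep the write-heavy trees away
        # from the mostly-static root
        /dev/sda1  /         ext4  noatime,errors=remount-ro  0  1
        /dev/sda2  /var      ext4  noatime                    0  2
        /dev/sda3  /var/log  ext4  noatime,commit=60          0  2

    (commit=60 batches journal flushes at 60-second intervals, trading a little durability for fewer write bursts.)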

    Read the article
