Search Results

Search found 59686 results on 2388 pages for 'data dump'.

Page 144/2388 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Programming Pearls (2nd Edition) vs More Programming Pearls: Confessions of a Coder [closed]

    - by Geek
    I have been reading very good reviews of the books by Jon Bentley: Programming Pearls (2nd Edition) and More Programming Pearls: Confessions of a Coder. I know that these books have been out there for a long time and I feel bad that I haven't read either one, but it is always better late than never. I understand that the second one was written after the first. So are these two books complementary to each other? Does the second one assume that the reader has read the first one? For someone who hasn't read either, which one would you propose reading first?

    Read the article

  • TechEd 2010 Thanks and Demos

    - by Adam Machanic
    Thank you to everyone who attended my three sessions at this year's TechEd show in New Orleans. I had a great time presenting and answering the really great questions posed by attendees. My sessions were: DAT317 T-SQL Power! The OVER Clause: Your Key to No-Sweat Problem Solving Have you ever stared at a convoluted requirement, unsure of where to begin and how to get there with T-SQL? Have you ever spent three days working on a long and complex query, wondering if there might be a better way? Good...(read more)

    Read the article

  • Creating an Entity Data Model using the Model First approach

    - by nikolaosk
    This is going to be the second post of a series of posts regarding Entity Framework and how we can use Entity Framework version 4.0's new features. You can read the first post here. In order to follow along you must have some knowledge of C# and know what an ORM system is and what kind of problems Entity Framework addresses. It will be handy to know how to work inside the Visual Studio 2010 IDE. I have a post regarding ASP.Net and EntityDataSource; you can read it here. I have 3 more posts on Profiling...(read more)

    Read the article

  • changing ext4 journal data mode with remount?

    - by Amos Shapira
    I'm tweaking an ext4 file system for speed, one tweak at a time. The first tweak is to change from "data=ordered" to "data=writeback". To test this, I execute "mount -n -o remount,data=ordered /", but I keep getting "mount: / not mounted already, or bad option". From lots of googling I found many questions about similar problems and one answer (circa 2001, for ext3) which says that you can't change the journal mode with remount. Is this limitation still current?
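    As a sanity check (my suggestion, not part of the original question), you can confirm which mount options are actually in effect for / by reading /proc/mounts; a minimal Python sketch:

        # journal_mode.py - illustrative sketch: print the active mount options
        # for / so you can confirm whether a data= change actually took effect.
        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mountpoint, fstype, options = line.split()[:4]
                if mountpoint == "/":
                    # ext3/ext4 typically omit data= here when the default
                    # (data=ordered) is in use.
                    print(fstype, options)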

    Read the article

  • How to delete Chrome temp data (history, cookies, cache) using command line

    - by Dio Phung
    On Windows 7, I tried running this script but it still does not clear Chrome's temp data. Can someone figure out what's wrong with the script? Where does Chrome store its history and cache? Thanks

        ECHO --------------------------------------
        ECHO **** Clearing Chrome cache
        taskkill /F /IM "chrome.exe">nul 2>&1
        set ChromeDataDir=C:\Users\%USERNAME%\AppData\Local\Google\Chrome\User Data\Default
        set ChromeCache=%ChromeDataDir%\Cache>nul 2>&1
        del /q /s /f "%ChromeCache%\*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*Cookies*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*History*.*">nul 2>&1
        set ChromeDataDir=C:\Users\%USERNAME%\Local Settings\Application Data\Google\Chrome\User Data\Default
        set ChromeCache=%ChromeDataDir%\Cache>nul 2>&1
        del /q /s /f "%ChromeCache%\*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*Cookies*.*">nul 2>&1
        del /q /f "%ChromeDataDir%\*History*.*">nul 2>&1
        ECHO **** Clearing Chrome cache DONE
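    For comparison, here is a rough Python sketch of the same cleanup (my own illustration, not from the question; the profile path assumes a default Windows 7 install, and Chrome must be closed first):

        # clear_chrome.py - hedged sketch: delete Chrome's cache, cookie and
        # history files for the current user. Close Chrome before running.
        import glob
        import os
        import shutil

        profile = os.path.expandvars(
            r"C:\Users\%USERNAME%\AppData\Local\Google\Chrome\User Data\Default")

        # Remove the whole cache directory tree.
        shutil.rmtree(os.path.join(profile, "Cache"), ignore_errors=True)

        # Remove the cookie and history database files.
        for pattern in ("*Cookies*", "*History*"):
            for path in glob.glob(os.path.join(profile, pattern)):
                if os.path.isfile(path):
                    os.remove(path)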

    Read the article

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine, but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. Then last night. Today it has happened three times so far. The timeline is fairly predictable when it happens:

    - Trans log backups.
    - Login failure for "user2".
    - Minidump.
    - Another minidump for the scheduler.
    - Repeated 17883 events.
    - Server fails little by little until it won't accept any requests.
    - A reboot is all that gets us going again (a band-aid).

    Interestingly, though, the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management Studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883s again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883s. Our entire tech team here is scratching heads... some ideas we're kicking around:

    - An MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't roll back all the way.
    - The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. The problem with this theory is that there isn't significant CPU usage. At any rate, we're not ruling a SQL 2005 bug out at all.
    - Maybe we added one too many import processes and have reached our limit on this box (hard to believe).

    Looking at SQLDUMP0151.log at the time of one of the crashes, there are some login failures and then two stack dumps: first a normal stack dump, then a scheduler dump. Here's a snippet:

        2009-11-10 11:59:14.95 spid63 Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required.
        2009-11-10 11:59:15.09 spid63 Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
        2009-11-10 12:02:33.24 Logon Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:02:33.24 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:08:21.12 Logon Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:08:21.12 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:13:49.38 Logon Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:13:49.38 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:15:16.88 Logon Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:15:16.88 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:18:24.41 Logon Error: 18456, Severity: 14, State: 16.
        2009-11-10 12:18:24.41 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
        2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5'
        2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt
        2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
        2009-11-10 12:18:39.02 spid111 * *****************************************************************************
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP:
        2009-11-10 12:18:39.02 spid111 *   11/10/09 12:18:39 spid 111
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 *   Exception Address = 0159D56F Module(sqlservr+0059D56F)
        2009-11-10 12:18:39.02 spid111 *   Exception Code    = c0000005 EXCEPTION_ACCESS_VIOLATION
        2009-11-10 12:18:39.02 spid111 *   Access Violation occurred writing address 00000000
        2009-11-10 12:18:39.02 spid111 *   Input Buffer 138 bytes -
        2009-11-10 12:18:39.02 spid111 *    " N R S C _ P T A   22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00
        2009-11-10 12:18:39.02 spid111 *    C _ Q A . d b o .   43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00
        2009-11-10 12:18:39.02 spid111 *    U s p S e l N e x   55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00
        2009-11-10 12:18:39.02 spid111 *    t A c c o u n t     74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00
        2009-11-10 12:18:39.02 spid111 *    @ i n t F o r m I   0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49
        2009-11-10 12:18:39.02 spid111 *    D   & 8 @ t x       00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00
        2009-11-10 12:18:39.02 spid111 *    t A l i a s §       74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04
        2009-11-10 12:18:39.02 spid111 *    Ð   GQE9732         d0 00 00 07 00 47 51 45 39 37 33 32
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 *
        2009-11-10 12:18:39.02 spid111 *  MODULE       BASE      END       SIZE
        2009-11-10 12:18:39.02 spid111 * sqlservr      01000000  02C09FFF  01c0a000
        2009-11-10 12:18:39.02 spid111 * ntdll         7C800000  7C8C1FFF  000c2000
        2009-11-10 12:18:39.02 spid111 * kernel32      77E40000  77F41FFF  00102000

    Read the article

  • Solving Null Entity Problems with JPA Data Controls in PS1

    - by shay.shmeltzer
    It turns out there is a slight bug that seems to prevent you from doing interactions (update, scroll) with the results of a JPA named query that you dropped on a page using ADF Binding. People are running into this when they do the EJB tutorial on OTN, for example. The problem is that the way the binding is set up for you automatically doesn't allow you to actually access the iterator's set of records to do follow-up operations. When I last checked, this was solved in the next release of JDeveloper, but in the meantime there is a quick, simple way to resolve the issue by changing the refresh condition of the iterator in your page binding. Here is a little demo that shows the problem and the solution:

    Read the article

  • SQL SERVER - Force Index Scan on Table - Use No Index to Retrieve the Data - Query Hint

    Recently I received the following two questions from readers, and both questions have very similar answers. Question 1: I have a unique requirement where I do not want to use any index of the table; how can I achieve this? Question 2: Currently my table uses a clustered index and does a seek operation; how can I convert [...]

    Read the article

  • Implementing Silverlight Coverflow with ADO.NET/WCF Data Services in SharePoint 2010


    Read the article

  • MSFT Excel pivot table links to external data

    - by dreftymac
    Question 1: What is the best online forum to ask MSFT Excel questions of the following variety? Question 2: How can I link an Excel pivot table to a potentially changing source table without having to re-draw the pivot table layout every time the source table's data changes? (Note: the columns are not changing, just the data in the rows.) Background: I have an Excel 2007 pivot table that is grabbing data from another sheet (an Excel "table"; tables are a new feature of Excel 2007). When I change the data in the source table and then go to the pivot table and press "refresh", the pivot table reverts to its "blank" format and requires me to re-drag the columns, rows and values. What I want is for the pivot table to simply re-draw itself without me having to re-create the pivot table layout.

    Read the article

  • Ubuntu hard drive repartition without uninstalling Ubuntu or Windows 7 or losing the data on the drive

    - by user141692
    I have an Asus R500V with a 750 GB GPT disk, a UEFI motherboard, a Core i7 3610QM and an Nvidia GeForce GT, with Ubuntu and Windows 7 in dual boot. I had problems installing Ubuntu because of GRUB, but I fixed that with https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/807801. However, I still get "warning: the partition is misaligned by 3072 bytes. This may result in very poor performance. Repartitioning is suggested" on every Linux partition I made, and my 750 GB is not being used at maximum capacity; only 698 GB is in use. I want to make partitions so that the warning doesn't show up and I can use the full capacity of the HDD, as I did with another dual-boot laptop (Compaq Presario CQ40). I have the following partitions:

    - Unknown, 1.0 MB: partition type: Linux basic data partition, device: /dev/sda2, usage: --, partition flags: --, partition label: --. Warning: the partition is misaligned by 3072 bytes; this may result in very poor performance; repartitioning is suggested.
    - SYSTEM, 210 MB FAT: usage: filesystem, partition type: EFI system partition, partition flags: --, label: SYSTEM, device: /dev/sda1, partition label: EFI system partition, capacity: 210 MB, available: --, mount point: mounted at /boot/efi.
    - 134 MB NTFS: usage: filesystem, partition type: Linux basic data partition, partition flags: --, device: /dev/sda7, partition label: --, capacity: 134 MB, available: --, mount point: not mounted.
    - OS, 250 GB NTFS: usage: filesystem, partition type: Linux basic data partition, partition flags: --, type: NTFS, label: OS, device: /dev/sda3, partition label: Basic data partition, capacity: 250 GB, available: --, mount point: not mounted.
    - 10 GB FAT32: usage: filesystem, partition type: EFI system partition, partition flags: --, type: FAT32, label: --, device: /dev/sda4, partition label: --, capacity: 10 GB, available: --, mount point: not mounted. Warning: misaligned by 3072 bytes; repartitioning is suggested.
    - 10 GB ext4: usage: filesystem, partition type: Linux basic data partition, partition flags: --, type: EXT4 (version 1), label: --, device: /dev/sda9, partition label: --, capacity: 10 GB, available: --, mount point: mounted at /. Warning: misaligned by 1536 bytes; repartitioning is suggested.
    - 478 GB ext4: usage: filesystem, partition type: Linux basic data partition, partition flags: --, type: EXT4, label: --, device: /dev/sda5, partition label: --, capacity: 478 GB, available: --, mount point: mounted at /home. Warning: misaligned by 512 bytes; repartitioning is suggested.
    - 2.0 GB swap: usage: swap space, partition type: Linux swap partition, partition flags: --, device: /dev/sda6, capacity: 2.0 GB. Warning: misaligned by 512 bytes; repartitioning is suggested.

    As you can see, it is not well organized, so please help me organize the partitions without uninstalling Windows 7 and, if possible, without losing the GRUB 2 setup.

    Read the article

  • Munin 2 data not showing up on graph

    - by letronje
    I have a fresh installation of Munin 2.0.1 on Ubuntu 12.04, and the first time I tried to view graphs it showed them properly (after installation, I had to follow http://munin-monitoring.org/wiki/CgiHowto2 to set it up). After that, the graphs show up, but with just one data point (a single vertical line), as if no data has been collected since the first time I tried. In Munin 1.4 there was munin-cron, which ran every 5 minutes, and I saw new data being plotted on the graph at least every 5 minutes. But if there is no cron job in v2, how does data collection work in Munin 2? Is the data collected when the graphs are requested? The file timestamps in /var/lib/munin have not changed since the first time I viewed the graphs, but I do see the munin-node process running (restarted it several times). I also see no errors in the munin-node log files or apache2 log files. Any idea what could be wrong? Screenshot: http://i.imgur.com/uzuAK.png Also, is there a way to pre-create graphs instead of generating them dynamically, on the fly?

    Read the article

  • How to remove empty revisions from existing svn dump file?

    - by Palmin
    I have an svn dump file which includes "empty" revisions (these were created by svnsync when syncing only a subdirectory of an existing repository). Since I'd like to use the svnsync'd repository as the new master (no need to sync again), I want to get rid of all the empty revisions. Unfortunately, running the dump through svndumpfilter does not seem to remove the empty revisions, probably because svndumpfilter only considers revisions that it emptied itself via the --exclude option (see also here). I was also looking into svndumptool, but it does not seem to provide this functionality. Is it possible to filter these revisions in any other way?
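    For reference, empty revisions can in principle be stripped with a short script. Below is a hypothetical Python sketch (my own, not from svndumptool) that copies an uncompressed dump and drops revision records containing no Node-path entries; it relies on svnadmin load renumbering revisions on import, and a naive line scan like this can be fooled by file contents that contain lookalike header lines, so verify the result by loading it into a scratch repository:

        # strip_empty_revs.py - hypothetical sketch: copy an uncompressed svn
        # dump, dropping revisions that touch no paths.
        import sys

        def revision_records(lines):
            """Yield the preamble, then one chunk of lines per revision."""
            chunk = []
            for line in lines:
                if line.startswith(b"Revision-number: ") and chunk:
                    yield chunk
                    chunk = []
                chunk.append(line)
            if chunk:
                yield chunk

        def main(src, dst):
            with open(src, "rb") as f:
                chunks = list(revision_records(f.readlines()))
            with open(dst, "wb") as out:
                out.writelines(chunks[0])      # preamble (format version, UUID)
                if len(chunks) > 1:
                    out.writelines(chunks[1])  # always keep revision 0
                for rev in chunks[2:]:
                    # Keep only revisions with at least one node change.
                    if any(line.startswith(b"Node-path: ") for line in rev):
                        out.writelines(rev)

        if __name__ == "__main__":
            main(sys.argv[1], sys.argv[2])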

    Read the article

  • Java default integer literal type is int

    - by Chris Okyen
    My code looks like this:

        import java.util.Scanner;

        public class StudentGrades {
            public static void main(String[] argv) {
                Scanner keyboard = new Scanner(System.in);
                byte q1 = keyboard.nextByte() * 10;
            }
        }

    It gives me the error "Type mismatch: cannot convert from int to byte." Why the heck would Java store a literal operand that is small enough to fit in a byte into an int type? Do literals get stored in variables/registers before the ALU performs arithmetic operations?

    Read the article

  • SOS, I erased a 1 TB disk by mistake

    - by gabriel
    I tried to make a flash drive bootable from the Startup Disk Creator, and I wanted to copy an ISO of Ubuntu 11.10 onto it. When I tried to erase the flash drive, I pressed erase on another removable drive (1 TB storage) by mistake. Very, very quickly, in less than one second and with no warning, everything was erased, I suppose. Is that even possible? When I tried to erase the flash drive before this, it took around a minute to erase the 8 GB. But now less than a second for 1 TB? Please help, Gabriel

    Read the article

  • The Oracle Retail Week Awards - most exciting awards yet?

    - by sarah.taylor(at)oracle.com
    Last night's annual Oracle Retail Week Awards saw the UK's top retailers come together to celebrate the very best of our industry over the last year.  The Grosvenor House Hotel on Park Lane in London was the setting for an exciting ceremony which this year marked several significant milestones in British - and global - retail.  Check out our videos about the event at our Oracle Retail YouTube channel, and see if you were snapped by our photographer on our Oracle Retail Facebook page. There were some extremely hot contests for many of this year's awards - and all very deserving winners.  The entries have demonstrated beyond doubt that retailers have striven to push their standards up yet again in all areas over the past year.  The judging panel includes some of the most prestigious names in the retail industry - to impress the panel enough to win an award is a substantial achievement.  This year the panel included the likes of Andy Clarke - Chief Executive of ASDA Group; Mark Newton Jones - CEO of Shop Direct Group; Richard Pennycook - the finance director at Morrisons; Rob Templeman - Chief Executive of Debenhams; and Stephen Sunnucks - the president of Gap Europe.  These are retail veterans  who have each helped to shape the British High Street over the last decade.  It was great to chat with many of them in the Oracle VIP area last night.  For me, last night's highlight was honouring both Sir Stuart Rose and Sir Terry Leahy for their contributions to the retail industry.  Both have set the standards in retailing over the last twenty years and taken their respective businesses from strength to strength, demonstrating that there is always a need for innovation even in larger businesses, and that a business has to adapt quickly to new technology in order to stay competitive.  Sir Terry Leahy's retirement this year marks the end of an era of global expansion for the Tesco group and a milestone in the progression of British retail.  Sir Terry has helped steer Tesco through nearly 20 years of change, with 14 years as Chief Executive.  During this time he led the drive for international expansion and an aggressive campaign to increase market share.  He has led the way for High Street retailers in adapting to the rise of internet retailing and nurtured a very successful home delivery service.  More recently he has pioneered the notion of cross-channel retailing with the introduction of Tesco apps for the iPhone and Android mobile phones allowing customers to scan barcodes of items to add to a shopping list which they can then either refer to in store or order for delivery.  John Lewis Partnership was a very deserving winner of The Oracle Retailer of the Year award for their overall dedication to excellent retailing practices.  The business was also named the American Express Marketing/Advertising Campaign of the Year award for their memorable 'Never Knowingly Undersold' advert series, which included a very successful viral video and radio campaign with Fyfe Dangerfield's cover of Billy Joel's 'She's Always a Woman' used for the adverts.  Store Design of the Year was another exciting category with Topshop taking the accolade for its flagship Oxford Street store in London, which combines boutique concession-style stalls with high fashion displays and exclusive collections from leading designers.  The store even has its own hairdressers and food hall, making it a truly all-inclusive fashion retail experience and a global landmark for any self-respecting international fashion shopper. 
Over the next few weeks we'll be exploring some of the winning entries in more detail here on the blog, so keep an eye out for some unique insights into how the winning retailers have made such remarkable achievements. 

    Read the article

  • Big label generator

    - by jamiet
    Sometimes I write blog posts mainly so that I can find stuff when I need it later. This is such a blog post. Of late I have been writing lots of deployment scripts, and I am a fan of putting big labels into deployment scripts (which, these days, reside in SSDT) so one can easily see what's going on as they execute. Here's such an example from my current project (screenshot omitted), which results in a big banner being displayed when the script is run. In case you care... PM_EDW is the name of one of our databases. I'm almost embarrassed to admit that I spent about half an hour crafting that and a few others for my current project, because a colleague has just alerted me to a website that would have done it for me, and given me lots of options for how to present it too: http://www.patorjk.com/software/taag/#p=testall&f=Banner3&t=PM__EDW Very useful indeed. Nice one! And yes, I'm sure there are a myriad of sites that do the same thing - I'm a latecomer, ok? @Jamiet
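    As a rough illustration of the idea (my own sketch, not Jamie's actual script), here is a small Python helper that emits a boxed PRINT banner you can paste into a T-SQL deployment script; the label text and width are arbitrary:

        # banner.py - hypothetical helper: emit a boxed T-SQL PRINT banner
        # for a deployment script.
        def banner(text, width=60):
            bar = "*" * width
            mid = "** " + text.center(width - 6) + " **"
            # Wrap each line in PRINT so SSDT/SQLCMD echoes it at run time,
            # doubling single quotes for T-SQL string literals.
            return "\n".join("PRINT '%s'" % l.replace("'", "''")
                             for l in (bar, mid, bar))

        if __name__ == "__main__":
            print(banner("PM_EDW deployment starting"))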

    Read the article

  • First steps into CSS - aligning data inside one DIV [on hold]

    - by Andrew
    I am trying to move away from tables and start doing CSS. Here is my HTML code that I am currently trying to place into a nice-looking container:

        <div>
            <div>
                <h2>ID: 4000 | SSN#: 4545</h2>
            </div>
            <div>
                <img src="./images/tenant/unknown.png">
            </div>
            <div>
                <h3>Names Used</h3>
                Will Smith<br>
                Bill Smmith<br>
                John Smith<br>
                Will Smith<br>
                Bill Smmith<br>
                John Smith<br>
                Will Smith<br>
                Bill Smmith<br>
                John Smith<br>
            </div>
            <div>
                <h3>Phones Used</h3>
                123456789<br>
                123456789<br>
                123456789<br>
                123456789<br>
                123456789<br>
                123456789<br>
                123456789<br>
                123456789<br>
            </div>
            <div>
                <h3>Addresses Used</h3>
                125 Main Evanston IL 60202<br>
                465 Greenwood St. Schaumburg null 60108<br>
                125 Main Evanston IL 60202<br>
                465 Greenwood St. Schaumburg null 60108<br>
                125 Main Evanston IL 60202<br>
                465 Greenwood St. Schaumburg null 60108<br>
                125 Main Evanston IL 60202<br>
                465 Greenwood St. Schaumburg null 60108<br>
                125 Main Evanston IL 60202<br>
                465 Greenwood St. Schaumburg null 60108<br>
            </div>
        </div>

    I now understand how to create classes and assign them to elements. I have no issues with colors, but I am very confused about element alignment. Could you suggest a nice way to put this together with some CSS that I can analyze and take as a starting point for learning?

    Read the article

  • New Year's Resolution: Highest Availability at the Lowest Cost

    - by margaret.hamburger(at)oracle.com
    Don't miss this Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy. Event Date: 01/11/2011, 10:00 AM Pacific Standard Time. You'll learn how Oracle's Maximum Availability Architecture and Oracle Database 11g help you:

    - Achieve the highest availability at the lowest cost
    - Protect your systems from unplanned downtime
    - Eliminate idle redundancy

    Register Now!

    Read the article

  • insert null character using tamper data

    - by Jeremy Comulu
    Using the Firefox extension Tamper Data (for modifying HTTP requests that Firefox makes), how do I insert a null character into a POST field? I can enter normal characters, but binary characters in the field are not URL-encoded and are shown as-is, so how do I enter the null character? If you know of a Firefox extension like Tamper Data with which I can do this, or a way to do it using Tamper Data itself, please post.

    Read the article

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1 TB HDD for the OS etc. and two 2 TB HDDs for data. I plan to have the 2 TB HDDs used as a JBOD btrfs system to which I can add HDDs as needed later, basically growing the filesystem online. The way I had set up the file system for testing was: while installing the OS, have only one of the HDDs connected, with btrfs on it mounted as /data; later on, add another HDD to this file system. When the second disk was added, btrfs laid the data out as RAID 0, with metadata being RAID 1. However, this presents a problem: if either disk fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; with the current RAID 0 setup, when an access request comes in, both disks spin up, whereas with a JBOD only the disk that holds the file needs to spin up. This should hopefully reduce wear and improve the life of each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question: I understand that a full drive failure in a JBOD loses the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
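    For what it's worth, mkfs.btrfs lets you choose the data and metadata profiles independently (-d for data, -m for metadata), so a single-data/raid1-metadata layout is possible. A minimal Python sketch, assuming /dev/sdb and /dev/sdc are the two 2 TB drives (an assumption; adjust before running, as this wipes the devices):

        # make_nas_fs.py - hedged sketch: create a btrfs volume with
        # unmirrored ("single", JBOD-style) data and raid1 metadata.
        import subprocess

        DEVICES = ["/dev/sdb", "/dev/sdc"]  # assumed device paths

        subprocess.run(
            ["mkfs.btrfs",
             "-d", "single",  # data profile: no striping, no mirroring
             "-m", "raid1",   # metadata profile: mirrored across devices
             *DEVICES],
            check=True,
        )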

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files, one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder, but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are: 1) Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file. 2) Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server. 3) Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question here is: what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
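    For the whitelist idea in option 2, here is a minimal Python sketch (the function name and substitution rules are my own illustration, not from the question) that normalizes a title to ASCII, keeps a small safe set of characters, and collapses everything else:

        # slug.py - hypothetical sketch of the whitelist approach.
        import re
        import unicodedata

        def title_to_filename(title, max_len=100):
            # Decompose accented characters into base letter plus combining
            # mark, then drop anything that doesn't survive ASCII encoding.
            ascii_title = (unicodedata.normalize("NFKD", title)
                           .encode("ascii", "ignore")
                           .decode("ascii"))
            # Whitelist letters, digits, dot and hyphen; runs of anything
            # else collapse to a single "-".
            slug = re.sub(r"[^A-Za-z0-9.-]+", "-", ascii_title).strip("-")
            return slug[:max_len] or "untitled"

        print(title_to_filename("Beyoncé's 10 Best Songs / Ranked!"))
        # prints: Beyonce-s-10-Best-Songs-Ranked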

    Read the article

  • Python script won't write data when ran from cron

    - by Ruud
    When I run a Python script in a terminal it runs as expected: it downloads a file and saves it in the desired spot.

        sudo python script.py

    I've added the Python script to the root crontab, and it runs as it is supposed to, except it does not write the file.

        $ sudo crontab -l
        * * * * * python /home/test/script.py >> /var/log/test.log 2>&1

    Below is a simplified script that still has the problem:

        #!/usr/bin/python
        import logging  # used by the except branch in wget()

        scheduleUrl = 'http://test.com/schedule.xml'
        schedule = '/var/test/schedule.xml'

        # Download url and save as filename
        def wget(url, filename):
            import urllib2
            try:
                response = urllib2.urlopen(url)
            except Exception:
                import traceback
                logging.exception('generic exception: ' + traceback.format_exc())
            else:
                print('writing:' + filename + ';')
                output = open(filename, 'wb')
                output.write(response.read())
                output.close()

        # Download the schedule
        wget(scheduleUrl, schedule)

    I do get the message "writing:name of file;" inside the log to which the cron entry outputs, but the actual file is nowhere to be found... The directory /var/test is chmodded to 777, and whatever user I use, I am allowed to add and change files as I please.
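    One way to narrow this down (my suggestion, not from the question) is to log the runtime context from inside the script when cron runs it, since cron jobs get a minimal environment that can differ from an interactive shell; a small sketch:

        # cron_debug.py - hedged diagnostic sketch: append the runtime context
        # to a log so a cron run can be compared against an interactive run.
        import os

        with open('/var/log/test-env.log', 'a') as log:
            log.write('cwd=%s uid=%s PATH=%s HOME=%s\n' % (
                os.getcwd(),
                os.getuid(),
                os.environ.get('PATH'),
                os.environ.get('HOME'),
            ))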

    Read the article
