Search Results

Search found 24117 results on 965 pages for 're write'.

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile. { command1 && command2 && command3 ; } > logfile.log 2>&1 Here is what I want to do with the output of these commands: STDERR and STDOUT for all commands go to a logfile, in case I need it later; I usually won't look in here unless there are problems. Print STDERR to the screen (or optionally, pipe it to /bin/mail), so that any error stands out and doesn't get ignored. It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this: { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected] The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors separately if I do 2> error.log. Here are a couple of juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag. { ./configure && make --keep-going && make install ; } > build.log 2>&1 Or, here's a simple (and perhaps sloppy) build and deploy script, which will keep going in the event of an error. { ./configure && make --keep-going && make install && rsync -av /foo devhost:/foo ; } > build-and-deploy.log 2>&1 I think what I want involves some sort of Bash I/O redirection, but I can't figure it out.
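
    A minimal sketch of one common approach, assuming bash (process substitution is not plain POSIX sh): STDOUT goes only to the log, while STDERR is duplicated to the same log and to the screen via tee. The exit status of the brace group is preserved, so the || mailx error handling still works.

      #!/bin/bash
      # Sketch only: the log file name and command list are taken from the question.
      LOG=build.log
      { ./configure && make --keep-going && make install ; } \
          >> "$LOG" \
          2> >(tee -a "$LOG" >&2) \
          || mailx -s "There was an error" [email protected]
      # Note: tee runs asynchronously, so stderr lines may interleave slightly
      # out of order with stdout lines inside the log.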

    Read the article

  • How can I disable write protection in my USB flash drive?

    - by 97847658
    My USB flash drive is currently unusable because it somehow (quite suddenly!) became write protected. I have googled around and tried many solutions to this problem, but none of them have worked so far. Here are some of the solutions I've tried: The drive has no tangible switch or button. Formatting the drive won't work, even from the command line, even with "low-level formatting", because the drive is (after all) write protected. Changing certain registry keys to 0 doesn't seem to work. Repair_Neo2.9.exe says "USB Flash Disk not found!" One factor that may make it more difficult to find a solution: I have no idea what the make or model is, because I received the USB flash drive from my university as a gift. So if anyone knows how to find the make and model, that alone might be helpful. Any ideas? Thanks.
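
    On the make/model side question, a small sketch assuming the stick still enumerates on the USB bus: the USB vendor and product IDs usually identify the controller even on unbranded sticks.

      # On Linux: vendor:product ID and device name of every attached USB device
      lsusb
      # On Windows (cmd.exe): model and Plug-and-Play device ID of attached disks
      wmic diskdrive get Model,PNPDeviceID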

    Read the article

  • How do I remove a USB drive's write protection?

    - by nate
    I have a SanDisk Cruzer Blade USB stick that suddenly seems to be write protected. I tried running DiskPart, but after I enter the command "attributes disk clear readonly" it displays this: Microsoft DiskPart version 5.1.3565 ADD - Add a mirror to a simple volume. ACTIVE - Marks the current basic partition as an active boot partition. ASSIGN - Assign a drive letter or mount point to the selected volume. BREAK - Break a mirror set. CLEAN - Clear the configuration information, or all information, off the disk. CONVERT - Converts between different disk formats. CREATE - Create a volume or partition. DELETE - Delete an object. DETAIL - Provide details about an object. EXIT - Exit DiskPart EXTEND - Extend a volume. HELP - Prints a list of commands. IMPORT - Imports a disk group. LIST - Prints out a list of objects. INACTIVE - Marks the current basic partition as an inactive partition. ONLINE - Online a disk that is currently marked as offline. REM - Does nothing. Used to comment scripts. REMOVE - Remove a drive letter or mount point assignment. REPAIR - Repair a RAID-5 volume. RESCAN - Rescan the computer looking for disks and volumes. RETAIN - Place a retainer partition under a simple volume. SELECT - Move the focus to an object. It's like when you type help at the DiskPart prompt, so how do I get past this? This problem started when I plugged the stick into a laptop which had viruses, if that's any help.
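
    Worth noting: the help text echoed back has no ATTRIBUTES entry at all, and version 5.1.3565 is the Windows XP build of DiskPart, which suggests it simply does not know that command (unrecognized input makes it print its command list). A sketch of the same session on a Vista-or-later machine, where ATTRIBUTES exists; the disk number is hypothetical, so confirm it against the size shown by list disk before clearing anything:

      DISKPART> list disk
      DISKPART> select disk 1
      DISKPART> attributes disk clear readonly
      DISKPART> exit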

    Read the article

  • Samba smb.conf read only and read/write accounts

    - by Pieter
    Below you can see my smb.conf. pieter is my admin user; read/write on the shares works well with that account. Then I have a leecher account that has been added to the smb users with smbpasswd -a leecher; it is set up so this user only has read access to the shares. This works on MegaSam and on Thumbnails but not on my other drives; leecher does not get any access on the other shares. [global] security = user [MegaSam] comment = MegaSam path = /media/MegaSam browsable = yes guest ok = no read list = leecher write list = pieter create mask = 0755 [SilentBob] comment = SilentBob path = /media/SilentBob browsable = yes guest ok = no read list = leecher write list = pieter create mask = 0755 [Thumbnails] comment = Thumbnails path = /media/Thumbnails browsable = yes guest ok = no read list = leecher write list = pieter create mask = 0755 [Downloads] comment = Downloads path = /media/Downloads browsable = yes guest ok = no read list = leecher write list = pieter create mask = 0755
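
    One thing worth checking, sketched below with the paths from the share definitions: with security = user and four identical share sections, the usual culprit is the Unix permissions on the mount points themselves, since Samba can never grant more than the filesystem allows and leecher also needs read and traverse rights at the filesystem level.

      # Compare the permissions of the shares that work with the ones that don't
      ls -ld /media/MegaSam /media/Thumbnails /media/SilentBob /media/Downloads
      # If the failing ones are not world-readable, grant read + directory traverse
      sudo chmod -R o+rX /media/SilentBob /media/Downloads
      # Re-test as the restricted user
      smbclient //localhost/SilentBob -U leecher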

    Read the article

  • How to mount an HFS partition in Ubuntu as Read/Write?

    - by GiH
    I plugged my external hard drive (which was formatted on my Mac as HFS+ journaled) into my Ubuntu desktop 9.04 64-bit machine. I am not able to get the drive to mount with write capability; how do I do that? Right now all I'm getting is read access. I tried sudo mount -t hfsplus /dev/sdf2 /media/"Portable HD" but that still gave me only read access... Ideas?
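
    A sketch of the usual workaround: the Linux hfsplus driver mounts journaled HFS+ volumes read-only by default. The safe route is to turn journaling off for that volume in Disk Utility on the Mac; the quicker (and riskier, since the journal is ignored) route is to force a read/write mount:

      sudo umount /dev/sdf2
      sudo mount -t hfsplus -o force,rw /dev/sdf2 "/media/Portable HD"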

    Read the article

  • Can't write to NTFS formatted drives

    - by mloman
    I'm not sure what has happened, but I've all of a sudden lost write access to any of my NTFS external drives. I installed a few games and apps from the software center, and now I can't make new folders or copy and paste files to anything that is NTFS. Everything is now read only, and I've tried so many things to fix it, but it seems hopeless. Just to check that it wasn't the drives themselves, I made a little NTFS-formatted TrueCrypt volume and a FAT-formatted volume. And yes, it seems that Ubuntu is blocking me from writing anything to NTFS. What happened here? What's a way I can simply get write access to my NTFS drives, so I can just back up all my stuff? I'll probably reinstall Ubuntu. Please help. UPDATE (and thanks everyone for their quick replies): The problem has been solved. Prior to noticing that I had lost NTFS write permission, I had installed GParted from the software center, and there was an extension called ntfsprogs that came with it. During my search for a solution to the problem, I uninstalled GParted (as that was one of the apps I installed just before the problem), but that did not solve the problem. I came across an app called 'NTFS Configuration Tool'. When I installed this, it said that the ntfsprogs extension needed to be removed (so I guess uninstalling GParted didn't remove ntfsprogs). I launched the NTFS Configuration Tool and now I have write access to NTFS drives. Unfortunately, I didn't check whether I had write permission prior to launching the NTFS Configuration Tool, so I'm not sure whether it was the NTFS Configuration Tool or the removal of ntfsprogs that gave me back NTFS write permission. Hopefully if another newbie encounters this problem, they'll come across this page and know what to do.
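
    For anyone who lands here with the more common cause of NTFS suddenly going read-only (a dirty volume flag or Windows hibernation rather than a package conflict), a quick sketch with a hypothetical device name:

      lsblk                           # find the right device first
      dmesg | tail                    # look for "volume is dirty" or hibernation warnings
      sudo ntfsfix /dev/sdb1          # clears the dirty flag (ships with ntfs-3g/ntfsprogs)
      sudo mount -t ntfs-3g -o rw /dev/sdb1 /media/backup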

    Read the article

  • Very slow write access to SSD disks on some Asus P8Z77 motherboards

    - by lenik
    I have an Asus P8Z77-V LK motherboard that ran Mint 13 (based on Ubuntu 12.04) just perfectly, but recently I tried to install Mint 17 and noticed abysmal write performance. Write speed on the SSD was about 1.5MB/sec, when it's supposed to be in the 150-250MB/sec range. For write testing I used dd if=/dev/zero of=/dev/sda bs=10M count=10 while booted from a LiveCD. I also tested the read speed with hdparm -tT /dev/sda and got about 440MB/sec -- that's normal. I can tell the read performance has not degraded at all and is not an issue here. Since I had a few different SSD disks and a few motherboards, I tested and tested, and here are the results: Asus P8H77 works fine with Mint 13, has very slow write speed starting from Mint 14. Asus P8Z77-V LK works with Mint 13, has very slow write speed starting from Mint 14. Asus P8Z77-V PRO works with Mint 13, and works just fine with Mint 14, 15, 16 and 17. The only difference between the "PRO" version and the others is that it has an extra SATA controller onboard (in addition to the Z77 chipset SATA controller) providing two extra SATA ports. SSD disks work fine with the "PRO" version when connected to the native SATA ports as well as to the ports provided by the extra SATA controller, so this does not look like a hardware issue. As far as I can tell, something changed in the kernel while going from 3.2 to 3.5 that affects the detection of the onboard SATA controller for Asus P8*77 motherboards and screws up the write speed for SSD drives. Could anyone shed some light on how to fix this issue or, possibly, give a pointer to a more suitable place to ask this question?
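
    A diagnostic sketch that narrows down one plausible explanation for ~1.5MB/sec raw writes -- the drive's own volatile write cache being left disabled by the newer kernel -- run from the same LiveCD:

      sudo hdparm -W /dev/sda                          # report whether the drive's write cache is enabled
      sudo hdparm -I /dev/sda | grep -i 'write cache'
      sudo hdparm -W1 /dev/sda                         # turn it on if it reports 0, then re-run the test
      dd if=/dev/zero of=/dev/sda bs=10M count=10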

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions; dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links to the original. However this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks) but this is not what I want.
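
    A sketch of what several filesystems now offer directly: reflink copies give exactly this behaviour at file granularity (shared blocks, copied only on modification), so a whole tree can be cloned cheaply. This assumes a filesystem with reflink support such as Btrfs, XFS or OCFS2; elsewhere, cp either falls back to a normal copy (--reflink=auto) or fails (--reflink=always).

      # Clone a tree with shared, copy-on-write data blocks
      cp -a --reflink=always source-tree/ patched-tree/
      # On Btrfs, snapshotting a subvolume achieves the same for a whole directory tree
      btrfs subvolume snapshot source-tree patched-tree-snap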

    Read the article

  • PHP efficiency question: database call vs. file write vs. calling a C++ executable

    - by JP19
    Hi, what I wish to achieve is to log all information about each and every visit to every page of my website (like IP address, browser, referring page, etc.). Now this is easy to do. What I am interested in is doing this in a way that causes minimum overhead (runtime) in the PHP scripts. What is the best approach for this, efficiency-wise: 1) Log all information to a database table 2) Write to a file (from PHP directly) 3) Call a C++ executable that will write this info to a file in parallel [so the script can continue execution without waiting for the file write to occur... is this even possible?] I may be trying to optimize unnecessarily/prematurely, but still - any thoughts / ideas on this would be appreciated. (I think the efficiency of file writing/logging can really be a concern if I have, say, 100 visits per minute...) Thanks & Regards, JP
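
    For option 2, a minimal sketch (the log path is a placeholder, and the ?? operator assumes PHP 7 or later): a single appending write per request is buffered by the operating system and is normally far cheaper than a database round trip, and LOCK_EX keeps concurrent requests from interleaving their lines.

      <?php
      // Append one tab-separated line per visit; point the path at a writable location.
      $line = sprintf("%s\t%s\t%s\t%s\n",
          date('c'),
          $_SERVER['REMOTE_ADDR'] ?? '-',
          $_SERVER['HTTP_USER_AGENT'] ?? '-',
          $_SERVER['HTTP_REFERER'] ?? '-');
      file_put_contents('/var/log/myapp/visits.log', $line, FILE_APPEND | LOCK_EX);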

    Read the article

  • How to write a Bash script to open two different terminals

    - by Ahmed Zain El Dein
    How do I write a Bash script that opens two separate terminals (or terminal tabs) and writes commands into each of them separately, to be executed independently of one another? For instance: terminal number one opens Skype, terminal number two opens something else. One more thing: can I put my Skype username and password in the Bash script so that when Skype opens in terminal one it logs in automatically? Thanks
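
    A sketch of the two-terminal part, using xterm because its -e flag behaves the same across distributions (gnome-terminal and konsole have their own equivalents); the second command is just a placeholder. On the login part: the Skype desktop client does not reliably accept credentials on the command line, and storing a plain-text password inside a script is worth avoiding in any case.

      #!/bin/bash
      # First terminal: start skype
      xterm -e skype &
      # Second terminal: run some other command, then keep the shell open
      xterm -e bash -c 'top; exec bash' &
      wait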

    Read the article

  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once the write() completes, the data is in the file, whereas for fprintf() it may take a while for the file to get updated to reflect the output. This results in a significant performance difference - the unbuffered write() runs at disk speed. The following is a program to test this: #include <fcntl.h> #include <unistd.h> #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <stdio.h> #include <sys/time.h> #include <sys/types.h> #include <sys/stat.h> static double s_time; void starttime() { s_time=1.0*gethrtime(); } void endtime(long its) { double e_time=1.0*gethrtime(); printf("Time per iteration %5.2f MB/s\n", (1.0*its)/(e_time-s_time*1.0)*1000); s_time=1.0*gethrtime(); } #define SIZE 10*1024*1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT,S_IWGRP|S_IWOTH|S_IWUSR); for (int i=0; i<SIZE; i++) { write(file,"a",1); } close(file); endtime(SIZE); } void test_fprintf() { starttime(); FILE* file = fopen("./test.dat","w"); for (int i=0; i<SIZE; i++) { fprintf(file,"a"); } fclose(file); endtime(SIZE); } void test_flush() { starttime(); FILE* file = fopen("./test.dat","w"); for (int i=0; i<SIZE; i++) { fprintf(file,"a"); fflush(file); } fclose(file); endtime(SIZE); } int main() { test_write(); test_fprintf(); test_flush(); } Compiling and running, I get 0.2MB/s for write() and 6MB/s for fprintf(). A large difference. There are three tests in this example; the third test uses fprintf() and fflush(), and is equivalent to write() both in performance and in functionality. Which leads to the suggestion that fprintf() (and other buffered I/O functions) are the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents.
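
    A sketch of a fourth test that could be added to the program above (it also needs #include <string.h> for memset): buffering in user space and handing write() 64 KB per call removes the per-byte system-call overhead that dominates the 0.2MB/s result, while keeping the property that the data has been handed to the file as soon as each call returns.

      void test_write_chunked() {
        starttime();
        int file = open("./test.dat", O_WRONLY|O_CREAT, S_IWGRP|S_IWOTH|S_IWUSR);
        char buf[65536];
        memset(buf, 'a', sizeof(buf));                 /* same payload, bigger chunks */
        for (int i = 0; i < SIZE; i += sizeof(buf)) {
          write(file, buf, sizeof(buf));
        }
        close(file);
        endtime(SIZE);
      }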

    Read the article

  • What is the best way to lazy load doubleclick ads that use document.write?

    - by user560585
    Ads requested via DoubleClick often get served from an ad provider network that returns JavaScript, which in turn performs document.write to place ads in the page. The use of document.write requires that the document be open, implying that the page hasn't reached document.complete. This gets in the way of deferring or lazy-loading ad content. Putting such code at the bottom of the page is helpful but doesn't do enough to lower the all-important "page loaded" time. Are "friendly iframes" the best we have? Is there any other alternative, such as a clever way to override document.write?

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    Over the last week, after doing more research on the subject, I have been wondering about something I neglected for years: understanding write-caching policy, which I always left at its default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I erred somewhere: Write-through caching is not part of the write-caching policy per se; it is when data is written to both the cache and the storage device, so that if Windows needs that data again later, it is retrieved from the cache rather than the storage device. This improves only read performance, since there is no need to wait for the storage device to read the data again. Because data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or system crash; only the data in the cache is lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to use "Safely Remove Hardware". Write-back caching is similar, but data is not written to the storage device immediately; it is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it carries risk if a power failure or system crash occurs: not only is data that was yet to be written to the storage device lost, but file inconsistencies or a corrupted file system can result. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available. Write-cache buffer flushing, I reckon, is similar to write-back caching but enables the immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know whether it also applies to an occasional system crash. This option seems to be complementary to write-back caching, reducing or potentially eliminating the risk of data loss or file system corruption. I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with less wear: I know that traditional hard drives come with an onboard cache (I wonder what type of cache that is), but do SSDs also come with a cache? Assuming they do, is this cache faster than their NAND flash and than system RAM, and is it worth the risk of utilizing it by enabling write-back caching? I read somewhere that a storage device's cache is generally faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash is kept in the cache for a while, and if that data gets modified a lot before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD, thereby reducing its wear. Now, regarding write-cache buffer flushing, I have heard that SSD controllers are fast enough by themselves that enabling this option is not required, because they manage flushing. But once again, I don't know whether SSDs have their own onboard cache and whether it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research; I suspect the write-caching policy is the culprit behind the SSD's freezing issue, assuming the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's own management of cache flushing and Windows write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switch to AHCI later on, and finally disable DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.

    Read the article

  • Write, Read and Update Oracle CLOBs with PL/SQL

    - by robertphyatt
    Fun with CLOBS! If you are using Oracle, if you have to deal with text that is over 4000 bytes, you will probably find yourself dealing with CLOBs, which can go up to 4GB. They are pretty tricky, and it took me a long time to figure out these lessons learned. I hope they will help some down-trodden developer out there somehow. Here is my original code, which worked great on my Oracle Express Edition: (for all examples, the first one writes a new CLOB, the next one Updates an existing CLOB and the final one reads a CLOB back) CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (        p_document      IN VARCHAR2,        p_id            OUT NUMBER) IS      lob_loc CLOB; BEGIN    INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)        VALUES (empty_CLOB())        RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;    DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1, UTL_RAW.CAST_TO_RAW(p_document)); END; / CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (        p_document      IN VARCHAR2,        p_id            IN NUMBER) IS      lob_loc CLOB; BEGIN        SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC        WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;    DBMS_LOB.WRITE(lob_loc, LENGTH(UTL_RAW.CAST_TO_RAW(p_document)), 1, UTL_RAW.CAST_TO_RAW(p_document)); END; / CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (    p_id IN NUMBER,    p_clob OUT VARCHAR2) IS    lob_loc  CLOB; BEGIN    SELECT CLOBHOLDERDDOC INTO lob_loc    FROM   TBL_CLOBHOLDERDDOC    WHERE  CLOBHOLDERDDOCID = p_id;    p_clob := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1)); END; / As you can see, I had originally been casting everything back and forth between RAW formats using the UTL_RAW.CAST_TO_VARCHAR2() and UTL_RAW.CAST_TO_RAW() functions all over the place, but it had the nasty side effect of working great on my Oracle express edition on my developer box, but having all the CLOBs above a certain size display garbage when read back on the Oracle test database server . So...I kept working at it and came up with the following, which ALSO worked on my Oracle Express Edition on my developer box:   CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (     p_document      IN VARCHAR2,     p_id        OUT NUMBER) IS       lob_loc CLOB; BEGIN     INSERT INTO TBL_CLOBHOLDERDOC (CLOBHOLDERDOC)         VALUES (empty_CLOB())         RETURNING CLOBHOLDERDOC, CLOBHOLDERDOCID INTO lob_loc, p_id;     DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document);   END; / CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (     p_document      IN VARCHAR2,     p_id        IN NUMBER) IS       lob_loc CLOB; BEGIN     SELECT CLOBHOLDERDOC INTO lob_loc FROM TBL_CLOBHOLDERDOC     WHERE CLOBHOLDERDOCID = p_id FOR UPDATE;     DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document); END; / CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (     p_id IN NUMBER,     p_clob OUT VARCHAR2) IS     lob_loc  CLOB; BEGIN     SELECT CLOBHOLDERDOC INTO lob_loc     FROM   TBL_CLOBHOLDERDOC     WHERE  CLOBHOLDERDOCID = p_id;     p_clob := DBMS_LOB.SUBSTR(lob_loc, DBMS_LOB.GETLENGTH(lob_loc), 1); END; / Unfortunately, by changing my code to what you see above, even though it kept working on my Oracle express edition, everything over a certain size just started truncating after about 7950 characters on the test server! 
Here is what I came up with in the end, which is actually the simplest solution and this time worked on both my express edition and on the database server (note that only the read function was changed to fix the truncation issue, and that I had Oracle worry about converting the CLOB into a VARCHAR2 internally): CREATE OR REPLACE PROCEDURE PRC_WR_CLOB (        p_document      IN VARCHAR2,        p_id            OUT NUMBER) IS      lob_loc CLOB; BEGIN    INSERT INTO TBL_CLOBHOLDERDDOC (CLOBHOLDERDDOC)        VALUES (empty_CLOB())        RETURNING CLOBHOLDERDDOC, CLOBHOLDERDDOCID INTO lob_loc, p_id;    DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document); END; / CREATE OR REPLACE PROCEDURE PRC_UD_CLOB (        p_document      IN VARCHAR2,        p_id            IN NUMBER) IS      lob_loc CLOB; BEGIN        SELECT CLOBHOLDERDDOC INTO lob_loc FROM TBL_CLOBHOLDERDDOC        WHERE CLOBHOLDERDDOCID = p_id FOR UPDATE;    DBMS_LOB.WRITE(lob_loc, LENGTH(p_document), 1, p_document); END; / CREATE OR REPLACE PROCEDURE PRC_RD_CLOB (    p_id IN NUMBER,    p_clob OUT VARCHAR2) IS BEGIN    SELECT CLOBHOLDERDDOC INTO p_clob    FROM   TBL_CLOBHOLDERDDOC    WHERE  CLOBHOLDERDDOCID = p_id; END; /   I hope that is useful to someone!

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code: Entity { id; map<id_type, Attribute> attributes; } System { update(); vector<Entity> entities; } A system that just moves along all entities at a constant rate might be MovementSystem extends System { update() { for each entity in entities position = entity.attributes["position"]; position += vec3(1,1,1); } } Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system. Problem In reality, these systems sometimes require that entities interact (read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from Entities are read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity, and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work more of the time instead of waiting). A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages: practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data. Finding and assembling other systems to run in parallel can be extremely time-consuming to design well enough to optimize performance. Sometimes, it might even not be possible at all because a system just depends on data that is touched by all other systems. Solution?
    In my thinking, a possible solution would be a system where reading/updating and writing of data is separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase, attributes of entities that needed to be modified are finally written back to the entities. The Question: How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC-architecture to accommodate this solution?
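
    A minimal C++11 sketch of the deferred-write idea (Position, Entity and WriteQueue are made-up names for illustration): during the parallel read/compute phase, systems read entity data freely and only record the writes they want to make; a cheap, single-threaded commit applies them after all update() calls have joined, so nothing mutates entity data while another thread might be reading it.

      #include <cstdio>
      #include <functional>
      #include <mutex>
      #include <vector>

      struct Position { float x, y, z; };               // stand-in for an attribute
      struct Entity   { Position position; };           // stand-in for an entity

      class WriteQueue {
      public:
          void queue(std::function<void()> writeBack) { // called from any worker thread
              std::lock_guard<std::mutex> lock(mutex_);
              pending_.push_back(std::move(writeBack));
          }
          void commit() {                               // one thread, once per frame
              for (auto& write : pending_) write();
              pending_.clear();
          }
      private:
          std::mutex mutex_;
          std::vector<std::function<void()>> pending_;
      };

      int main() {
          Entity e{{0, 0, 0}};
          WriteQueue writes;
          // Read/compute phase (would run in parallel across entities and systems):
          Position p = e.position;                      // read a copy, no lock on entity data
          p.x += 1; p.y += 1; p.z += 1;                 // compute
          writes.queue([&e, p] { e.position = p; });    // record the write, don't apply yet
          // Write phase, after all updates have joined:
          writes.commit();
          std::printf("%g %g %g\n", e.position.x, e.position.y, e.position.z);
      }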

    Read the article

  • How to write PowerShell code part 4 (using loops)

    - by ybbest
    In this post, I'd like to show you how to loop through XML elements. I will use the list data deletion script as an example. You can download the script here. 1. To perform the loop, I use foreach in PowerShell. Here is what my XML looks like: <?xml version="1.0" encoding="utf-8"?> <Site Url="http://workflowuat/npdmoc"> <Lists> <List Name="YBBEST Collaboration Areas" Type="Document Library"/> <List Name="YBBEST Project" /> <List Name="YBBEST Document"/> </Lists> </Site> 2. Here is the PowerShell to manipulate the XML. Note that you need to loop over the $configurationXml.Site.Lists.List variable rather than $configurationXml.Site.Lists: foreach ($list in $configurationXml.Site.Lists.List){ AppendLog "Clearing data for $($list.Name) at site $weburl" Yellow if($list.Type -eq "Document Library"){ deleteItemsFromDocumentLibrary -Url $weburl -ListName $list.Name }else{ deleteItemsFromList -Url $weburl -ListName $list.Name } AppendLog "Data in $($list.Name) at $weburl is cleared" Green } How to write PowerShell code part 1 How to write PowerShell code part 2 How to write PowerShell code part 3 How to write PowerShell code part 4
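
    For completeness, a small sketch of how $configurationXml and $weburl might be set up before the loop runs; the file name here is just an example.

      [xml]$configurationXml = Get-Content -Path .\ListData.xml
      $weburl = $configurationXml.Site.Url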

    Read the article

  • Seeking a C/C++ OBJ geometry reader/writer that does not modify the representation

    - by Blake Senftner
    I am seeking a means to read and write OBJ geometry files with logic that does not modify the geometry representation, i.e. read the geometry, immediately write it, and a diff of the source OBJ and the one just written will be identical. Every OBJ writing utility I've been able to find online fails this test. I am writing small command line tools to modify my OBJ geometries, and I need to write my results, not just read the geometry for rendering purposes. Simply needing to write the geometry knocks out 95% of the OBJ libraries on the web. Also, many of the popular libraries modify the geometry representation. For example, Nate Robins' GLUT library includes the GLM library, which both converts quads to triangles and reverses the topology (face ordering) of the geometry. It's still the same geometry, but if your tool chain expects a given topology, such as for rigging or morph targets, then GLM is useless. I'm not rendering in these tools, so dependencies like OpenGL or GLUT make no sense. And god forbid, do not "optimize" the geometry! The redundant vertices are there on purpose, to stay in cache on our weird little low-memory mobile devices.
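
    Given how little of the format most tools need to touch, a hedged sketch of a round-trip-safe approach (assumes C++11): keep every line of the OBJ verbatim, parse only the lines the tool actually cares about, and write everything else back untouched, so a diff of input and output shows only the deliberate edits. Reading and writing in binary mode lets CRLF line endings survive; the only artefact is a trailing newline if the source file lacked one.

      #include <fstream>
      #include <string>
      #include <vector>

      int main(int argc, char** argv) {
          if (argc < 3) return 1;                      // usage: tool in.obj out.obj
          std::ifstream in(argv[1], std::ios::binary);
          std::vector<std::string> lines;
          std::string line;
          while (std::getline(in, line))
              lines.push_back(line);                   // keep each source line verbatim

          // ... inspect or edit only the specific "v"/"vt"/"f" lines this tool must change ...

          std::ofstream out(argv[2], std::ios::binary);
          for (const std::string& l : lines)
              out << l << '\n';                        // everything else round-trips untouched
          return 0;
      }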

    Read the article

  • How to parse a text file and write the result to an xls file?

    - by Bk
    Hi all, I am a junior-level SQL developer. I have a situation where I have a text file with 1100 lines of a search result, with each line containing a file path and a stored procedure associated with that file. Each line has a structure like the one below: abc\def\ghi\***.cs(40): jkl=******.*****.******, "proc_pqrst", parms); where abc\def\ghi\***.cs is the path of the file ***.cs. The stored procedure names begin with proc_. I have to extract the ***.cs and the corresponding stored procedure name beginning with proc_ and write them to a .xls file. Can somebody help me write the parsing program to do this? Also, can I get a detailed outline of where I should write the C# code and how to compile it? That would be a great help as I don't have any knowledge of C#. Thank you, BK.
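
    Since the post also asks for an outline, a hedged C# sketch (the file names are placeholders): a small console program that pulls the .cs file name and the proc_ name out of each line with a regular expression and writes a CSV, which Excel opens directly; producing a genuine .xls would need a library such as EPPlus or Excel interop. To build it, create a Console Application project in Visual Studio (or compile the single file with csc.exe) and run the resulting .exe next to the input file.

      using System.IO;
      using System.Text.RegularExpressions;

      class ExtractProcs
      {
          static void Main()
          {
              // Matches lines like: abc\def\ghi\SomeFile.cs(40): ... "proc_pqrst", parms);
              Regex pattern = new Regex(@"([^\\(]+\.cs)\(\d+\).*?""(proc_\w+)""");
              using (StreamWriter writer = new StreamWriter("result.csv"))
              {
                  writer.WriteLine("File,StoredProcedure");
                  foreach (string line in File.ReadAllLines("searchresult.txt"))
                  {
                      Match m = pattern.Match(line);
                      if (m.Success)
                          writer.WriteLine(m.Groups[1].Value + "," + m.Groups[2].Value);
                  }
              }
          }
      }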

    Read the article

  • How do I give PHP write access to a directory?

    - by SGWebsNow
    I'm trying to use PHP to create a file, but it isn't working. I am assuming this is because it doesn't have write access (that's always been the problem before). I tried to test whether this was the problem by making the folder chmod 0777, but that just ended up making every script in that directory return a 500 error message until I changed it back. How do I give PHP write access to my file system so it can create a file? Edit: It is hosted on HostGator shared hosting using Apache.
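
    A quick diagnostic sketch (it assumes the posix extension is available, and the 'uploads' directory name is just an example): on suPHP/suEXEC shared hosts such as HostGator, PHP usually runs as the account owner, so a 755 directory owned by that account is already writable. That is also consistent with the 500 errors above, since suPHP refuses to run scripts from world-writable directories.

      <?php
      // Print the user PHP runs as and whether the target directory is writable for it.
      $user = posix_getpwuid(posix_geteuid());
      echo 'PHP runs as: ' . $user['name'] . "\n";
      var_dump(is_writable(__DIR__ . '/uploads'));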

    Read the article

  • Best directory to store application data with read/write rights for all users?

    - by Wodzu
    Hi guys. Until Windows Vista I was saving my application data in the directory where the program was located. The most common place was "C:\Program Files\MyApplication". As we know, under Vista and later a standard user doesn't have rights to write under the "Program Files" folder. So my first idea was to save the application data under the "All Users\Application Data" folder. But it seems that this folder has write restrictions too! So to sum up, my requirements are: The folder should exist on Windows XP and later versions of Windows. All users of the system should have read/write/create rights to this folder, its subfolders and files. I want to have only one copy of the file(s) for all users. Thanks for your time.
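
    A C# sketch of the usual answer, assuming .NET is an option (the folder and file names are placeholders): CommonApplicationData maps to "All Users\Application Data" on XP and to "C:\ProgramData" on Vista and later. By default a file created there by one user is not writable by other users, so an installer typically creates the application's subfolder once and grants the Users group Modify rights on it.

      using System;
      using System.IO;

      class CommonDataDir
      {
          static void Main()
          {
              string dir = Path.Combine(
                  Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                  "MyApplication");                              // placeholder application name
              Directory.CreateDirectory(dir);                    // no-op if it already exists
              File.WriteAllText(Path.Combine(dir, "settings.ini"), "key=value");
              Console.WriteLine("Shared data folder: " + dir);
          }
      }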

    Read the article

  • Read from one large file and write to many (tens, hundreds, or thousands) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
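
    A sketch of the hashmap-of-streams idea the question already suggests. The one-byte length prefix is an assumption, since the post doesn't say how message boundaries are found; adjust the framing to the real format. Roughly 6,000 open buffered streams at 64 KB each is about 384 MB of buffer space plus one file descriptor per type, so check the OS descriptor limit (ulimit -n) and the JVM heap before running it at full scale.

      import java.io.*;
      import java.util.HashMap;
      import java.util.Map;

      public class MessageSplitter {
          public static void main(String[] args) throws IOException {
              Map<String, OutputStream> outputs = new HashMap<String, OutputStream>();
              DataInputStream in = new DataInputStream(
                      new BufferedInputStream(new FileInputStream(args[0]), 1 << 20));
              try {
                  byte[] key = new byte[6];
                  while (true) {
                      try {
                          in.readFully(key);                        // 6-byte type field
                      } catch (EOFException eof) {
                          break;                                    // clean end of input
                      }
                      int length = in.readUnsignedByte();           // assumed length prefix
                      byte[] payload = new byte[length];
                      in.readFully(payload);
                      String type = new String(key, "US-ASCII");
                      OutputStream out = outputs.get(type);
                      if (out == null) {                            // open each type's file once
                          out = new BufferedOutputStream(
                                  new FileOutputStream(type + ".dat", true), 1 << 16);
                          outputs.put(type, out);
                      }
                      out.write(payload);                           // append to 000001.dat etc.
                  }
              } finally {
                  in.close();
                  for (OutputStream out : outputs.values()) out.close();
              }
          }
      }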

    Read the article
