Search Results

Search found 39577 results on 1584 pages for 'temp files'.

Page 583/1584

  • shell scripting: search/replace & check file exist

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]";   # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN -o`;
        isdone=`ls *.txt | grep $PATTERN -o`;
        whatsleft=todo - isdone;       # what's the unix magic?
        # tack on the .xml suffix with sed or something
        # and then call the job server
        jobserve E "$whatsleft";

    and I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep? To clarify, the simplest way to do what I'm asking is (in pseudocode):

        for i in `/bin/ls *.xml`
        do
            replace xml suffix with txt
            if [that file exists]
                add to whatsleft list
            end
        done
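    One way to compute that set difference without temporary files is comm(1) over two process substitutions; a sketch, assuming bash and treating jobserve/E as the placeholders from the question:

        #!/bin/bash
        PATTERN="[A-Z]*0[1-2][a-j]"

        # comm -23 prints lines unique to the first (sorted) input:
        # stems that have an .xml but no matching .txt yet
        whatsleft=$(comm -23 \
            <(ls *.xml | grep -o "$PATTERN" | sort) \
            <(ls *.txt | grep -o "$PATTERN" | sort))

        # tack the .xml suffix back on before handing off to the job server
        for stem in $whatsleft; do
            jobserve E "$stem.xml"
        done

    (For the bonus question: grep's basic and extended syntaxes have no lookahead, but GNU grep -P switches to Perl-compatible regexes, which do.)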

    Read the article

  • Bloated PDF created by TCPDF

    - by Yogi Yang 007
    In a web app developed in PHP we are generating Quotations and Invoices (which are very simple, single-page documents) using the TCPDF lib. The lib is working just great, but it seems to generate very large PDF files. For example, in our case it is generating PDF files as large as 4 MB (+/- a few KB). How can we reduce this bloating of the PDF files generated by TCPDF? Here is the code snippet that I am using:

        ob_start();
        include('quote_view_bag_pdf.php'); // a valid HTML file with PHP code to insert data from the DB
        $quote = ob_get_contents();        // capture the content of the file and store it in a variable
        ob_end_clean();

        // Code to generate the PDF file for this Quote
        // This line is to fix a few errors in tcpdf
        $k_path_url = '';
        require_once('tcpdf/config/lang/eng.php');
        require_once('tcpdf/tcpdf.php');

        // create new PDF document
        $pdf = new TCPDF();
        // remove default header/footer
        $pdf->setPrintHeader(false);
        $pdf->setPrintFooter(false);
        // add a page
        $pdf->AddPage();
        // print HTML-formatted text
        $pdf->writeHTML($quote, true, 0, true, 0);

        // build the output file name
        $pdf_out_file = "pdf/Quote_".$_POST['quote_id']."_.pdf";
        // close and output the PDF document
        $pdf->Output($pdf_out_file, 'F');
        $pdf->Output($pdf_out_file, 'I');

    I hope this code fragment gives some idea of what we are doing.
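    Two usual suspects for TCPDF bloat are uncompressed page streams and embedded Unicode fonts. A hedged sketch of the first fix, assuming the zlib extension is available (setCompression and SetFont are standard TCPDF methods, but verify them against your TCPDF version):

        $pdf = new TCPDF();
        // zlib-compress the page content streams
        $pdf->setCompression(true);
        // core fonts (helvetica, times, courier) are not embedded in the file,
        // unlike Unicode TTF fonts such as dejavusans
        $pdf->SetFont('helvetica', '', 10);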

    Read the article

  • How to sort the file names in bash in this circumstance?

    - by Nicolas
    I have run a program to generate some results with different parameters (i.e. R, C and RP). These results are saved in files named results.txt, one per directory: in params_R_7_C_16_RP_0, the 7 is the value of parameter R, the 16 is the value of parameter C and the 0 is the value of parameter RP. Now I want to collect these results.txt files in the current directory to parse, and sort the paths by the parameter values of R, C and RP. I first use the following command to get the results.txt files that I want to parse:

        find ./ -name "results.txt"

    and the output is:

        ./params_R_11_C_9_RP_0/results.txt
        ./params_R_7_C_9_RP_0/results.txt
        ./params_R_7_C_4_RP_0/results.txt
        ./params_R_11_C_16_RP_0/results.txt
        ./params_R_9_C_4_RP_0/results.txt
        ./params_R_5_C_9_RP_0/results.txt
        ./params_R_9_C_25_RP_0/results.txt
        ./params_R_7_C_16_RP_0/results.txt
        ./params_R_5_C_25_RP_0/results.txt
        ./params_R_5_C_16_RP_0/results.txt
        ./params_R_11_C_4_RP_0/results.txt
        ./params_R_9_C_16_RP_0/results.txt
        ./params_R_7_C_25_RP_0/results.txt
        ./params_R_15_C_4_RP_0/results.txt
        ./params_R_5_C_4_RP_0/results.txt
        ./params_R_9_C_9_RP_0/results.txt

    so I change the command as follows:

        find ./ -name "results.txt" | sort

    and the output is:

        ./params_R_11_C_16_RP_0/results.txt
        ./params_R_11_C_25_RP_0/results.txt
        ./params_R_11_C_4_RP_0/results.txt
        ./params_R_11_C_9_RP_0/results.txt
        ./params_R_5_C_16_RP_0/results.txt
        ./params_R_5_C_25_RP_0/results.txt
        ./params_R_5_C_4_RP_0/results.txt
        ./params_R_5_C_9_RP_0/results.txt
        ./params_R_7_C_16_RP_0/results.txt
        ./params_R_7_C_25_RP_0/results.txt
        ./params_R_7_C_4_RP_0/results.txt
        ./params_R_7_C_9_RP_0/results.txt
        ./params_R_9_C_16_RP_0/results.txt
        ./params_R_9_C_25_RP_0/results.txt
        ./params_R_9_C_4_RP_0/results.txt
        ./params_R_9_C_9_RP_0/results.txt

    But I want the output in this order:

        ./params_R_5_C_4_RP_0/results.txt
        ./params_R_5_C_9_RP_0/results.txt
        ./params_R_5_C_16_RP_0/results.txt
        ./params_R_5_C_25_RP_0/results.txt
        ./params_R_7_C_4_RP_0/results.txt
        ./params_R_7_C_9_RP_0/results.txt
        ./params_R_7_C_16_RP_0/results.txt
        ./params_R_7_C_25_RP_0/results.txt
        ./params_R_9_C_4_RP_0/results.txt
        ./params_R_9_C_9_RP_0/results.txt
        ./params_R_9_C_16_RP_0/results.txt
        ./params_R_9_C_25_RP_0/results.txt
        ...

    I should have zero-padded the names, like params_R_005_C_004_RP_0, when generating the results, but it would take too much time to rerun the program. So I wonder if there is any way to achieve this with a bash command.
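    Telling sort to split on the underscore and compare the numeric fields numerically gets this ordering without renaming anything; a sketch, assuming GNU sort:

        # fields when split on '_': 1=./params 2=R 3=<R> 4=C 5=<C> 6=RP 7=<RP>/results.txt
        find . -name "results.txt" | sort -t _ -k 3,3n -k 5,5n -k 7,7n

        # GNU sort's "version sort" compares embedded digit runs numerically
        # and often works here as a one-flag alternative:
        find . -name "results.txt" | sort -V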

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as apc, eaccelerator, xcache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. php includes. Are the opcode caches actually faster than doing a require_once of a php file containing a var_export'd array of data? My tests are inconclusive, as my development box (XAMPP with PHP 5.3) keeps throwing errors when installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place, but that is not the question. Because a page script can have up to 50-60 helper files included, I have a feeling that if the site were under pressure it would buckle because of all the i/o this incurs. Ignore for the moment that the output cache in place would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for a script's execution into one single file. This is achievable and works well, but it has the side effect of dramatically increasing memory usage, even though technically the same code is being used. What are your thoughts and opinions on this?
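    For reference, the two-layer lookup described in Q1 is roughly the pattern below; a sketch, assuming APC, with the paths and key names invented:

        function cache_get($key) {
            // first layer: shared-memory user cache
            if (function_exists('apc_fetch')) {
                $hit = apc_fetch($key, $ok);
                if ($ok) return $hit;
            }
            // second layer: a php include holding a var_export'd value;
            // with an opcode cache loaded, this include is served from memory too
            $file = "/tmp/cache/$key.php";
            if (is_file($file)) {
                return include $file;  // the include returns the var_export'd value
            }
            return null;
        }

        function cache_set($key, $value) {
            if (function_exists('apc_store')) {
                apc_store($key, $value);
            }
            file_put_contents("/tmp/cache/$key.php",
                '<?php return ' . var_export($value, true) . ';');
        }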

    Read the article

  • Changing customErrors in web.config semi-dynamically

    - by Tom Ritter
    The basic idea is we have a test environment which mimics Production, so customErrors="RemoteOnly". We just built a test harness that runs against the test environment and detects breaks, and we would like it to be able to pull back the detailed error. But we don't want to turn customErrors="On" because then it doesn't mimic Production. I've looked around and thought a lot, and everything I've come up with isn't possible. Am I wrong about any of these points?

    - We can't turn customErrors on at runtime, because when you call configuration.Save() it writes the web.config to disk, and now it's Off for every request.
    - We can't symlink the files into a new top-level directory with its own web.config, because we're on Windows and Subversion on Windows doesn't do symlinks.
    - We can't use URL mapping to make an empty folder dir2 with its own web.config and make the files in dir1 appear to be in dir2; the web.config doesn't apply.
    - We can't copy all the aspx files into dir2 with its own web.config, because none of the links would be consistent, and it's a horribly hacky solution.
    - We can't change customErrors in web.config based on hostname (e.g. add another DNS entry to the test server) because it's not possible/supported.
    - We can't do any virtual directory shenanigans to make it work.

    If I'm not wrong, is there a way to accomplish what I'm trying to do: turn on customErrors site-wide under certain circumstances (DNS name or even a querystring value)?
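    One way around the whole constraint list is to capture the exception yourself in Application_Error and reveal it only to the harness, leaving customErrors untouched. A sketch, assuming a Global.asax; the header name and token here are invented:

        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            // only reveal details when the test harness identifies itself
            if (Request.Headers["X-Test-Harness"] == "expected-token")
            {
                Server.ClearError();
                Response.StatusCode = 500;
                Response.ContentType = "text/plain";
                Response.Write(ex.ToString());
            }
            // otherwise fall through to the normal customErrors page
        }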

    Read the article

  • MS Source Server - source stream is apparently not there when viewing with srctool

    - by Tim Peel
    Hi, I have been playing around with the MS Source Server tools in the MS Debugging Tools install. At present, I am running my code/pdbs through the Subversion indexing command, which is now running as expected: it creates the stream for a given pdb file and writes it into the pdb. However, when I use that DLL and associated pdb in Visual Studio 2008, it says the source code cannot be retrieved. If I check the pdb with srctool, it says none of the source files contained are indexed, which is very strange, as the prior process ran fine. If I check the stream that was generated from the svnindex.cmd run for the pdb, srctool says all source files are indexed. Why would there be a difference? I have opened the pdb file in a text editor and I can see the original references to the source files on my machine (also under the srcsrv header name) and the new "injected" source server links to my Subversion repository. Should both references still exist in the pdb? I would have expected one to be removed. Either way, Visual Studio 2008 will not pick up my source references, so I am a bit lost as to what to try next. As far as I can tell, I have done everything I should have. Anyone have similar experiences? Many thanks.

    Read the article

  • Need MySQL RLIKE expression to exclude certain strings ending in particular characters...

    - by user299508
    So I've been working with RLIKE to pull some data in a new application and mostly enjoying it. To date I've been using RLIKE queries to return 3 types of results (files, directories and everything). The queries (and example results) follow:

    All:

        SELECT * FROM public WHERE obj_owner_id='test' AND
        obj_namespace RLIKE '^user/test/public/[-0-9a-z_./]+$' ORDER BY obj_namespace

        user/test/public/a-test/.comment
        user/test/public/a-test/.delete
        user/test/public/directory/
        user/test/public/directory/image.jpg
        user/test/public/index
        user/test/public/site-rip
        user/test/public/site-rip2
        user/test/public/test-a
        user/test/public/widget-test

    Files:

        SELECT * FROM public WHERE obj_owner_id='test' AND
        obj_namespace RLIKE '^user/test/public/[-0-9a-z_./]+[-0-9a-z_.]+$' ORDER BY obj_namespace

        user/test/public/a-test/.comment
        user/test/public/a-test/.delete
        user/test/public/directory/image.jpg
        user/test/public/index
        user/test/public/site-rip
        user/test/public/site-rip2
        user/test/public/test-a
        user/test/public/widget-test

    Directories:

        SELECT * FROM public WHERE obj_owner_id='test' AND
        obj_namespace RLIKE '^user/test/public/[-0-9a-z_./]+/$' ORDER BY obj_namespace

        user/test/public/directory/

    This works well for the above 3 basic scenarios, but under certain situations I'll be including special 'suffixes' I'd like to have excluded from the results (without having to resort to PHP functions to do it). A good example of such a string would be:

        user/test/public/a-test/.delete

    That row (there are more columns than just obj_namespace) is considered deleted, and in the Files and All queries I'd like it to be omitted within the expression if possible. Same goes for the /.comment entries; all such metadata will always be in the same format: /.[sometext]. I'd hoped to use this feature extensively throughout my application, so I'm hoping there might be a very simple answer. (crosses fingers) Anyway, thanks as always for any/all responses and feedback.
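    MySQL's RLIKE uses POSIX regular expressions (no lookahead before MySQL 8.0), but the metadata rows can be excluded with a second, negated pattern instead. A sketch against the question's own Files query:

        -- Files, minus the /.comment and /.delete style metadata rows:
        SELECT * FROM public
        WHERE obj_owner_id = 'test'
          AND obj_namespace RLIKE '^user/test/public/[-0-9a-z_./]+[-0-9a-z_.]+$'
          AND obj_namespace NOT RLIKE '/\\.[a-z]+$'
        ORDER BY obj_namespace;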

    Read the article

  • java heap allocation

    - by gurupriyan.e
    I tried to increase the heap size like this:

        C:\Data\Guru\Code\Got\adminservice\adminservice>java -Xms512m -Xmx512m
        Usage: java [-options] class [args...]
                   (to execute a class)
           or  java [-options] -jar jarfile [args...]
                   (to execute a jar file)

        where options include:
            -client       to select the "client" VM
            -server       to select the "server" VM
            -hotspot      is a synonym for the "client" VM  [deprecated]
                          The default VM is client.

            -cp <class search path of directories and zip/jar files>
            -classpath <class search path of directories and zip/jar files>
                          A ; separated list of directories, JAR archives,
                          and ZIP archives to search for class files.
            -D<name>=<value>
                          set a system property
            -verbose[:class|gc|jni]
                          enable verbose output
            -version      print product version and exit
            -version:<value>
                          require the specified version to run
            -showversion  print product version and continue
            -jre-restrict-search | -jre-no-restrict-search
                          include/exclude user private JREs in the version search
            -? -help      print this help message
            -X            print help on non-standard options
            -ea[:<packagename>...|:<classname>]
            -enableassertions[:<packagename>...|:<classname>]
                          enable assertions
            -da[:<packagename>...|:<classname>]
            -disableassertions[:<packagename>...|:<classname>]
                          disable assertions
            -esa | -enablesystemassertions
                          enable system assertions
            -dsa | -disablesystemassertions
                          disable system assertions
            -agentlib:<libname>[=<options>]
                          load native agent library <libname>, e.g. -agentlib:hprof
                          see also, -agentlib:jdwp=help and -agentlib:hprof=help
            -agentpath:<pathname>[=<options>]
                          load native agent library by full pathname
            -javaagent:<jarpath>[=<options>]
                          load Java programming language agent, see java.lang.instrument

    It gave the help message as above. Does that mean the heap was allocated?
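    No: java printed its usage text because it was given options but no program to run, so no JVM (and no heap) was ever started. The flags take effect when launching an actual class or jar, e.g. (class and jar names invented here):

        # run a class with a 512 MB initial and maximum heap
        java -Xms512m -Xmx512m com.example.AdminService

        # or run a jar the same way
        java -Xms512m -Xmx512m -jar adminservice.jar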

    Read the article

  • Why doesn't Perl file glob() work outside of a loop in scalar context?

    - by Rob
    According to the Perl documentation on file globbing, the <*> operator or glob() function, when used in a scalar context, should iterate through the list of files matching the specified pattern, returning the next file name each time it is called, or undef when there are no more files. But the iterating process only seems to work from within a loop. If it isn't in a loop, then it seems to start over immediately, before all values have been read. From the Perl docs:

        In scalar context, glob iterates through such filename expansions, returning
        undef when the list is exhausted.
        http://perldoc.perl.org/functions/glob.html

        However, in scalar context the operator returns the next value each time
        it's called, or undef when the list has run out.
        http://perldoc.perl.org/perlop.html#I/O-Operators

    Example code:

        use warnings;
        use strict;

        my $filename;

        # in scalar context, <*> should return the next file name
        # each time it is called, or undef when the list has run out
        $filename = <*>;
        print "$filename\n";

        $filename = <*>;        # doesn't work as documented: starts over and
        print "$filename\n";    # always returns the same file name

        $filename = <*>;
        print "$filename\n";

        print "\n";

        print "$filename\n" while $filename = <*>;  # works in a loop, returning the
                                                    # next file each time it is called

    In a directory with 3 files (file1.txt, file2.txt, and file3.txt), the above code will output:

        file1.txt
        file1.txt
        file1.txt

        file1.txt
        file2.txt
        file3.txt

    Note: the actual perl script should be outside the test directory, or you will see the file name of the script in the output as well. Am I doing something wrong here, or is this how it is supposed to work?
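    This is how it is supposed to work, even if the docs gloss over it: each textual instance of <*> (each glob call site) keeps its own private iterator, so the three separate <*> expressions above are three independent iterations, each returning its own first result. Only a single instance called repeatedly, like the one in the while loop, advances. A sketch that reuses one call site outside a loop:

        use strict;
        use warnings;

        # one glob call site wrapped in a sub: every call advances the same iterator
        my $next_file = sub { scalar glob('*') };

        print $next_file->(), "\n";   # file1.txt
        print $next_file->(), "\n";   # file2.txt
        print $next_file->(), "\n";   # file3.txt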

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in php projects. There are a number of Rails-style migration projects available for php; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches.

    General problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. If branch A has newer migration files, you need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch. This must be done from A, since the actually applied patches are not available in B. Then check out branch B and migrate to the newest B patch. Reverse the process again when going from B to A.

    Proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method. When changing branches, compare the patches that have been run to the patches available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the db table of run patches and the patches in the destination branch, by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically merge down to the nearest shared patch using the down() methods stored in the db table, and then migrate up to the branch's latest patch.

    My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
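    For what it's worth, the bookkeeping the proposal implies is small; a sketch of the table, with all names invented:

        CREATE TABLE applied_migrations (
            -- timestamp or hash identifying the patch, comparable across branches
            id         VARCHAR(40) NOT NULL PRIMARY KEY,
            applied_at DATETIME    NOT NULL,
            -- serialized down() body: replayable even after a branch switch
            -- has hidden the original migration file
            down_sql   MEDIUMTEXT  NOT NULL
        );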

    Read the article

  • Error while splitting application context file in spring

    - by Krupal
    I am trying to split the ApplicationContext file in Spring. For example, the file testproject-servlet.xml has all the entries. Now I want to split this single file into multiple files according to logical groups, like group1-services.xml and group2-services.xml. I have created the following entries in web.xml:

        <servlet>
            <servlet-name>testproject</servlet-name>
            <servlet-class>
                org.springframework.web.servlet.DispatcherServlet
            </servlet-class>
            <init-param>
                <param-name>contextConfigLocation</param-name>
                <param-value>
                    /WEB-INF/group1-services.xml,
                    /WEB-INF/group2-services.xml
                </param-value>
            </init-param>
            <load-on-startup>1</load-on-startup>
        </servlet>

    I am using SimpleUrlHandlerMapping as: RegisterController, PayrollServicesController. I also have the controller defined as: .. ..

    The problem is that I have split the ApplicationContext file testproject-servlet.xml into two different files, and I have kept the above entries in group1-services.xml. Is that fine? I want to group things logically based on their use in separate .xml files. But I get the following error when I try to access a page inside the application:

        org.springframework.web.servlet.DispatcherServlet noHandlerFound
        WARNING: No mapping for [/TestProject/payroll_services.htm] in DispatcherServlet with name 'testproject'

    Please tell me how to resolve it. Thanks in advance!
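    The warning means none of the loaded files contributed a handler mapping for that URL, so it is worth checking that the SimpleUrlHandlerMapping in group1-services.xml really covers it; a sketch, with the bean names invented:

        <!-- group1-services.xml: make sure the mapping covers the failing URL -->
        <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
            <property name="mappings">
                <props>
                    <prop key="/register.htm">registerController</prop>
                    <prop key="/payroll_services.htm">payrollServicesController</prop>
                </props>
            </property>
        </bean>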

    Read the article

  • Installing applications OTA

    - by Ed Marty
    I have a system set up to download jad files to users' Blackberries, but it only works intermittently, and seemingly randomly. If the user clicks on the link within their BlackBerry browser, 95% of the time an error message will pop up on the first try saying there was an HTTP 500 error (which our server never returns). Viewing the details of this message within the BlackBerry browser, it says nothing but:

        java.lang.nullpointerexception

    which, again, could not have come from our server (running apache/php). However, if the user clicks on the link a few more times, or navigates away and goes back to that page, it suddenly works: no change on the server, it just shows the application install screen. Unfortunately, this doesn't always work; sometimes the error 500 just keeps showing up. The link is rather long (containing an sha hash as a token as part of the URL), but I would think that a long URL would either always be broken or always work, not work intermittently. The link uses a php script to download the jad and cod files. Linking to the files directly rather than using the script seems to work more often (I haven't determined whether that also ever gives an error 500), but I can't find any issues with the headers: the content type is set correctly, and, like I said, if the headers were an issue, I'd think it would either always work or always break. Any clues?

    Read the article

  • Industry-style practices for increasing productivity in a small scientific environment

    - by drachenfels
    Hi, I work in a small, independent scientific lab in a university in the United States, and it has come to my notice that, compared with a lot of practices that are ostensibly followed in industry, like daily check-ins to a version control system or use of a single IDE/editor for all languages (like emacs), we follow rather shoddy programming practices. So I was thinking of getting together all my programs, scripts, etc., and building a streamlined environment to increase productivity. I'd like suggestions from people on Stack Overflow for the same. Here is my primary plan (questions/things for which I'd like suggestions are in italics). I use MATLAB, C and Python scripts, and I'd like to edit and compile them from a single editor, and ensure correct version control.

    1] Install Cygwin, and get it to work well with Windows so I can use git or a similar version control system (is there a DVCS which can work directly from the Windows CLI, so I can skip the Cygwin step?).

    2] Set up emacs to work with C, Python, and MATLAB files, so I can edit and compile all three from a single editor (I'm not very familiar with the emacs menu, but is there a way to set the path to the compiler for certain languages? I know I can Google this, but emacs documentation has proved very hard for me to read so far, so I'd appreciate it if someone told me in simple language; a sketch of one approach follows after this list).

    3] Start checking in code at the end of each day or half-day so as to maintain a proper record of the progress of my code (two questions: can you check out files directly from emacs? and is there a way to check LabVIEW files into a DVCS like git?).

    Lastly, I'd like to apologize for the rather vague nature of the question, and hope I shall learn to ask better questions over time. I'd appreciate it if people gave their suggestions, though, and pointed to any resources which may help me learn.
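    For 2], per-mode compile commands are the usual emacs idiom; a sketch for a .emacs, with the toolchain invocations invented:

        ;; point M-x compile at the right toolchain for each language
        (add-hook 'c-mode-hook
                  (lambda ()
                    (set (make-local-variable 'compile-command)
                         (concat "gcc -Wall -o prog " (buffer-file-name)))))
        (add-hook 'python-mode-hook
                  (lambda ()
                    (set (make-local-variable 'compile-command)
                         (concat "python " (buffer-file-name)))))

    (On 1]: msysGit provides a native Windows git that runs from cmd.exe, so the Cygwin step is skippable.)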

    Read the article

  • git stash blunder

    - by Chirag Patel
    I did a git stash pop and ended up with merge conflicts. I removed the files from the file system and did a git checkout as shown below, but it thinks the files are still unmerged. I then tried replacing the files and doing a git checkout again: same result. I even tried forcing it with the -f flag. Any help would be appreciated!

        chirag-patels-macbook-pro:haloror patelc75$ git status
        app/views/layouts/_choose_patient.html.erb: needs merge
        app/views/layouts/_links.html.erb: needs merge
        # On branch prod-temp
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #   modified:   db/schema.rb
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   unmerged:   app/views/layouts/_choose_patient.html.erb
        #   unmerged:   app/views/layouts/_links.html.erb

        chirag-patels-macbook-pro:haloror patelc75$ git checkout app/views/layouts/_choose_patient.html.erb
        error: path 'app/views/layouts/_choose_patient.html.erb' is unmerged

        chirag-patels-macbook-pro:haloror patelc75$ git checkout -f app/views/layouts/_choose_patient.html.erb
        warning: path 'app/views/layouts/_choose_patient.html.erb' is unmerged
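    git checkout refuses paths the index marks as unmerged until the conflict state is cleared, so resetting the path first is the usual fix. A sketch, assuming the stashed changes to these files should be discarded:

        # forget the unmerged index state, then restore the file from HEAD
        git reset HEAD app/views/layouts/_choose_patient.html.erb
        git checkout -- app/views/layouts/_choose_patient.html.erb

        # alternatively (git 1.6.1+), keep one side of the conflict without resetting:
        git checkout --ours -- app/views/layouts/_links.html.erb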

    Read the article

  • php $_FILES error code returning an unexpected error

    - by EmmyS
    I have some code that allows users to upload multiple files at once. It was never getting to a specific point after the upload, so I put in an echo to test for the value of the error code, and it's returning a value that I'm not sure I understand. Here's the code:

        $tmpTarget = PCBUG_UPLOADPATH;
        foreach ($_FILES["attachments"]["error"] as $key => $error) {
            if ($error == UPLOAD_ERR_OK) {
                $tmp_name = $_FILES["attachments"]["tmp_name"][$key];
                $name = str_replace(" ", "_", $_FILES["attachments"]["name"][$key]);
                move_uploaded_file($tmp_name, "$tmpTarget/$name");
                @unlink($_FILES["attachments"]["tmp_name"][$key]);
            } else {
                $errorFlag = true;
                echo "error = $error";
                exit;
            }
        }

    The code that creates the attachments field looks like this:

        for ($i = 1; $i <= $max_no_img; $i++) {
            echo "<input type=file name='attachments[]' class='bginput'><br />";
        }

    where $max_no_img is a variable set further up in the code, and PCBUG_UPLOADPATH is a constant defined in an included file. Here's what's confusing: after I submit my form, I go and look in my uploads directory, and the files I've selected through the form are there; they uploaded correctly. However, the code is jumping into the else clause and $error is returning 4, which the PHP manual indicates means that no file was uploaded. Any ideas? The files very clearly are getting where they're supposed to. Is there some other definition of "uploaded" that isn't happening?
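    Error code 4 is UPLOAD_ERR_NO_FILE, and with $max_no_img inputs on the form, every input the user left empty reports it; the files that were chosen really did upload. A sketch of the loop treating empty slots as normal rather than fatal:

        foreach ($_FILES["attachments"]["error"] as $key => $error) {
            if ($error == UPLOAD_ERR_NO_FILE) {
                continue;  // an empty <input type=file>, not a failure
            }
            if ($error == UPLOAD_ERR_OK) {
                $tmp_name = $_FILES["attachments"]["tmp_name"][$key];
                $name = str_replace(" ", "_", $_FILES["attachments"]["name"][$key]);
                move_uploaded_file($tmp_name, PCBUG_UPLOADPATH . "/$name");
            } else {
                $errorFlag = true;  // a genuine failure (size limit, partial upload, ...)
            }
        }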

    Read the article

  • Python MD5 Hash Faster Calculation

    - by balgan
    Hi everyone. I will try my best to explain my problem and my line of thought on how I think I can solve it. I use this code:

        for root, dirs, files in os.walk(downloaddir):
            for infile in files:
                f = open(os.path.join(root, infile), 'rb')
                filehash = hashlib.md5()
                while True:
                    data = f.read(10240)
                    if len(data) == 0:
                        break
                    filehash.update(data)
                print "FILENAME: ", infile
                print "FILE HASH: ", filehash.hexdigest()

    and using:

        start = time.time()
        elapsed = time.time() - start

    I measure how long it takes to calculate a hash. Pointing my code at a 653 MB file, this is the result:

        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME: freebsd.iso
        FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab
        12.624
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME: freebsd.iso
        FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab
        12.373
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME: freebsd.iso
        FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab
        12.540

    OK, so about 12 seconds for a 653 MB file. My problem is that I intend to use this code on a program that will run through multiple files, some of which might be 4/5/6 GB, and it will take way longer to calculate. What I am wondering is whether there is a faster way to calculate the hash of a file. Maybe by doing some multithreading? I used another script to check the use of the CPU second by second, and I see that my code is only using 1 of my 2 CPUs, and only at 25% max. Can I change this in any way? Thank you all in advance for the given help.
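    With a 10 KB read size, a noticeable share of the time goes to Python-level loop overhead; reading in much larger chunks usually helps more than threads, since hashing one file is inherently serial and a single disk is the usual bottleneck anyway. A sketch:

        import hashlib

        def md5_file(path, chunk_size=1024 * 1024):  # 1 MiB reads instead of 10 KiB
            h = hashlib.md5()
            with open(path, 'rb') as f:
                while True:
                    data = f.read(chunk_size)
                    if not data:
                        break
                    h.update(data)
            return h.hexdigest()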

    Read the article

  • How Can I Find Out *HOW* My Site Was Hacked? How Do I Find Site Vulnerabilities?

    - by Imageree
    One of my custom developed ASP.NET sites was hacked today: "Hacked By Swan (Please Stop Wars !.. )". It is using ASP.NET and SQL Server 2005 on IIS 6.0 and Windows 2003 Server. I am not using Ajax, and I think I am using stored procedures everywhere I connect to the database, so I don't think it is SQL injection. I have now removed the write permission on the folders. How can I find out what they did to hack the site, and what can I do to prevent it from happening again? The server is up to date with all Windows updates. What they have done is upload 6 files (index.asp, index.html, index.htm, ...) to the main directory of the website. What log files should I upload? I have log files for IIS from this folder: c:\winnt\system32\LogFiles\W3SVC1. I am willing to show them to some of you, but I don't think it is good to post them on the Internet. Anyone willing to take a look? I have already searched on Google, but the only thing I find there are other sites that have been hacked the same way; I haven't been able to find any discussion about it. I know this is not strictly related to programming, but this is still an important thing for programmers, and a lot of programmers have been hacked like this.

    Read the article

  • Copy folders when copying list items from source to destination

    - by iHeartDucks
    Hi, this is my code to copy the items in a list from source to destination. Using the code below I am only able to copy files, but not folders. Any ideas on how I can copy the folders and the files within those folders?

        using (SPSite objSite = new SPSite(URL))
        {
            using (SPWeb objWeb = objSite.OpenWeb())
            {
                SPList objSourceList = null;
                SPList objDestinationList = null;
                try
                {
                    objSourceList = objWeb.Lists["Source"];
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error opening source list");
                    Console.WriteLine(ex.Message);
                }
                try
                {
                    objDestinationList = objWeb.Lists["Destination"];
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error opening destination list");
                    Console.WriteLine(ex.Message);
                }
                string ItemURL = string.Empty;
                if (objSourceList != null && objDestinationList != null)
                {
                    foreach (SPListItem objSourceItem in objSourceList.Items)
                    {
                        ItemURL = string.Format(@"{0}/Destination/{1}",
                            objDestinationList.ParentWeb.Url, objSourceItem.Name);
                        objSourceItem.CopyTo(ItemURL);
                        objSourceItem.UnlinkFromCopySource();
                    }
                }
            }
        }

    Thanks
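    SPList.Items enumerates only non-folder items, which is why the folders never come across; folder structure has to be walked explicitly, e.g. via SPFolder. A hedged sketch of a recursive copy, assuming the SharePoint server object model (treat the exact overloads as something to verify):

        // Recursively copy a folder tree between two libraries.
        static void CopyFolder(SPFolder source, SPFolder destinationParent)
        {
            // create the matching folder on the destination side
            SPFolder dest = destinationParent.SubFolders.Add(source.Name);

            // copy the files in this folder
            foreach (SPFile file in source.Files)
                file.CopyTo(dest.Url + "/" + file.Name, true);

            // then recurse into each subfolder
            foreach (SPFolder child in source.SubFolders)
                CopyFolder(child, dest);
        }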

    Read the article

  • Problem with Silverlight 3 projects in Web Developer Express 2008

    - by MNT
    Hi, I have a strange problem when working with Silverlight 3 projects in Web Developer Express 2008. Mainly, I cannot show the design view of a XAML file. Also, the XAML files (markup) are shown as plain text files (no syntax coloring and no IntelliSense). However, I can write an application that compiles and runs successfully. I have the following installed on my machine:

    - Windows XP SP3
    - Visual Web Developer Express 2008 SP1 & .NET 3.5 SP1
    - SL3 requirements

    I had a few problems while installing the SL3 SDK & tools for VS. I repeated the process many times until the installation succeeded. The main problem was in the "SL Tools for VS" installation, where I used to get an error in the middle. My workaround was to extract the MSI file and manually run the VWDxxx installer from the extracted files. Is this the cause of the problem? Kindly advise, as it's impractical to work without IntelliSense. Thank you.

    Read the article

  • Eclipse (Aptana) Typing Lag

    - by Zack
    Hello SO, I've been using Aptana for some time now, and recently I've been dealing with files that are really, really big (500+ lines of code, which is huge for me, being a novice developer). With smaller files I only get a weird sensation that I'm "in front of" what's typing, but now I'm quite sure of it: there is a significant lag between when I type something and when I see the text appear on screen. I don't have this issue with Dreamweaver CS3, so I know my computer has the capability to edit these files without this happening, but Eclipse still lags. I also don't see characters disappearing while I hold down backspace: I see the first few characters get deleted, but then everything "hangs", and once I release the backspace key, the characters that would have been shown deleting instantly vanish all at once. The same thing happens with the forward delete key. I'm beginning to think this is an issue with Java, since I have the same feeling that everything is slightly "behind me" when I'm using any Java application. The computer is an Intel Pentium 4 3.2 GHz Prescott, with 2 GB of DDR400 RAM and a Radeon HD3650 graphics card. If anyone knows how to fix this lagging issue, I'm all ears (eyes?); if anyone can recommend a different IDE with capabilities similar to Aptana (I do Python, HTML, CSS and JS; I use Git for SCM), I'd be glad to give it a try. Thanks!
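    Not a cure-all, but a common first check for sluggish Eclipse-based IDEs is the JVM heap configured in eclipse.ini (or the equivalent Aptana ini); a sketch, with values that are guesses for a 2 GB machine:

        -vmargs
        -Xms256m
        -Xmx512m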

    Read the article

  • How do I display results of phpcs in VIM?

    - by Matt
    I am presently trying to use PHP CodeSniffer (PEAR) in vim for PHP files. I have found 2 sites that give code to add to the $HOME/.vim/plugin/phpcs.vim file. I have added the code and I "think" it is working, but I cannot see the results; I only see one line at the very bottom of vim that says (1 of 32), but I cannot see any of the 32 errors. Here is my .vimrc file:

        " Backup options -> some people may not want this... it generates extra files
        set backup              " Enable backups
        set backupext=.bak      " Add .bak extension to modified files
        set patchmode=.orig     " Copy original file to .orig extension before saving

        " Set tabs and spacing for PHP as recommended by PEAR and Zend
        set expandtab
        set shiftwidth=4
        set softtabstop=4
        set tabstop=4

        " Set auto-indent options
        set cindent
        set smartindent
        set autoindent

        " Show lines that exceed 80 characters
        match ErrorMsg '\%80v.\+'

        " Set colors
        set background=dark

        " Show a status bar
        set ruler
        set laststatus=2

        " Set search options: highlight, and wrap search
        set hls is
        set wrapscan

        " File type detection
        filetype on
        filetype plugin on

        " Enable spell checking
        set spell

        " Enable code folding
        set foldenable
        set foldmethod=syntax

        " PHP specific options
        let php_sql_query=1      " Highlight SQL in PHP strings
        let php_htmlInStrings=1  " Highlight HTML in PHP strings
        let php_noShortTags=1    " Disable PHP short tags
        let php_folding=1        " Enable ability to fold HTML code

    I have tried 2 different versions of phpcs.vim, and I get the same results for both.

    Version 1 (found at "VIM as a PHP IDE"):

        function! RunPhpcs()
            let l:filename=@%
            let l:phpcs_output=system('phpcs --report=csv --standard=YMC '.l:filename)
            " echo l:phpcs_output
            let l:phpcs_list=split(l:phpcs_output, "\n")
            unlet l:phpcs_list[0]
            cexpr l:phpcs_list
            cwindow
        endfunction

        set errorformat+=\"%f\"\\,%l\\,%c\\,%t%*[a-zA-Z]\\,\"%m\"
        command! Phpcs execute RunPhpcs()

    Version 2 (found at "Integrated PHP Codesniffer in VIM"):

        function! RunPhpcs()
            let l:filename=@%
            let l:phpcs_output=system('phpcs --report=csv --standard=YMC '.l:filename)
            let l:phpcs_list=split(l:phpcs_output, "\n")
            unlet l:phpcs_list[0]
            cexpr l:phpcs_list
            cwindow
        endfunction

        set errorformat+="%f"\\,%l\\,%c\\,%t%*[a-zA-Z]\\,"%m"
        command! Phpcs execute RunPhpcs()

    Both of these produce identical results. phpcs is installed on my system, and I am able to generate results outside of vim. Any help would be appreciated; I am just learning more about vim...
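    Seeing "(1 of 32)" means the quickfix list was in fact populated; :cwindow only opens the window when entries match the errorformat, while :copen opens it unconditionally. A sketch of the commands to inspect the 32 results (and to test whether errorformat is the real problem):

        :copen      " always open the quickfix window, even when cwindow stays shut
        :cnext      " jump to the next phpcs message
        :cprev      " jump to the previous one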

    Read the article

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web applications to production. Currently, that group has access to our code repository in VSS and deploys code that has been checked into VSS. How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to an IIS virtual folder, but I don't think we can expect the deployment group to purchase a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that group would need to perform the deployment? Would Team Explorer with a Client Access License suffice?

    I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to production by copying aspx and dll files from the QA environment to production, or do you normally deploy from TFS, or even from VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release, or that those files should be checked into TFS and then deployed from TFS, assuming you can deploy from TFS.

    What confuses me is whether the bin (binary) files that are local to the project go into TFS. If so, doesn't this create problems for other developers, in that only one developer (the one with the binaries checked out) can actually debug, because debugging requires write access to the binaries? Does this mean the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries have to be added to TFS. Are they added as a separate (compiled) application node? If so, this sounds really ugly; I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number? Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments, in particular using TFS?

    Read the article

  • Flash - Uploading to and Downloading from localhost

    - by Md Derf
    I have an online Flash application that acts as a front end for a server application built in Delphi. The server can be installed/used on a remote computer, or a personal version can be downloaded and the Flash app pointed at localhost to use it. However, Flash has issues with using the POST and GET functions on localhost, which makes uploading data files and downloading results files difficult. To get past the difficulty with downloading results files, I'm planning to just have the server app serve the results file as an attachment and have the Flash app open the address of the file in another browser window using ExternalInterface. First off, is this likely to cause similar security issues, i.e. will Flash see "localhost" in the ExternalInterface call and stop it from working, the same as when I try to use the POST/GET functions? Secondly, upload seems just a little trickier: I'm planning on doing something similar, having Flash use ExternalInterface to open a PHP script for a file upload. Is this feasible, and again, will Flash still have security issues? Lastly, if anyone knows how to get Flash to execute POST and GET functions with localhost addresses, I'd love to have that information to avoid all this jumping through hoops.

    Read the article

  • How do I run a command as a different user from a root cronjob?

    - by rob
    I seem to be stuck between an NFS limitation and a cron limitation. I've got root's cron (on RHEL5) running a shell script that, among other things, needs to rsync some files over an NFS mount. The files on the NFS mount are owned by the apache user with mode 700, so only the apache user can run the rsync command; running it as root yields a permission error (NFS being a rare case, apparently, where the root user is not all-powerful?).

    When I just want to run the rsync by hand, I can use "sudo -u apache rsync ...". But sudo no workie in cron: it says "sudo: sorry, you must have a tty to run sudo". I don't want to run the whole script as apache (i.e. from apache's crontab) because other parts of the script do require root; it's just that one command that needs to run as apache. And I would really prefer not to change the mode on the files, as that would involve significant changes to other applications. There's gotta be a way to accomplish "sudo -u apache" from cron?? thanks! rob
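    Since the job already runs as root, su can drop privileges without needing a tty (the "must have a tty" refusal comes from sudo's requiretty default on RHEL). A sketch, with the rsync paths invented:

        # inside the root cron script: run just this one command as apache;
        # -s forces a usable shell in case apache's login shell is nologin
        su -s /bin/sh apache -c 'rsync -a /data/src/ /mnt/nfs/dest/'

    (RHEL5 also ships runuser, which exists for exactly this root-to-user case; alternatively, a "Defaults:root !requiretty" line in /etc/sudoers lifts the tty requirement for root.)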

    Read the article

  • How to properly force a Blackberry Java application to install using Loader.exe

    - by Kevin White
    I want to include the Application Loader process in a software installation, to ensure that users get our software installed on their BlackBerry by the time our installer finishes. I know this is possible, because Aerize Card Loader (http://aerize.com/blackberry/software/loader/) does this: when you install their software, if your BlackBerry is connected, the Application Loader comes up and forces the .COD file to install to the device. I can't make it work. Looking at RIM's own documentation, I need to:

    1. Place the ALX and COD files into a subfolder here: C:\Program Files\Common Files\Research In Motion\Shared\Applications\
    2. Add a path to the ALX file in HKCU\Software\Research In Motion\Blackberry\Loader\Packages
    3. Index the application, by executing this at the command line: loader.exe /index
    4. Start the force load, by doing this: loader.exe /defaultUSB /forceload

    When I execute that last command, the Application Loader comes up and says that all applications are up to date and nothing needs to be done. If I execute loader.exe by double-clicking on it (or typing the command with no parameters), I get the regular Application Loader wizard. It shows my program as listed, but unchecked; if I check it and click Next, it will install to the BlackBerry. (This is the part that I want to avoid, and that Aerize Card Loader's install process avoids.) What am I missing? It appears that the Aerize installer is doing something different, but I haven't been able to ascertain what.

    Read the article
