Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.


  • BASH statements execute alone but return "no such file" in for loop.

    - by reve_etrange
    Another one I can't find an answer for, and it feels like I've gone mad. I have a BASH script using a for loop to run a complex command (many protein sequence alignments) on a lot of files (~5000). The loop produces statements that will execute when given alone (i.e. copy-pasted from the error message to the command prompt), but which return "no such file or directory" inside the loop. The script is below; there are actually several more arguments, but this includes some representative ones and the file arguments.

        #!/bin/bash
        # Pass directory with targets as FASTA sequences as argument.
        # Arguments to psiblast
        # Common
        db=local/db/nr/nr
        outfile="/mnt/scratch/psi-blast"
        e=0.001
        threads=8
        itnum=5
        pssm="/mnt/scratch/psi-blast/pssm."
        pssm_txt="/mnt/scratch/psi-blast/pssm."
        pseudo=0
        pwa_inclusion=0.002
        for i in ${1}/*
        do
            filename=$(basename $i)
            "local/ncbi-blast-2.2.23+/bin/psiblast\
            -query ${i}\
            -db $db\
            -out ${outfile}/${filename}.out\
            -evalue $e\
            -num_threads $threads\
            -num_iterations $itnum\
            -out_pssm ${pssm}$filename\
            -out_ascii_pssm ${pssm_txt}${filename}.txt\
            -pseudocount $pseudo\
            -inclusion_ethresh $pwa_inclusion"
        done

    Running this script gives "<scriptname> line <last line before 'done'>: <attempted command>: No such file or directory". If I then paste the attempted command onto the prompt it will run. Each of these commands takes a couple of minutes to run.
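
    For illustration only (not part of the original post): wrapping the entire psiblast invocation in double quotes makes the shell treat the whole string as a single command name, which is a common cause of exactly this "No such file or directory" error even though the pasted command works on its own. A minimal sketch of the loop without the outer quotes, reusing the variables defined above:

        for i in "${1}"/*
        do
            filename=$(basename "$i")
            # Invoke psiblast directly; do not quote the whole command,
            # otherwise bash looks for an executable named after the entire string.
            local/ncbi-blast-2.2.23+/bin/psiblast \
                -query "$i" \
                -db "$db" \
                -out "${outfile}/${filename}.out" \
                -evalue "$e" \
                -num_threads "$threads" \
                -num_iterations "$itnum" \
                -out_pssm "${pssm}${filename}" \
                -out_ascii_pssm "${pssm_txt}${filename}.txt" \
                -pseudocount "$pseudo" \
                -inclusion_ethresh "$pwa_inclusion"
        done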

  • How to use WSDL2Java generated files?

    - by vikasde
    I generated the .java files using wsdl2java found in axis2-1.5. Now it generated the files in this folder structure: src/net/mycompany/www/services/. The files in the services folder are SessionIntegrationStub and SessionIntegrationCallbackHandler. I would like to consume the webservice now. I added the net folder to the CLASSPATH environment variable. My java file now imports the webservice using:

        import net.mycompany.www.services;

        public class test {
            public static void main(String[] args) {
                SessionIntegrationStub stub = new SessionIntegrationStub();
                System.out.println(stub.getSessionIntegration("test"));
            }
        }

    Now when I try to compile this using javac test.java I get: package net.mycompany.www does not exist. Any idea?
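
    For illustration (not part of the original post): javac resolves the package net.mycompany.www.services relative to the classpath root, so the directory on the CLASSPATH needs to be the parent of the net folder (here, src), and the import has to name a class rather than the bare package; the Axis2 library jars also have to be on the classpath for both compiling and running. A hedged sketch, assuming the generated stub class is net.mycompany.www.services.SessionIntegrationStub as in the post:

        // test.java -- compile and run with the generated sources' root on the classpath:
        //   javac -cp src test.java
        //   java  -cp src:. test      (use ';' instead of ':' on Windows)
        import net.mycompany.www.services.SessionIntegrationStub;

        public class test {
            public static void main(String[] args) throws Exception {
                // Constructor and operation names are taken from the post;
                // the exact generated signatures depend on the WSDL.
                SessionIntegrationStub stub = new SessionIntegrationStub();
                System.out.println(stub.getSessionIntegration("test"));
            }
        }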

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise with today's PCs, I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)? Thanks in advance for any ideas, Adam
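
    For illustration (not part of the original post): the tradeoff described above is between per-call overhead (small buffers) and progress-report granularity (large buffers). A hedged C# sketch of a copy loop with a configurable buffer and a progress event, using 64 KB as a commonly chosen middle ground (the class and event names are hypothetical):

        using System;
        using System.IO;

        public class CopyEngine
        {
            // Raised after each buffer-sized chunk; the main app can update a progress bar.
            public event Action<long, long> ProgressChanged;

            public void Copy(string sourcePath, string destPath, int bufferSize = 64 * 1024)
            {
                byte[] buffer = new byte[bufferSize];
                using (var input = new FileStream(sourcePath, FileMode.Open, FileAccess.Read))
                using (var output = new FileStream(destPath, FileMode.Create, FileAccess.Write))
                {
                    long total = input.Length, copied = 0;
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                        copied += read;
                        // Smaller buffers mean more frequent (and more responsive) updates.
                        var handler = ProgressChanged;
                        if (handler != null) handler(copied, total);
                    }
                }
            }
        }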

  • Declaring struct in header file

    - by wrongusername
    I've been trying to include a structure called "student" in a student.h file, but I'm not quite sure how to do it. My student.h file consists entirely of:

        #include <string>
        using namespace std;

        struct Student;

    while the student.cpp file consists entirely of:

        #include <string>
        using namespace std;

        struct Student
        {
            string lastName, firstName;
            // long list of other strings... just strings though
        };

    Unfortunately, files that use #include "student.h" come up with numerous errors like error C2027: use of undefined type 'Student', error C2079: 'newStudent' uses undefined struct 'Student' (where newStudent is a function with a Student parameter), and error C2228: left of '.lastName' must have class/struct/union. It appears the compiler (VC++) does not recognize struct Student from "student.h" or something? I have tried declaring the whole struct in "student.h", but it didn't help either. How can I declare struct Student in "student.h" so that I can just #include "student.h" and start using the struct? BTW, it seems there are no compiler errors in student.h...
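
    For illustration (not part of the original post): C2027/C2079/C2228 are what VC++ reports when only a forward declaration is visible at the point where Student members are used, so the usual fix is to put the full definition in the header, guarded against multiple inclusion. A minimal sketch:

        // student.h -- full definition, so any file that includes it can use the members
        #ifndef STUDENT_H
        #define STUDENT_H

        #include <string>

        struct Student
        {
            std::string lastName, firstName;
            // ...the other string fields go here...
        };

        #endif // STUDENT_H

    Writing std::string directly avoids pulling using namespace std into every file that includes the header.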

  • Makefiles - Compile all .cpp files in src/ to .o's in obj/, then link to binary in /

    - by Austin Hyde
    So, my project directory looks like this:

        /project
            Makefile
            main
            /src
                main.cpp
                foo.cpp
                foo.h
                bar.cpp
                bar.h
            /obj
                main.o
                foo.o
                bar.o

    What I would like my makefile to do would be to compile all .cpp files in the /src folder to .o files in the /obj folder, then link all the .o files in /obj into the output binary in the root folder /project. The problem is, I have next to no experience with Makefiles, and am not really sure what to search for to accomplish this. Also, is this a "good" way to do this, or is there a more standard approach to what I'm trying to do?
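
    For illustration (not part of the original post): a minimal GNU Make sketch of the layout described above, assuming g++ and an existing obj/ directory (recipe lines must be indented with a tab):

        # Build obj/%.o from src/%.cpp, then link everything into ./main
        CXX      := g++
        CXXFLAGS := -Wall -O2
        SRCS     := $(wildcard src/*.cpp)
        OBJS     := $(patsubst src/%.cpp,obj/%.o,$(SRCS))

        main: $(OBJS)
        	$(CXX) $(CXXFLAGS) -o $@ $^

        obj/%.o: src/%.cpp
        	$(CXX) $(CXXFLAGS) -c -o $@ $<

        clean:
        	rm -f main obj/*.o

    Header dependencies (foo.h, bar.h) are not tracked by this sketch; generating them with -MMD is the usual next step.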

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a 500+ GB sorted text file using binary search (essentially, "seek" to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, "seek" to half/double that byte offset, repeat until found...). I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
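
    For illustration (not part of the original post): a hedged Java sketch of the seek-and-backtrack binary search described above, using RandomAccessFile (class and method names are made up for the example; it assumes one record per newline-terminated line and a byte-comparable encoding such as ASCII for the prefix comparison):

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class SortedFileSearch {

            /** Returns a line starting with the given prefix, or null if none is found. */
            public static String findByPrefix(RandomAccessFile f, String prefix) throws IOException {
                long lo = 0, hi = f.length();
                while (lo < hi) {
                    long mid = lo + (hi - lo) / 2;
                    long lineStart = backtrackToLineStart(f, mid);
                    f.seek(lineStart);
                    String line = f.readLine();            // decodes bytes as Latin-1
                    if (line == null) break;
                    if (line.startsWith(prefix)) return line;
                    if (line.compareTo(prefix) < 0) lo = f.getFilePointer(); // search upper half
                    else hi = lineStart;                                     // search lower half
                }
                return null;
            }

            /** Walks backwards from pos to the byte just after the previous newline. */
            private static long backtrackToLineStart(RandomAccessFile f, long pos) throws IOException {
                while (pos > 0) {
                    f.seek(pos - 1);
                    if (f.read() == '\n') break;   // byte-at-a-time for clarity, not speed
                    pos--;
                }
                return pos;
            }
        }

    A MappedByteBuffer variant would look similar, but FileChannel.map is limited to 2 GB per mapping, so a 500 GB file has to be mapped in chunks.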

  • Problem with image path of html files viewed by webbrowser control

    - by Royson
    I have a WebBrowser control on my form, and I am able to display HTML files in that control. My pages contain some images: if I give an absolute path then the images are displayed, but if I give a relative path then the images are not shown in the pages. I have an HtmlPages folder located in the bin folder, and I am assigning:

        FileStream source = new FileStream(@"..\HtmlPages\supportHtml.html", FileMode.Open, FileAccess.Read);
        webBrowser.DocumentStream = source;

    If I assign D:\myapp\bin\HtmlPages\file.png then there is no problem. My images are stored in the same folder, and if I open the HTML files directly with a web browser then the images are displayed. What is the correct path to set?
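
    For illustration (not part of the original post): when a page is fed to the control through DocumentStream it has no base URL to resolve relative paths against; one common workaround is to navigate to the file on disk instead, so the HTML file's own folder becomes the base for relative image paths. A minimal sketch, assuming the HtmlPages folder sits next to the executable:

        using System;
        using System.IO;
        using System.Windows.Forms;

        static class SupportPageLoader
        {
            public static void ShowSupportPage(WebBrowser webBrowser)
            {
                // Absolute path rooted at the application folder, so it works
                // regardless of the current working directory.
                string htmlPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                               @"HtmlPages\supportHtml.html");
                // Navigating (rather than assigning DocumentStream) gives the control
                // a base URL, so <img src="file.png"> resolves inside HtmlPages.
                webBrowser.Navigate(new Uri(htmlPath));
            }
        }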

  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our Data Access Layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .Settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions, such as storing settings in a fixed directory on the server in, say, our own XML format, occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on, and I would rather keep the solution self-contained if possible. EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config... That puts a few (very good) answers out of context, my bad.

  • Opening a Unicode file with Perl

    - by Jaco Pretorius
    I'm using osql to run several SQL scripts against a database, and then I need to look at the results file to check whether any errors occurred. The problem is that Perl doesn't seem to like the fact that the results files are Unicode. I wrote a little test script and the output comes out all garbled:

        $file = shift;
        open OUTPUT, $file or die "Can't open $file: $!\n";
        while (<OUTPUT>) {
            print $_;
            if (/Invalid|invalid|Cannot|cannot/) {
                push(@invalids, $file);
                print "invalid file - $inputfile - schedule for retry\n";
                last;
            }
        }

    Any ideas? I've tried decoding using decode_utf8 but it makes no difference. I've also tried to set the encoding when opening the file. I think the problem might be that osql puts the result file in UTF-16 format, but I'm not sure. When I open the file in TextPad it just tells me 'Unicode'. Edit: Using perl v5.8.8.
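
    For illustration (not part of the original post): if the results file really is UTF-16 (which is what TextPad's 'Unicode' label usually means), one approach is to open it with an explicit PerlIO encoding layer so the lines are decoded before the regex runs. A hedged sketch, assuming UTF-16LE input (the layer is available on perl 5.8):

        use strict;
        use warnings;

        my $file = shift;
        my @invalids;

        # Decode the UTF-16LE results file on the way in;
        # use '<:encoding(UTF-16)' instead if the file starts with a BOM.
        open my $fh, '<:encoding(UTF-16LE)', $file
            or die "Can't open $file: $!\n";

        while (my $line = <$fh>) {
            print $line;
            if ($line =~ /invalid|cannot/i) {
                push @invalids, $file;
                print "invalid file - $file - schedule for retry\n";
                last;
            }
        }
        close $fh;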

  • Does SetFileBandwidthReservation affect memory-mapped file performance?

    - by Ghostrider
    Does this function affect memory-mapped file performance? Here's the problem I need to solve: I have two applications competing for disk access, "reader" and "updater". The whole system runs on Windows Server 2008 R2 x64. "Updater" constantly accesses the disk in a linear manner, updating data. The system is set up in such a way that the updater always has infinite data to update; consider that it is constantly approximating a solution to a huge set of equations that takes up an entire 2 TB disk drive. Updater uses ReadFile and WriteFile to process data in a linear fashion. "Reader" is occasionally invoked by the user to get some pieces of data. Usually the user would read several 4 kb blocks from the drive and stop; occasionally the user needs to read up to 100 MB sequentially, and in exceptional cases up to several gigabytes. Reader maps files to memory to get the data it needs. What I would like to achieve is for "reader" to have absolute priority, so that "updater" would completely stop if needed and "reader" could get the data the user needs ASAP. Is this problem solvable by using SetPriorityClass and SetFileBandwidthReservation calls? I would really hate to put synchronization logic into "reader" and "updater", and would rather have the OS take care of priorities.

  • VBScript - copy files modified in last 24 hours

    - by Martin North
    Hi, I'm trying to copy files from a directory where the last modified date is within 24 hours of the current date. I'm using a wildcard in the file path as it changes every day. I'm using:

        option explicit
        dim fileSystem, folder, file
        dim path
        path = "d:\x\logs"
        Set fileSystem = CreateObject("Scripting.FileSystemObject")
        Set folder = fileSystem.GetFolder(path)
        for each file in folder.Files
            If DateDiff("d", file.DateLastModified, Now) < 1 Then
                filesystem.CopyFile "d:\x\logs\apache_access_log-*", "d:\completed logs\"
                WScript.Echo file.Name & " last modified at " & file.DateLastModified
            end if
        next

    Unfortunately this seems to be copying all files, and not just the recently modified ones. Can anyone point me in the right direction? Many thanks, Martin.
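
    For illustration (not part of the original post): the CopyFile call above copies everything matching the wildcard on every pass of the loop, regardless of which file is currently being examined, and DateDiff("d", ...) compares calendar days rather than a 24-hour window. A minimal sketch that copies only the file being examined and compares hours:

        Option Explicit
        Dim fileSystem, folder, file

        Set fileSystem = CreateObject("Scripting.FileSystemObject")
        Set folder = fileSystem.GetFolder("d:\x\logs")

        For Each file In folder.Files
            ' "h" compares hours, so this means "modified within the last 24 hours"
            If DateDiff("h", file.DateLastModified, Now) < 24 Then
                ' Copy only the current file, not everything matching a wildcard
                fileSystem.CopyFile file.Path, "d:\completed logs\"
                WScript.Echo file.Name & " last modified at " & file.DateLastModified
            End If
        Next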

  • Changing where a resource is pulled during runtime?

    - by Brandon
    I have a website that goes out to multiple clients. Sometimes a client will insist on minor changes. For reasons beyond my control, I have to comply no matter how minor the request. Usually this isn't a problem: I would just create a client-specific version of the user control or page and overwrite the default one during build time, or make a configuration setting to handle it. Now that I am localizing the site, I'm curious about the best way to go about making minor wording changes. Let's say I have a resource file called Resources.resx that has 300 resources in it. It has a resource called Continue. The English value is "Continue", the French value is "Continuez". Now one client, for whatever reason, wants it to say "Next" and "Après", and the others want to keep it the same. What is the best way to accommodate a request like this? (This is just a simple example.) The only two ways I can think of are:

    1. Create another Resources.resx specific to the client, and replace the .dll during build time. Since I'd be completely replacing the dll, the new resource file would have to contain all 300 strings. The obvious problem being that I now have 2 resource files, each with 300 strings to maintain.
    2. Create a custom user control/page and change it to use a custom resource file, e.g. SignIn.ascx would be replaced during the build and it would pull its resources from ClientName.resx instead of Resources.resx.

    Are there any other things I could try? Is there any way to change it so that the application will always look in a ClientResources.resx file for the overridden values before actually looking at the specified resource file?

  • SVN is not recognizing the changed files

    - by Tom Brito
    I had made changes in a folder called "branch", and now that it's working I want to move the whole src folder to the "trunk" folder. But after copying src from the local branch, pasting it into the local trunk, and committing, SVN commits nothing; it's as if nothing had changed. Any idea how to commit this? Related question: http://stackoverflow.com/questions/206183/how-can-i-force-subversion-to-commit-an-unchanged-file (this would work for me, but I know nothing about properties; exactly which one could I change, and with which value, so as not to break something?)

  • Discovering files in the file system through SSIS

    - by cometbill
    I have a folder where files are going to be dropped for importing into my data warehouse: \\server\share\loading_area. I have the following (inherited) code that uses xp_cmdshell (shivers) to call out to the command shell, run the DIR command, and insert the resulting filenames into a table in SQL Server. I would like to 'go native' and reproduce this functionality in SSIS. Thanks in advance, guys and girls. Here's the code:

        USE MyDatabase
        GO

        declare @CMD varchar(500)
        declare @EXTRACT_PATH varchar(255)

        set @EXTRACT_PATH = '\\server\share\folder\'

        create table tmp_FILELIST(
            [FILENUM] int identity(1,1),
            [FNAME] varchar(100),
            [FILE_STATUS] varchar(20) NULL CONSTRAINT [DF_FILELIST_FILE_STATUS] DEFAULT ('PENDING'))

        set @CMD = 'dir ' + @EXTRACT_PATH + '*.* /b /on'

        insert tmp_FILELIST([FNAME])
        exec master..xp_cmdshell @CMD

        --remove the DOS reply when the folder is empty
        delete tmp_FILELIST where [FNAME] is null or [FNAME] = 'File Not Found'

        --Remove my administrative and default/common files not for importing, such as readme.txt
        delete tmp_FILELIST where [FNAME] is null or [FNAME] = 'readme.txt'

  • When and How to Delete Temporarily Uploaded but Uncommitted Files in ASP.NET

    - by slowlycooked
    I'm using the EO Ajax toolkit to upload files, and the file is uploaded to the server. When the user clicks Save, the database is updated to reflect what has been uploaded or changed. Now I need a clean-up process for the case where the user uploads a file to the server but then closes his/her browser before clicking the Save button. In this case, how should I program it so that the file the user just uploaded is deleted, because it's now useless and not associated with anything in the database? Thanks. Maybe I should upload all files to a temp folder, move the file to the target folder only when the user clicks Save, and delete the temp folder at the end of each session.
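
    For illustration (not part of the original post): the approach sketched in the last sentence is a common pattern: upload into a per-session temp folder, move files on Save, and delete whatever is left when the session ends. A hedged Global.asax sketch (Session_End only fires with in-process session state; the folder layout is hypothetical):

        // Global.asax.cs (sketch)
        using System;
        using System.IO;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Session_Start(object sender, EventArgs e)
            {
                // Give every session its own scratch area for uploads.
                string temp = Path.Combine(Server.MapPath("~/App_Data/TempUploads"),
                                           Session.SessionID);
                Directory.CreateDirectory(temp);
                Session["TempUploadPath"] = temp;
            }

            protected void Session_End(object sender, EventArgs e)
            {
                // Anything never moved by the Save button is discarded here.
                string temp = Session["TempUploadPath"] as string;
                if (temp != null && Directory.Exists(temp))
                    Directory.Delete(temp, true);
            }
        }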

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position, __int64 size_bytes ) const
        {
            if( !m_open )
            {
                // return error
            }
            else
            {
                seekPosition( start_position );
                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ), &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE )
                {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.

  • Where should I store my App specific config files in WPF

    - by Akash Deshpande
    Background: I have some application data, i.e. the database and some important config files. This data is vital for the application to start; without it the application exits. Problem: Where should I store this data, i.e. in which folder? Right now (this is wrong) it is stored in a folder under Debug/App_Data, but this causes issues in git, and when we publish the app the data is not found. So where should we store this folder? The present structure is "WpfApplication2\WpfApplication2\bin\Debug". These files need to be present when the app is started, so they need to be a part of the app itself.
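
    For illustration (not part of the original post): one common split is to ship read-only seed data alongside the executable (by marking the files "Copy to Output Directory" in the project, so they travel with every build and publish) and to put anything the app writes at runtime under the per-user application-data folder. A minimal sketch of resolving both locations (folder names hypothetical):

        using System;
        using System.IO;

        static class AppPaths
        {
            // Files deployed with the application (read-only at runtime):
            // mark them "Copy to Output Directory" so they end up next to the .exe.
            public static readonly string InstallDataDir =
                Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "App_Data");

            // Files the application creates or modifies at runtime go under
            // the per-user application-data folder (roams with the profile).
            public static readonly string UserDataDir =
                Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    "WpfApplication2");
        }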

  • Why Does 64-Bit Windows Need a Separate “Program Files (x86)” Folder?

    - by Jason Fitzpatrick
    If you’re currently using any 64-bit version of Windows you may have noticed there are two “Program Files” folders, one for 64-bit and one for 32-bit apps. Why does Windows need to sub-divide them? Read on to see why. Today’s Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

  • ISACA Webcast follow up: Managing High Risk Access and Compliance with a Platform Approach to Privileged Account Management

    - by Darin Pendergraft
    Last week we presented how Oracle Privileged Account Manager (OPAM) could be used to manage high-risk, privileged accounts. If you missed the webcast, here is a link to the replay: ISACA replay archive (NOTE: you will need to use Internet Explorer to view the archive). For those of you that did join us on the call, you will know that I only had a little bit of time for Q&A, and was only able to answer a few of the questions that came in. So I wanted to devote this blog to answering the outstanding questions. Here they are.

    1. Can OPAM track admin or DBA activity details during a password check-out session?
    Oracle Audit Vault is monitoring these activities, which can be correlated to check-out events.

    2. How would OPAM handle simultaneous requests?
    OPAM can be configured to allow for shared passwords. By default sharing is turned off.

    3. How long are the passwords valid? Are the admins required to manually check them in?
    Password expiration can be configured and set in the password policy according to your corporate standards. You can specify if you want forced check-in or not.

    4. Can 2-factor authentication be used with OPAM?
    Yes - 2-factor integration with OPAM is provided by integration with Oracle Access Manager and Oracle Adaptive Access Manager.

    5. How do you control access to OPAM to ensure that OPAM admins don't override the functionality to access privileged accounts?
    OPAM provides separation of duties by using Admin Roles to manage access to targets and privileged accounts and to control which operations admins can perform.

    6. How and where are the passwords stored in OPAM?
    OPAM uses Oracle Platform Security Services (OPSS) Credential Store Framework (CSF) to securely store passwords. This is the same system used by Oracle Applications.

    7. Does OPAM support hierarchical/level based privileges? Is the log maintained for independent review/audit?
    Yes. OPAM uses the Fusion Middleware (FMW) Audit Framework to store all OPAM related events in a dedicated audit database.

    8. Does OPAM support emergency access in the case where approvers are not available until later?
    Yes. OPAM can be configured to release a password under a "break-glass" emergency scenario.

    9. Does OPAM work with AIX?
    Yes, supported UNIX versions are listed in the "certified component section" of the UNIX connector guide at: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0

    10. Does OPAM integrate with Sun Identity Manager?
    Yes. OPAM can be integrated with SIM using the REST APIs. OPAM has direct integration with Oracle Identity Manager 11gR2.

    11. Is OPAM available today and what does it cost?
    Yes. OPAM is available now. Ask your Oracle Account Manager for pricing.

    12. Can OPAM be used in SAP environments?
    Yes, supported SAP versions are listed in the "certified component section" of the SAP connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e25327/intro.htm#autoId0

    13. How would this product integrate, if at all, with access to a particular field in the DB that needs additional security, such as SSNs?
    OPAM can work with DB Vault and DB Firewall to provide fine-grained access control for databases.

    14. Is VM supported?
    As a deployment platform, Oracle VM is supported. For further details about supported virtualization technologies see Oracle Fusion Middleware Supported System Configurations here: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html

    15. Where did this (OPAM) technology come from?
    OPAM was built by Oracle Engineering.

    16. Are all Linux flavors supported? How about BSD?
    BSD is not supported. For supported UNIX versions see the "certified component section" of the UNIX connector guide: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0

    17. What happens if users don't check passwords in at the end of a work task?
    In OPAM a time frame can be defined for how long a password can be checked out. The security admin can force a check-in at any given time.

    18. Is MySQL supported?
    Yes, supported DB versions are listed in the "certified component section" of the DB connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e28315/intro.htm#BABGJJHA

    19. What happens when OPAM crashes and you need to use the password?
    OPAM can be configured for high availability, but if required, OPAM data can be backed up/recovered. See the OPAM admin guide.

    20. Is OPAM a standalone product or does it leverage other components from IDM?
    OPAM can be run stand-alone, but will also leverage other IDM components.

  • How to Handle Managing a Coding Project With 8 Friends?

    - by Raul
    I usually code by myself, but currently I need to do a Java web-based project with 8 of my friends. I would like to ask the following questions:

    1. How do I document the development properly? For example, how do I keep a daily log? Any software or format suggested? What things do you think are important to include in the log?
    2. How do we code together? Is there any software/IDE that allows a team to code together, something like Google Docs?
    3. How do we do proper backups for a team project? Any software or tips to share?

  • What is an effective git process for managing our central code library?

    - by Mathew Byrne
    Quick background: we're a small web agency (3-6 developers at any one time) developing small to medium sized Symfony 1.4 sites. We've used git for a year now, but most of our developers have preferred Subversion and aren't used to a distributed model. For the past 6 months we've put a lot of development time into a central Symfony plugin that powers our custom CMS. This plugin includes a number of features, helpers, base classes etc. that we use to build custom functionality. This plugin is stored in git, but branches wildly as the plugin is used in various products and is pulled from/pushed to constantly. The repository is usually used as a submodule within a major project. The problems we're starting to see now are a large number of merge conflicts and backwards-incompatible changes brought into the repository by developers adding custom functionality in the context of their own project. I've read Vincent Driessen's excellent git branching model and successfully used it for projects in the past, but it doesn't seem to quite apply well to our particular situation; we have a number of projects concurrently using the same core plugin while developing new features for it. What we need is a strategy that provides the following:

    - A methodology for developing major features within the code repository.
    - A way of migrating those features into other projects.
    - A way of versioning the core repository, and of tracking which version each major project uses.
    - A plan for migrating bug fixes back to older versions.
    - A cleaner history that makes it easier to see where changes have come from.

    Any suggestions or discussion would be greatly appreciated.
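
    For illustration (not part of the original post): whichever branching model is adopted, the versioning and tracking requirements above usually come down to tagging releases of the core plugin and pinning each product's submodule to a tag, so every project records exactly which plugin version it builds against. A sketch of that workflow with plain git commands (version numbers and paths are made up):

        # In the core plugin repository: cut and publish a release
        git tag -a v1.4.0 -m "CMS core plugin 1.4.0"
        git push origin v1.4.0

        # In a product repository that consumes the plugin as a submodule:
        cd plugins/cms-core
        git fetch --tags
        git checkout v1.4.0          # pin the submodule to the released version
        cd ../..
        git add plugins/cms-core
        git commit -m "Upgrade cms-core plugin to v1.4.0"

        # Back-porting a bug fix to an older release line of the plugin
        git checkout -b hotfix/1.3.1 v1.3.0
        git cherry-pick <commit>     # placeholder for the fix's commit id
        git tag -a v1.3.1 -m "CMS core plugin 1.3.1"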

  • Add separate domain name to Wordpress admin area with htaccess

    - by Marc
    I have a Wordpress installation in a separate folder on my server (meaning it is not in the root folder). I have an htaccess rewrite rule that maps Domain A to folder A. Inside folder A is the Wordpress admin folder; let's call it folder A.B. I tried mapping Domain B to folder A.B, but I can't get it to work properly. When you log in to Wordpress via /admin, you get redirected to /wp-login.php (so from folder A.B to folder A); maybe that is where I get into trouble. So what I would like to do is this:

        Domain A -> folder A
        Domain B -> folder A.B

    Note that this is not for security purposes, I just like the idea of www.domainb.com instead of www.domaina.com/wp-admin. Can this be done with Wordpress?
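
    For illustration (not part of the original post): a hedged mod_rewrite sketch of the mapping described above, placed in the document root's .htaccess and assuming both domains point at the same document root, with the WordPress install in /sitea/ (domain and folder names are made up). The /wp-login.php redirect mentioned above is the catch: that file lives one level above wp-admin, so Domain B would also need to cover it, and WordPress's own site URL settings usually have to agree with whichever domain serves the admin.

        RewriteEngine On

        # Domain B -> the WordPress admin folder (folder A.B)
        RewriteCond %{HTTP_HOST} ^(www\.)?domainb\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/sitea/
        RewriteRule ^(.*)$ /sitea/wp-admin/$1 [L]

        # Domain A -> the WordPress installation (folder A)
        RewriteCond %{HTTP_HOST} ^(www\.)?domaina\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/sitea/
        RewriteRule ^(.*)$ /sitea/$1 [L]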

  • Any board-like platform but only for files instead of text posts? [on hold]

    - by Janwillhaus
    I am looking for a CMS platform (ideally open-source or free, but commercially available is also fine) that is built similarly to bulletin boards (for example phpBB), but where, instead of text posts, registered users upload files that can be rated, commented on, etc. I am aware of the number of board CMSes that have file-database plugins, but that is not what I want: I want a system that focuses on the files rather than on the postings. Or do you have any alternative ideas on solutions to the problem? I need the following functions:

    - CMS focused on file management
    - Files that can be categorized (in a tree-like view, for example)
    - Comments and ratings can be added to files
    - Users that can be granted various rights around the platform (moderation, commenting, exclusion of non-registered users, etc.)
    - Users can upload files themselves (for further moderation)
