Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.


  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following results demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11.

    On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms:
    - Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%.
    - Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.

    Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with:
    - 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system.
    - 10x more performance per processor than the SPARC T2+ 1.4 GHz server.
    - 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.

    In replication testing, the two-socket SPARC T4-2 server is over 3x faster than the four-socket SPARC Enterprise T5440 server in both the asynchronous replication environment and the highly available 2-Safe replication. This testing emphasizes parallel replication between systems.

    Performance Landscape

    Mobile Call Processing Test Performance

        System       Processor                Sockets/Cores/Threads   Tps
        SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            218,400
        M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             162,900
        SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            80,400

    TimesTen Performance Throughput Benchmark (TPTBM) Read-Only

        System       Processor                Sockets/Cores/Threads   Tps
        SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            7.9M
        SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            6.5M
        M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             3.1M
        T5440        SPARC T2+, 1.4 GHz       4 / 32 / 256            3.1M

    TimesTen Performance Throughput Benchmark (TPTBM) Update-Only

        System       Processor                Sockets/Cores/Threads   Tps
        SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            547,800
        M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             363,800
        SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            240,500

    TimesTen Replication Tests

        System       Processor                Sockets/Cores/Threads   Asynchronous   2-Safe
        SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            38,024         13,701
        SPARC T5440  SPARC T2+, 1.4 GHz       4 / 32 / 256            11,621         4,615

    Configuration Summary

    Hardware Configurations:

    SPARC T4-2 server
    - 2 x SPARC T4 processors, 2.85 GHz
    - 256 GB memory
    - 1 x 8 Gb/s FC Qlogic HBA
    - 1 x 6 Gb/s SAS HBA
    - 4 x 300 GB internal disks
    - Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
    - 1 x Sun Fire X4275 server configured as COMSTAR head

    SPARC T3-4 server
    - 4 x SPARC T3 processors, 1.6 GHz
    - 512 GB memory
    - 1 x 8 Gb/s FC Qlogic HBA
    - 8 x 146 GB internal disks
    - 1 x Sun Fire X4275 server configured as COMSTAR head

    SPARC Enterprise M4000 server
    - 4 x SPARC64 VII+ processors, 2.66 GHz
    - 128 GB memory
    - 1 x 8 Gb/s FC Qlogic HBA
    - 1 x 6 Gb/s SAS HBA
    - 2 x 146 GB internal disks
    - Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
    - 1 x Sun Fire X4275 server configured as COMSTAR head

    Software Configuration:
    - Oracle Solaris 11 11/11
    - Oracle TimesTen 11.2.2.4

    Benchmark Descriptions

    The TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required.

    Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. The peak throughput is measured from multiple concurrent processes executing the transactions until a peak is reached via saturation of the available resources.

    Parallel Replication tests use both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no-data-loss or high-availability replication, transactions are replicated between servers immediately, emphasizing low latency. For both environments, performance is measured by the number of parallel replication servers and the maximum transactions per second across all concurrent processes.

    See Also
    - SPARC T4-2 Server (oracle.com, OTN)
    - Oracle TimesTen In-Memory Database (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is to keep the larger sizes of these photos. Each directory has 31 to 66 files in it: thumbnails, larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with:

        rm */example.jpg

    I initially thought that it would be easy to delete the thumbnails too, but the problem is they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg, so I did:

        rm */photo*s.jpg

    but it turned out that some of the files named photoXs.jpg were actually the larger versions, not the smaller ones. Argh. So what I want to do is scan each directory for file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each file, and save those under a threshold. The problem? In one directory the large version will be 1.1 MB and the thumb 200 KB; in another the large is 200 KB and the small 30 KB. Even worse, the files really are mostly named photo1.jpg - so simply putting them all in one folder, sorting by size, and deleting in groups would not work without renaming them first, and if possible I'd prefer to keep them in their folders. I was almost resolved to doing this all manually, but then thought I'd ask here. How would you do this task?
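
    A minimal sketch of one way to script this, in Python rather than pure shell for readability. The ROOT path and the 50% cutoff are assumptions to tune, not part of the original question; the per-directory comparison handles the fact that "large" means different sizes in different folders:

        import os

        ROOT = "photos"      # hypothetical top-level directory holding the ~400 folders
        THRESHOLD = 0.5      # assume a thumbnail is well under half its directory's largest file

        for dirpath, _, filenames in os.walk(ROOT):
            jpgs = [os.path.join(dirpath, name) for name in filenames
                    if name.lower().endswith(".jpg")]
            if not jpgs:
                continue
            largest = max(os.path.getsize(path) for path in jpgs)
            for path in jpgs:
                # each directory gets its own cutoff, so a folder whose large
                # files are 200 KB is judged separately from a 1.1 MB folder
                if os.path.getsize(path) < largest * THRESHOLD:
                    print("would delete:", path)  # swap in os.remove(path) once verified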

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using Java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is finding the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file, so I need to account for the Cartesian product situations:

        left_01, 1
        left_02, 1
        right_01, 1
        right_02, 1
        right_03, 1

        left_01 joins to right_01 using key 1
        left_01 joins to right_02 using key 1
        left_01 joins to right_03 using key 1
        left_02 joins to right_01 using key 1
        left_02 joins to right_02 using key 1
        left_02 joins to right_03 using key 1

    My concern is one of memory. I will run out of memory if I use the approach below, but I still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part, keeping in mind that these files may potentially be huge?

        public class Joiner {

            private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable {
                BufferedReader _left = left;
                BufferedReader _right = right;
                BufferedWriter _output = output;
                Record _leftRecord;
                Record _rightRecord;

                _leftRecord = read(_left);
                _rightRecord = read(_right);

                while( _leftRecord != null && _rightRecord != null ) {
                    if( _leftRecord.getKey() < _rightRecord.getKey() ) {
                        write(_output, _leftRecord, null);
                        _leftRecord = read(_left);
                    } else if( _leftRecord.getKey() > _rightRecord.getKey() ) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                    } else {
                        List<Record> leftList = new ArrayList<Record>();
                        List<Record> rightList = new ArrayList<Record>();
                        _leftRecord = readRecords(leftList, _leftRecord, _left);
                        _rightRecord = readRecords(rightList, _rightRecord, _right);
                        for( Record equalKeyLeftRecord : leftList ){
                            for( Record equalKeyRightRecord : rightList ){
                                write(_output, equalKeyLeftRecord, equalKeyRightRecord);
                            }
                        }
                    }
                }

                if( _leftRecord != null ) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                    while(_leftRecord != null) {
                        write(_output, _leftRecord, null);
                        _leftRecord = read(_left);
                    }
                } else {
                    if( _rightRecord != null ) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                        while(_rightRecord != null) {
                            write(_output, null, _rightRecord);
                            _rightRecord = read(_right);
                        }
                    }
                }

                _left.close();
                _right.close();
                _output.flush();
                _output.close();
            }

            private Record read(BufferedReader reader) throws Throwable {
                Record record = null;
                String data = reader.readLine();
                if( data != null ) {
                    record = new Record(data.split("\t"));
                }
                return record;
            }

            private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable {
                int key = record.getKey();
                list.add(record);
                record = read(reader);
                while( record != null && record.getKey() == key) {
                    list.add(record);
                    record = read(reader);
                }
                return record;
            }

            private void write(BufferedWriter writer, Record left, Record right) throws Throwable {
                String leftKey = (left == null ? "null" : Integer.toString(left.getKey()));
                String leftData = (left == null ? "null" : left.getData());
                String rightKey = (right == null ? "null" : Integer.toString(right.getKey()));
                String rightData = (right == null ? "null" : right.getData());
                writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n");
            }

            public static void main(String[] args) {
                try {
                    BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT"));
                    BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT"));
                    BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT"));
                    Joiner joiner = new Joiner();
                    joiner.join(leftReader, rightReader, output);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
            }
        }

    After applying the ideas from the proposed answer, I changed the loop to this:

        private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable {
            long _pointer = 0;
            RandomAccessFile _left = left;
            RandomAccessFile _right = right;
            BufferedWriter _output = output;
            Record _leftRecord;
            Record _rightRecord;

            _leftRecord = read(_left);
            _rightRecord = read(_right);

            while( _leftRecord != null && _rightRecord != null ) {
                if( _leftRecord.getKey() < _rightRecord.getKey() ) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                } else if( _leftRecord.getKey() > _rightRecord.getKey() ) {
                    write(_output, null, _rightRecord);
                    _pointer = _right.getFilePointer();
                    _rightRecord = read(_right);
                } else {
                    long _tempPointer = 0;
                    int key = _leftRecord.getKey();
                    while( _leftRecord != null && _leftRecord.getKey() == key ) {
                        _right.seek(_pointer);
                        _rightRecord = read(_right);
                        while( _rightRecord != null && _rightRecord.getKey() == key ) {
                            write(_output, _leftRecord, _rightRecord );
                            _tempPointer = _right.getFilePointer();
                            _rightRecord = read(_right);
                        }
                        _leftRecord = read(_left);
                    }
                    _pointer = _tempPointer;
                }
            }

            if( _leftRecord != null ) {
                write(_output, _leftRecord, null);
                _leftRecord = read(_left);
                while(_leftRecord != null) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                }
            } else {
                if( _rightRecord != null ) {
                    write(_output, null, _rightRecord);
                    _rightRecord = read(_right);
                    while(_rightRecord != null) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                    }
                }
            }

            _left.close();
            _right.close();
            _output.flush();
            _output.close();
        }

    UPDATE: While this approach worked, it was terribly slow, so I have modified it to create files as buffers, and this works very well. Here is the update ...
        private long getMaxBufferedLines(File file) throws Throwable {
            long freeBytes = Runtime.getRuntime().freeMemory() / 2;
            return (freeBytes / (file.length() / getLineCount(file)));
        }

        private void join(File left, File right, File output, JoinType joinType) throws Throwable {
            BufferedReader leftFile = new BufferedReader(new FileReader(left));
            BufferedReader rightFile = new BufferedReader(new FileReader(right));
            BufferedWriter outputFile = new BufferedWriter(new FileWriter(output));
            long maxBufferedLines = getMaxBufferedLines(right);
            Record leftRecord;
            Record rightRecord;

            leftRecord = read(leftFile);
            rightRecord = read(rightFile);

            while( leftRecord != null && rightRecord != null ) {
                if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) {
                    if( joinType == JoinType.LeftOuterJoin ||
                        joinType == JoinType.LeftExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, leftRecord, null);
                    }
                    leftRecord = read(leftFile);
                } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) {
                    if( joinType == JoinType.RightOuterJoin ||
                        joinType == JoinType.RightExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, null, rightRecord);
                    }
                    rightRecord = read(rightFile);
                } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) {
                    String key = leftRecord.getKey();
                    List<File> rightRecordFileList = new ArrayList<File>();
                    List<Record> rightRecordList = new ArrayList<Record>();
                    rightRecordList.add(rightRecord);
                    rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines);
                    while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) {
                        processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType);
                        leftRecord = read(leftFile);
                    }
                    // need a dispose for deleting files in list
                } else {
                    throw new Exception("DATA IS NOT SORTED");
                }
            }

            if( leftRecord != null ) {
                if( joinType == JoinType.LeftOuterJoin ||
                    joinType == JoinType.LeftExclusiveJoin ||
                    joinType == JoinType.FullExclusiveJoin ||
                    joinType == JoinType.FullOuterJoin ) {
                    write(outputFile, leftRecord, null);
                }
                leftRecord = read(leftFile);
                while(leftRecord != null) {
                    if( joinType == JoinType.LeftOuterJoin ||
                        joinType == JoinType.LeftExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, leftRecord, null);
                    }
                    leftRecord = read(leftFile);
                }
            } else {
                if( rightRecord != null ) {
                    if( joinType == JoinType.RightOuterJoin ||
                        joinType == JoinType.RightExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, null, rightRecord);
                    }
                    rightRecord = read(rightFile);
                    while(rightRecord != null) {
                        if( joinType == JoinType.RightOuterJoin ||
                            joinType == JoinType.RightExclusiveJoin ||
                            joinType == JoinType.FullExclusiveJoin ||
                            joinType == JoinType.FullOuterJoin ) {
                            write(outputFile, null, rightRecord);
                        }
                        rightRecord = read(rightFile);
                    }
                }
            }

            leftFile.close();
            rightFile.close();
            outputFile.flush();
            outputFile.close();
        }

        public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable {
            for(File rightFile : rightFiles) {
                BufferedReader rightReader = new BufferedReader(new FileReader(rightFile));
                Record rightRecord = read(rightReader);
                while(rightRecord != null){
                    if( joinType == JoinType.LeftOuterJoin ||
                        joinType == JoinType.RightOuterJoin ||
                        joinType == JoinType.FullOuterJoin ||
                        joinType == JoinType.InnerJoin ) {
                        write(outputFile, leftRecord, rightRecord);
                    }
                    rightRecord = read(rightReader);
                }
                rightReader.close();
            }
            for(Record rightRecord : rightRecords) {
                if( joinType == JoinType.LeftOuterJoin ||
                    joinType == JoinType.RightOuterJoin ||
                    joinType == JoinType.FullOuterJoin ||
                    joinType == JoinType.InnerJoin ) {
                    write(outputFile, leftRecord, rightRecord);
                }
            }
        }

        /**
         * Consume all records having key (either to a single list or multiple files); each file will
         * store a buffer full of data. The right record returned represents the outside flow (key is
         * already positioned to the next one or null), so we can't use this record in the while loop
         * below, or within this block in general, when comparing the current key. The trick is to keep
         * consuming from a List. When it becomes empty, re-fill it from the next file until all files
         * have been consumed (and the last node in the list is read). The next outside iteration will
         * be ready to be processed (either it will be null or it points to the next biggest key).
         * @throws Throwable
         */
        private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable {
            boolean processComplete = false;
            Record record = records.get(records.size() - 1);
            while(!processComplete){
                long recordCount = records.size();
                if( record.getKey().compareTo(key) == 0 ){
                    record = read(reader);
                    while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) {
                        records.add(record);
                        recordCount++;
                        record = read(reader);
                    }
                }
                processComplete = true;
                // if record is null, we are done
                if( record != null ) {
                    // if the key has changed, we are done
                    if( record.getKey().compareTo(key) == 0 ) {
                        // Same key means we have exhausted the buffer.
                        // Dump entire buffer into a file. The list of file
                        // pointers will keep track of the files ...
                        processComplete = false;
                        dumpBufferToFile(records, files);
                        records.clear();
                        records.add(record);
                    }
                }
            }
            return record;
        }

        /**
         * Dump all records in the List of Record objects to a file. Then add that
         * file to the List of File objects.
         *
         * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list)
         *
         * @param records
         * @param files
         * @throws Throwable
         */
        private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable {
            String prefix = "joiner_" + files.size() + 1;
            String suffix = ".dat";
            File file = File.createTempFile(prefix, suffix, new File("cache"));
            BufferedWriter writer = new BufferedWriter(new FileWriter(file));
            for( Record record : records ) {
                writer.write( record.dump() );
            }
            files.add(file);
            writer.flush();
            writer.close();
        }

    Read the article

  • How do I use Group Policy on a domain to delete Temporary Internet Files?

    - by Muhammad Ali
    I have a domain controller running Windows Server 2008 R2, and users log in to application servers running Windows Server 2003 SP2. I have applied a Group Policy to clean temporary internet files on exit, i.e. to delete all temporary internet files when users close the browser. But the Group Policy doesn't seem to work: user profile size keeps increasing, and most of the space is occupied by temporary internet files, driving up disk usage. How can I enforce automatic deletion of temporary internet files?

    Read the article

  • using a "temporary files" folder in python

    - by zubin71
    I recently wrote a script which queries PyPI and downloads a package; however, the package gets downloaded to a user-defined folder. I'd like to modify the script so that my downloaded files go into a temporary folder if no folder is specified. The temporary-files folder on *nix machines is "/tmp"; is there a Python method I could use to find out the temporary-files folder on a particular machine? If not, could someone suggest an alternative?
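
    For what it's worth, the standard library already exposes this: tempfile.gettempdir() returns the platform's temporary directory. A minimal sketch of the fallback behaviour described above (pick_download_dir is a hypothetical helper name):

        import tempfile

        def pick_download_dir(user_dir=None):
            # fall back to the platform temp dir ("/tmp" on most *nix systems,
            # something like C:\Users\...\Temp on Windows) when none is given
            return user_dir if user_dir else tempfile.gettempdir()

        print(pick_download_dir())             # e.g. /tmp
        print(pick_download_dir("downloads"))  # an explicit folder still wins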

    Read the article

  • Help with malloc and free: Glibc detected: free(): invalid pointer

    - by nunos
    I need help with debugging this piece of code. I know the problem is in malloc and free, but I can't find exactly where, why, and how to fix it. Please don't just answer "use gdb" and leave it at that. I would use gdb to debug it, but I still don't know much about it and am still learning it; in the meanwhile I would like another approach. Thanks.

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>
        #include <unistd.h>
        #include <string.h>
        #include <sys/wait.h>
        #include <sys/types.h>

        #define MAX_COMMAND_LENGTH 256
        #define MAX_ARGS_NUMBER 128
        #define MAX_HISTORY_NUMBER 100
        #define PROMPT ">>> "

        int num_elems;

        typedef enum {false, true} bool;

        typedef struct {
            char **arg;
            char *infile;
            char *outfile;
            int background;
        } Command_Info;

        int parse_cmd(char *cmd_line, Command_Info *cmd_info)
        {
            char *arg;
            char *args[MAX_ARGS_NUMBER];
            int i = 0;

            arg = strtok(cmd_line, " ");
            while (arg != NULL) {
                args[i] = arg;
                arg = strtok(NULL, " ");
                i++;
            }
            num_elems = i; /* needed in free_mem */

            if (num_elems == 0)
                return 0;

            cmd_info->arg = (char **) ( malloc(num_elems * sizeof(char *)) );
            cmd_info->infile = NULL;
            cmd_info->outfile = NULL;
            cmd_info->background = 0;

            bool b_infile = false;
            bool b_outfile = false;

            int iarg = 0;
            for (i = 0; i < num_elems; i++) {
                if ( !strcmp(args[i], "<") ) {
                    if ( b_infile || i == num_elems-1 || !strcmp(args[i+1], "<") ||
                         !strcmp(args[i+1], ">") || !strcmp(args[i+1], "&") )
                        return -1;
                    i++;
                    cmd_info->infile = malloc(strlen(args[i]) * sizeof(char));
                    strcpy(cmd_info->infile, args[i]);
                    b_infile = true;
                }
                else if (!strcmp(args[i], ">")) {
                    if ( b_outfile || i == num_elems-1 || !strcmp(args[i+1], ">") ||
                         !strcmp(args[i+1], "<") || !strcmp(args[i+1], "&") )
                        return -1;
                    i++;
                    cmd_info->outfile = malloc(strlen(args[i]) * sizeof(char));
                    strcpy(cmd_info->outfile, args[i]);
                    b_outfile = true;
                }
                else if (!strcmp(args[i], "&")) {
                    if ( i == 0 || i != num_elems-1 || cmd_info->background )
                        return -1;
                    cmd_info->background = true;
                }
                else {
                    cmd_info->arg[iarg] = malloc(strlen(args[i]) * sizeof(char));
                    strcpy(cmd_info->arg[iarg], args[i]);
                    iarg++;
                }
            }
            cmd_info->arg[iarg] = NULL;
            return 0;
        }

        void print_cmd(Command_Info *cmd_info)
        {
            int i;
            for (i = 0; cmd_info->arg[i] != NULL; i++)
                printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]);
            printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]);
            printf("infile=\"%s\"\n", cmd_info->infile);
            printf("outfile=\"%s\"\n", cmd_info->outfile);
            printf("background=\"%d\"\n", cmd_info->background);
        }

        void get_cmd(char* str)
        {
            fgets(str, MAX_COMMAND_LENGTH, stdin);
            str[strlen(str)-1] = '\0';
        }

        pid_t exec_simple(Command_Info *cmd_info)
        {
            pid_t pid = fork();
            if (pid < 0) {
                perror("Fork Error");
                return -1;
            }
            if (pid == 0) {
                if ( (execvp(cmd_info->arg[0], cmd_info->arg)) == -1) {
                    perror(cmd_info->arg[0]);
                    exit(1);
                }
            }
            return pid;
        }

        void type_prompt(void)
        {
            printf("%s", PROMPT);
        }

        void syntax_error(void)
        {
            printf("msh syntax error\n");
        }

        void free_mem(Command_Info *cmd_info)
        {
            int i;
            for (i = 0; cmd_info->arg[i] != NULL; i++)
                free(cmd_info->arg[i]);
            free(cmd_info->arg);
            free(cmd_info->infile);
            free(cmd_info->outfile);
        }

        int main(int argc, char* argv[])
        {
            char cmd_line[MAX_COMMAND_LENGTH];
            Command_Info cmd_info;
            //char* history[MAX_HISTORY_NUMBER];

            while (true) {
                type_prompt();
                get_cmd(cmd_line);
                if ( parse_cmd(cmd_line, &cmd_info) == -1) {
                    syntax_error();
                    continue;
                }
                if (!strcmp(cmd_line, ""))
                    continue;
                if (!strcmp(cmd_info.arg[0], "exit"))
                    exit(0);
                pid_t pid = exec_simple(&cmd_info);
                waitpid(pid, NULL, 0);
                free_mem(&cmd_info);
            }
            return 0;
        }

    Read the article

  • What is the fastest way to create a checksum for large files in C#

    - by crono
    Hi, I have to sync large files across some machines. The files can be up to 6 GB in size. The sync will be done manually every few weeks. I can't take the filename into consideration because it can change anytime. My plan is to create checksums on the destination PC and on the source PC, and then copy all files whose checksum is not already present at the destination. My first attempt was something like this:

        using System.IO;
        using System.Security.Cryptography;

        private static string GetChecksum(string file)
        {
            using (FileStream stream = File.OpenRead(file))
            {
                SHA256Managed sha = new SHA256Managed();
                byte[] checksum = sha.ComputeHash(stream);
                return BitConverter.ToString(checksum).Replace("-", String.Empty);
            }
        }

    The problem was the runtime:

        with SHA256 and a 1.6 GB file: 20 minutes
        with MD5 and a 1.6 GB file: 6.15 minutes

    Is there a better - faster - way to get the checksum (maybe with a better hash function)?
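
    The usual advice here is to make sure the file is read in large buffered chunks so the hash becomes I/O-bound rather than dominated by per-call overhead (in C# that typically means wrapping the FileStream in a BufferedStream). A rough sketch of the chunked-read idea, written in Python only for brevity; it is an illustration of the technique, not the asker's C# code:

        import hashlib

        def checksum(path, chunk_size=1 << 20):
            # hash 1 MB at a time, so even a 6 GB file needs almost no memory
            # and the disk stays busy with large sequential reads
            digest = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()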

    Read the article

  • Configure ASP.NET Application to Read Mapped Network Drive

    - by Bob
    Is it possible to configure an ASP.NET application under IIS 7 so it can read files stored on a mapped network drive? I'm not trying to serve up the contents of the drive; I simply need to read the contents within the ASP.NET application. I've searched the web and haven't really found a solid answer. The questions in my mind are: Is this possible via configuration alone (i.e. without modifying the client code)? If so, what are the step-by-step instructions? If it is not possible, I'm fine with that. I already know UNC paths work, but using them drastically changes the workflow. Thanks! Bobby

    Read the article

  • Cannot run setups from a mapped network drive on Windows 7

    - by Dimitri C.
    I'm trying to run an application setup by double-clicking the setup.exe from within Windows Explorer. The file is located on a mapped network drive, and I'm using Windows 7. This results in the following error message:

        The specified path does not exist. Check the path, and then try again.

    The workaround I found is to copy the installer to the main hard drive (C:) and run it from there; however, this is rather inconvenient. The same action did work on Windows 2000, Windows XP and Windows Vista. I have the impression that the problem only occurs with installers, as everything seemed to work fine with regular .exe files. Is there anyone who can explain this odd behavior?

    Read the article

  • NSMutableDictionary, alloc, init and reiniting...

    - by Marcos Issler
    In the following code:

        // anArray is an array of dictionaries with 5 objects.
        // Here we init with the first:
        NSMutableDictionary *anMutableDict =
            [[NSMutableDictionary alloc] initWithDictionary:[anArray objectAtIndex:0]];

        ... use of anMutableDict ...

        // Then we want to clear the mutable dict and assign the other
        // dictionaries that were in the array of dicts:
        for (int i = 1; i < 5; i++) {
            [anMutableDict removeAllObjects];
            [anMutableDict initWithDictionary:[anArray objectAtIndex:i]];
        }

    Why does this crash? What is the right way to clear an NSMutableDictionary and then assign a new dict? Thanks, guys. Marcos.

    Read the article

  • GNU make copy files to distro directory

    - by TheRoadrunner
    I keep my source html (and images etc.) in separate directories for source control. Part of making the distro is to have make copy files to the output folder and set the attributes. Today my makefile shows (extract):

        %.html:
        	/usr/bin/install -c -p -m 644 $< $@

        www: $(HTMLDST)/firmware.html $(HTMLDST)/firmware_status.html $(HTMLDST)/index.html

        $(HTMLDST)/firmware.html: $(HTMLSRC)/firmware.html
        $(HTMLDST)/firmware_status.html: $(HTMLSRC)/firmware_status.html
        $(HTMLDST)/index.html: $(HTMLSRC)/index.html

    This is shown with only three html files, but in reality there are lots. I would like to just list the filenames (without paths) and have make do the comparison between source and destination and copy the files that have been updated. Thank you in advance. Søren

    Read the article

  • Applications don't see mapped network drive after waking from sleep

    - by Chris
    I have a network share mapped as a drive letter on a Windows 7 PC that's part of a simple home network. When I wake the PC from sleep, Explorer sometimes says the drive is disconnected. Double-clicking the drive reconnects it instantly, as far as Explorer is concerned. But some other applications don't see the drive unless I take additional steps (e.g., navigate to the drive in an "Open..." dialog). Is there any way to get the drive connected for all purposes immediately when resuming from standby? I've found the "Always wait for the network at computer startup and logon" item in Group Policy, but I'm not sure what the setting does or whether it would fix this problem. I have network discovery enabled and un-firewalled.

    Read the article

  • Could not Upload file in network mapped drive using asp.net/vb.net

    - by Hasan
    I have tried several times to upload a file remotely to a mapped network drive, but it raises an exception: "Could not find a part of the path 'X:\test\testing.wav'." I have read through various internet/blog/Microsoft help sites, but I still don't know what is wrong. Does anyone know what is causing this problem and how I can correct it? It works fine when I upload to a local drive as a test. It also works when I run the code from the development server, but it fails with the published code. :(

    Read the article

  • Trying to cat files - unrecognized wildcard

    - by Barb
    Hello, I am trying to create a file that contains all of the code of an app. I have created a file called catlist.txt so that the files are added in the order I need them. A snippet of my catlist.txt:

        app/controllers/application_controller.rb
        app/views/layouts/*
        app/models/account.rb
        app/controllers/accounts_controller.rb
        app/views/accounts/*

    When I run the command, the files that are explicitly listed get added, but the wildcard files do not:

        cat catlist.txt | xargs cat > fullcode

    I get:

        cat: app/views/layouts/*: No such file or directory
        cat: app/views/accounts/*: No such file or directory

    Can someone help me with this? If there is an easier method, I am open to all suggestions. Barb
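
    The underlying issue is that wildcards are expanded by the shell, and xargs passes the patterns to cat literally, so no expansion ever happens. A minimal sketch of one workaround that expands each line itself, in Python (the filenames catlist.txt and fullcode are taken from the question):

        import glob

        with open("catlist.txt") as listing, open("fullcode", "w") as out:
            for line in listing:
                pattern = line.strip()
                if not pattern:
                    continue
                # glob.glob() expands 'app/views/layouts/*' the way a shell
                # would; a plain filename just matches itself if it exists
                for path in sorted(glob.glob(pattern)):
                    with open(path) as src:
                        out.write(src.read())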

    Read the article

  • cannot edit any php files using specific functions

    - by user458474
    I cannot update any txt files using PHP. When I write simple code like the following:

        <?php
        // create file pointer
        $fp = fopen("C:/Users/jj/bob.txt", 'w')
            or die('Could not open file, or file does not exist and failed to create.');
        $mytext = '<b>hi. This is my test</b>';
        // write text to file
        fwrite($fp, $mytext) or die('Could not write to file.');
        $content = file("C:/Users/jj/bob.txt");
        // close file
        fclose($fp);
        ?>

    Both files do exist in the folder. I just cannot see any updates in bob.txt. Is this a permission error in Windows? It works fine on my laptop at home. I also cannot change the PHP files on my website using FileZilla.

    Read the article

  • An interesting case of delete and destructor (C++)

    - by Viet
    I have a piece of code where I can call the destructor multiple times and still access member functions after the destructor was called, with the member variables' values preserved. I was also able to access member functions after I called delete, but then the member variables were nullified (all set to 0). And I can't double delete. Please kindly explain this. Thanks.

        #include <iostream>
        using namespace std;

        template <typename T>
        void destroy(T* ptr) {
            ptr->~T();
        }

        class Testing {
        public:
            Testing() : test(20) { }
            ~Testing() {
                printf("Testing is being killed!\n");
            }
            int getTest() const {
                return test;
            }
        private:
            int test;
        };

        int main() {
            Testing *t = new Testing();
            cout << "t->getTest() = " << t->getTest() << endl;
            destroy(t);
            cout << "t->getTest() = " << t->getTest() << endl;
            t->~Testing();
            cout << "t->getTest() = " << t->getTest() << endl;
            delete t;
            cout << "t->getTest() = " << t->getTest() << endl;
            destroy(t);
            cout << "t->getTest() = " << t->getTest() << endl;
            t->~Testing();
            cout << "t->getTest() = " << t->getTest() << endl;
            //delete t; // <======== Don't do it! Double free/delete!
            cout << "t->getTest() = " << t->getTest() << endl;
            return 0;
        }

    Read the article

  • VSS - Solution file between multiple users

    - by BhejaFry
    Hi folks, we have a solution with multiple projects that is being developed by a team of developers. The solution file that was initially checked in contains project paths specific to the developer who checked it in. Now when another dev gets the latest version of the solution, some of the projects won't load because the paths differ. What's a better way to manage this? TIA

    Read the article

  • Finding leaks under GeneralBlock-16?

    - by erastusnjuki
    If ObjectAlloc cannot deduce type information for a block, it uses 'GeneralBlock'. Are there any strategies for tracking down leaks in this block that might eliminate the trial-and-error methods I currently use? The Extended Detail view doesn't really do it for me, as I just keep guessing.

    Read the article

  • caching static files for ruby on rails application using nginx

    - by splintercell
    I have been trying for some time to serve and cache static files for my Rails app using nginx. The Rails app server runs mongrel_cluster and is deployed on a different host than nginx. Following many of the available discussions, I tried the following:

        server {
            listen 80;
            server_name www.myappserver.com;
            ssl on;
            root /var/apps/myapp/current/public;

            location ~ ^/(images|javascripts|stylesheets)/ {
                root /var/apps/myapp/current;
                expires 10y;
            }

            location / {
                proxy_pass http://myapp_upstream;
            }
        }

    But nginx fails to find the images and to load the CSS and JS files. Can anyone help me out here? My aim is to configure nginx in such a way that it caches the static files until expiry. Please suggest a way to achieve this, or am I missing something here?

    Read the article

  • How to resize a /home partition in Kubuntu?

    - by Devon
    I was distro hopping for a while in the past few months, so in order to keep all of my files safe, I made a partition of around 50 GB named Files to store all of my files in, and still have them for quick and easy access. However, now that I've found a distribution I'm comfortable with (Kubuntu 11.10), I would like to remove this partition and have all of my files in my /home folder, in order to more easily deal with these files. I've moved all of the files from the partition to my /home folder (and still have plenty of room to spare), and now I'm trying to delete the partition and use the space for my /home folder. I can delete the partition just fine; however, I can't extend the /home partition into the unallocated space. Here's a screenshot of what I'm talking about. In order to change the size of the /home partition, I need to unmount it. But I am unable to unmount it! How do I best extend the size of the partition?

    Read the article
