Search Results

Search found 40479 results on 1620 pages for 'binary files'.

Page 93/1620

  • Converting Multiple files to zip and saving them in ownCloud

    - by user1055380
    I wanted to convert an array with some CSS, JS and HTML files into a zip file and save it in ownCloud (it has its own framework, but knowledge of it is not required). What gets saved instead is an endless nesting of zip files, as in a zip inside a zip, so I can't even check whether the code is working correctly. Please help. Here is the code.

        <?php
        /* creates a compressed zip file */
        $filename = $_GET["filename"];

        function create_zip($files = array(), $destination = '', $overwrite = false) {
            //if the zip file already exists and overwrite is false, return false
            if(file_exists($destination) && !$overwrite) { return false; }
            //vars
            $valid_files = array();
            //if files were passed in...
            if(is_array($files)) {
                //cycle through each file
                foreach($files as $file => $local) {
                    //make sure the file exists
                    if(file_exists($file)) {
                        $valid_files[$file] = $local;
                    }
                }
            }
            //if we have good files...
            if(count($valid_files)) {
                //create the archive
                $zip = new ZipArchive();
                if($zip->open($destination, $overwrite ? ZIPARCHIVE::OVERWRITE : ZIPARCHIVE::CREATE) !== true) {
                    return false;
                }
                //add the files
                foreach($valid_files as $file => $local) {
                    $zip->addFile($file, $local);
                }
                //debug
                //echo 'The zip archive contains ',$zip->numFiles,' files with a status of ',$zip->status;
                //close the zip -- done!
                $zip->close();
                //check to make sure the file exists
                return file_exists($destination);
            }
            else {
                return false;
            }
        }

        $files_to_zip = array(
            'apps/impressionist/css/mappingstyle.css' => '/css/mappingstyle.css',
            'apps/impressionist/css/style.css' => '/css/style.css',
            'apps/impressionist/js/jquery.js' => '/scripts/jquery.js',
            'apps/impressionist/js/impress.js' => '/scripts/impress.js',
            realpath('apps/impressionist/output/'.$filename.'.html') => $filename.'.html'
        );

        //if true, good; if false, zip creation failed
        $result = create_zip($files_to_zip, $filename.'.zip');
        $save_file = OC_App::getStorage('impressionist');
        $save_file->file_put_contents($filename.'.zip', $files_to_zip);
        ?>

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow.

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1000 a's, assume the average number of b's per a is constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?

    Read the article

  • binary_search not working for a vector<string>

    - by VaioIsBorn
    #include <iostream>
        #include <string>
        #include <vector>
        #include <algorithm>
        using namespace std;

        int main(void)
        {
            string temp;
            vector<string> encrypt, decrypt;
            int i, n, co = 0;
            cin >> n;
            for(i = 0; i < n; i++)
            {
                cin >> temp;
                encrypt.push_back(temp);
            }
            for(i = 0; i < n; i++)
            {
                cin >> temp;
                decrypt.push_back(temp);
            }
            for(i = 0; i < n; i++)
            {
                temp = encrypt[i];
                if((binary_search(decrypt.begin(), decrypt.end(), temp)) == true)
                    ++co;
            }
            cout << co << endl;
            return 0;
        }

    It reads two equal lists of strings and should print out how many of the words in the first list are also found in the second list, simple. It's not giving me the expected results, and I think the problem is in binary_search. Can you tell me why?
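
    A likely cause, noted here as a hedged suggestion rather than a confirmed diagnosis: std::binary_search is only defined on a sorted range, and decrypt is filled in input order. A minimal C++ sketch of the usual fix, sorting the second list before searching (names follow the question):

        #include <algorithm>
        #include <cstddef>
        #include <string>
        #include <vector>

        // Count how many words from 'encrypt' also appear in 'decrypt'.
        // binary_search requires a sorted range, so sort first.
        std::size_t countCommon(const std::vector<std::string>& encrypt,
                                std::vector<std::string> decrypt)  // taken by value: we reorder it
        {
            std::sort(decrypt.begin(), decrypt.end());
            std::size_t co = 0;
            for (const std::string& word : encrypt)
                if (std::binary_search(decrypt.begin(), decrypt.end(), word))
                    ++co;
            return co;
        }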

    Read the article

  • Right rotation of a tree in Haskell: how does it work?

    - by Roman
    I don't know Haskell syntax, but I know some FP concepts (like algebraic data types, pattern matching, higher-order functions, etc.). Can someone please explain what this code means:

        data Tree a = Leaf a | Fork a (Tree a) (Tree a)

        rotateR tree = case tree of
            Fork q (Fork p a b) c -> Fork p a (Fork q b c)

    As I understand it, the first line is something like a Tree type declaration (but I don't understand it exactly). The second line uses pattern matching (I also don't understand why we need pattern matching here). And the third line does something absolutely unreadable for a non-Haskell developer. I've found a definition of fork as fork (f,g) x = (f x, g x), but I can't get any further.
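
    For comparison only (not part of the original question): the same right rotation written as a small C++ sketch with an explicit node type, which may make the pattern easier to read. The pattern Fork q (Fork p a b) c matches a tree whose root is q and whose left child is the fork p with subtrees a and b; the result Fork p a (Fork q b c) makes p the new root and hangs q, together with b and c, on its right.

        struct Tree {
            int value;
            Tree* left;
            Tree* right;
        };

        // Right rotation, mirroring 'Fork q (Fork p a b) c -> Fork p a (Fork q b c)'.
        // Assumes q and q->left are non-null, just as the Haskell pattern does.
        Tree* rotateR(Tree* q) {
            Tree* p = q->left;
            q->left = p->right;   // subtree b moves under q
            p->right = q;         // q becomes the right child of p
            return p;             // p is the new root
        }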

    Read the article

  • Homemade fstat to get file size always returns 0 length

    - by Fred
    Hello, I am trying to use my own function to get the file size from a file. I'll use this to allocate memory for a data structure to hold the information on the file. The file size function looks like this:

        long fileSize(FILE *fp){
            long start;
            fflush(fp);
            rewind(fp);
            start = ftell(fp);
            return (fseek(fp, 0L, SEEK_END) - start);
        }

    Any ideas what I'm doing wrong here?
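
    A hedged reading of why this returns 0: fseek does not return the new file offset; it returns 0 on success and nonzero on failure, so the expression is effectively 0 - 0. A minimal sketch of the usual pattern, seeking to the end and then asking ftell for the position (assumes a seekable stream opened in binary mode):

        #include <cstdio>

        // Returns the size in bytes, or -1L on error. Restores the original position.
        long fileSize(FILE *fp) {
            long original = ftell(fp);
            if (original == -1L || fseek(fp, 0L, SEEK_END) != 0)
                return -1L;
            long size = ftell(fp);           // offset of the end of the file
            fseek(fp, original, SEEK_SET);   // put the stream back where it was
            return size;
        }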

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: now there are two possible URLs to the same resource:

        http://www.example.com/foo/bar/            (slash URL)
        http://www.example.com/foo/bar/index.html  (index.html URL)

    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL, and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just redirect to the slash URL.

    Read the article

  • Object to Network serialization - with an existing protocol

    - by cpf
    I'm writing a client for a server program written in C++. As is not unusual, all of the networking protocol is in a format where packets can easily be memcpy'd into and out of a C++ structure (a 1-byte packet code, then different arrangements per packet type). I could do the same thing in C#, but is there an easier way, especially considering that a lot of the data is fixed-length char arrays that I want to work with as strings? Or should I just suck it up and convert types as needed? I've looked at using the ISerializable interface, but it doesn't look as low-level as is required.

    Read the article

  • make tree in scheme

    - by ???
    (define (entry tree) (car tree))
        (define (left-branch tree) (cadr tree))
        (define (right-branch tree) (caddr tree))

        (define (make-tree entry left right)
          (list entry left right))

        (define (mktree order items_list)
          (cond ((= (length items_list) 1)
                 (make-tree (car items_list) '() '()))
                (else
                 (insert2 order (car items_list) (mktree order (cdr items_list))))))

        (define (insert2 order x t)
          (cond ((null? t) (make-tree x '() '()))
                ((order x (entry t))
                 (make-tree (entry t) (insert2 order x (left-branch t)) (right-branch t)))
                ((order (entry t) x)
                 (make-tree (entry t) (left-branch t) (insert2 order x (right-branch t))))
                (else t)))

    The result is:

        (mktree (lambda (x y) (< x y)) (list 7 3 5 1 9 11))
        (11 (9 (1 () (5 (3 () ()) (7 () ()))) ()) ())

    But I'm trying to get:

        (7 (3 (1 () ()) (5 () ())) (9 () (11 () ())))

    Where is the problem?

    Read the article

  • iPhone SDK - How is data shared in a universal app

    - by norskben
    Stack Overflow, I want to make a universal version of my app available, but I am wondering how data is managed between the iPad and iPhone versions. Are they completely independent, or, if I have a plist in the iPad app, does it also appear in the iPhone app? If so, is there any syncing? I have a few months' experience with single iPad or iPhone apps, but never a universal one. Thanks again.

    Read the article

  • How can I access the sign bit of a number in C++?

    - by Keand64
    I want to be able to access the sign bit of a number in C++. My current code looks something like this:

        int signBit = number >> 31;

    That appears to work, giving me 0 for positive numbers and -1 for negative numbers. However, I don't see how I get -1 for negative numbers: if 12 is

        0000 0000 0000 0000 0000 0000 0000 1100

    then -12 is

        1111 1111 1111 1111 1111 1111 1111 0100

    and shifting it 31 bits would make

        0000 0000 0000 0000 0000 0000 0000 0001

    which is 1, not -1, so why do I get -1 when I shift it?
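
    A hedged explanation of the -1: right-shifting a negative signed integer has historically been implementation-defined in C++, and in practice compilers perform an arithmetic shift that copies the sign bit into the vacated positions, so all 32 bits end up set, which reads back as -1. A small sketch of two common ways to get 0 or 1 instead:

        #include <cstdint>

        // Shift the unsigned reinterpretation: vacated bits are filled with zeros.
        int signBitByShift(std::int32_t number) {
            return static_cast<int>(static_cast<std::uint32_t>(number) >> 31);
        }

        // Or shift and mask, so only the lowest bit of the (possibly all-ones) result survives.
        int signBitByMask(std::int32_t number) {
            return (number >> 31) & 1;
        }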

    Read the article

  • Executing a DLL function in the ASP.NET bin not working, IIS 7

    - by Wayne Lo
    I am developing a remote control application where a client (an aspx page in a browser) can request a server to "launch a notepad" (for testing purposes; in real life, turning off a light bulb, etc.). So I created a DLL with a simple function for launching Notepad (on the server side) and dropped this DLL in the root bin folder. It worked fine when the aspx page was running under the ASP.NET development server (launched from Visual Studio). But when I tested the same aspx page in a Firefox browser, it did not work (did not launch the notepad), even though it did call the same function (I stepped through it in the debugger). Is this a permission issue? How do I set this up in IIS Manager, or even better in web.config? Please help.

    Read the article

  • IndexOutOfRangeException when a stream is a multiple of the buffer size

    - by dnord
    I don't have a lot of experience with streams and buffers, but I'm having to use them for a project, and I'm stuck on an exception being thrown when the stream I'm reading is a multiple of the buffer size I've chosen. Let me show you: my code starts by reading bufferSize (100, let's say) bytes from the stream:

        numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize);

    Then, I loop through a while loop:

        while (numberOfBytesRead == bufferSize)
        {
            BufferWriter.Write(output);
            BufferWriter.Flush();
            index += bufferSize;
            numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize);
        }

    ... and, once we get to a non-bufferSize read, we know we've hit the end of the stream and can move on. But if the bufferSize is 100, and the stream is 200, we'll read positions 0-99, 100-199, and then the attempt to read 200-299 errors out. I'd like it if it returned 0, but it throws an error. What I'm doing to handle that is, well, a try-catch:

        catch (System.IndexOutOfRangeException)
        {
            numberOfBytesRead = 0;
        }

    ...which ends the loop and successfully finishes the thing, but we all know I don't want to control code flow with error handling. Is there a better (more standard?) way to handle stream reading when the stream length is unknown? This seems like a small wrinkle in a fairly reasonable strategy for reading streams, but I just don't know if I've got it wrong or what. The specifics of this (which I've cleaned up a little bit for posting) are a MySqlDataReader hitting a LARGEBLOB column. It works whenever the buffer is larger than the number of returned bytes, or when the number of returned bytes is not a multiple of bufferSize, because in those cases no IndexOutOfRangeException is thrown.
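
    The question is about ADO.NET's GetBytes, but the underlying pattern is the same in most stream APIs: ask for up to bufferSize bytes, use however many actually came back, and stop when the count is zero, without relying on an exception. A C++ sketch of that shape, for illustration only:

        #include <cstddef>
        #include <istream>
        #include <ostream>
        #include <vector>

        // Copy 'in' to 'out' in fixed-size chunks; a count of 0 marks the end,
        // even when the stream length is an exact multiple of bufferSize.
        void copyStream(std::istream& in, std::ostream& out, std::size_t bufferSize = 100) {
            std::vector<char> buffer(bufferSize);
            for (;;) {
                in.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
                std::streamsize got = in.gcount();   // bytes this read actually produced
                if (got == 0) break;
                out.write(buffer.data(), got);
            }
        }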

    Read the article

  • C++ vector and segmentation faults

    - by Headspin
    I am working on a simple mathematical parser. Something that just reads

        number = 1 + 2;

    I have a vector containing these tokens. They store a type and string value of the character. I am trying to step through the vector to build an AST of these tokens, and I keep getting segmentation faults, even when I am under the impression my code should prevent this from happening. Here is the bit of code that builds the AST:

        struct ASTGen
        {
            const vector<Token> &Tokens;
            unsigned int size, pointer;

            ASTGen(const vector<Token> &t) : Tokens(t), pointer(0)
            {
                size = Tokens.size() - 1;
            }

            unsigned int next()
            {
                return pointer + 1;
            }

            Node* Statement()
            {
                if(next() <= size)
                {
                    switch(Tokens[next()].type)
                    {
                        case EQUALS :
                            Node* n = Assignment_Expr();
                            return n;
                    }
                }
                advance();
            }

            void advance()
            {
                if(next() <= size)
                    ++pointer;
            }

            Node* Assignment_Expr()
            {
                Node* lnode = new Node(Tokens[pointer], NULL, NULL);
                advance();
                Node* n = new Node(Tokens[pointer], lnode, Expression());
                return n;
            }

            Node* Expression()
            {
                if(next() <= size)
                {
                    advance();
                    if(Tokens[next()].type == SEMICOLON)
                    {
                        Node* n = new Node(Tokens[pointer], NULL, NULL);
                        return n;
                    }
                    if(Tokens[next()].type == PLUS)
                    {
                        Node* lnode = new Node(Tokens[pointer], NULL, NULL);
                        advance();
                        Node* n = new Node(Tokens[pointer], lnode, Expression());
                        return n;
                    }
                }
            }
        };

        ...
        ASTGen AST(Tokens);
        Node* Tree = AST.Statement();
        cout << Tree->Right->Data.svalue << endl;

    I can access Tree->Data.svalue and get the = Node's token info, so I know that node is getting spawned, and I can also get Tree->Left->Data.svalue and get the variable to the left of the =. I have re-written it many times trying out different methods for stepping through the vector, but I always get a segmentation fault when I try to access the = right node (which should be the + node). Any help would be greatly appreciated.
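
    A hedged reading of the crash, not a confirmed diagnosis: Statement() falls off the end without a return when the token after the first one is not EQUALS, and Expression() does the same when neither SEMICOLON nor PLUS matches, so the caller receives a garbage Node* (undefined behaviour); Tokens[next()] is also read again right after advance() without re-checking the bound. A small sketch of the kind of guard that avoids both, with Token and Node assumed to be the question's own types:

        #include <vector>

        // Look one token ahead without ever indexing past the end of the vector.
        const Token* peek(const std::vector<Token>& tokens, unsigned int pointer) {
            unsigned int nextIndex = pointer + 1;
            return nextIndex < tokens.size() ? &tokens[nextIndex] : nullptr;
        }

        // Every parsing function should then return something on every path, e.g.:
        //     Node* Statement() {
        //         const Token* t = peek(Tokens, pointer);
        //         if (t == nullptr || t->type != EQUALS)
        //             return nullptr;              // instead of falling off the end
        //         return Assignment_Expr();
        //     }
        // Compiling with -Wall makes the compiler warn about the missing returns.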

    Read the article

  • Balanced Search Tree Query, Asymptotic Analysis

    - by AGeek
    Hi, the situation is as follows: we have n numbers and we have to print them in sorted order. We have access to a balanced dictionary data structure, which supports the operations search, insert, delete, minimum, maximum, each in O(log n) time. We want to retrieve the numbers in sorted order in O(n log n) time using only insert and in-order traversal. The answer to this is:

        Sort()
            initialize(t)
            while(not EOF)
                read(x)
                insert(x, t);
            Traverse(t);

    Now the query is: if we read the elements in time "n" and then traverse the elements in "log n" (in-order traversal) time, then the total time for this algorithm is (n + log n), according to me. Please explain the follow-up of this algorithm for the time calculation. How will it sort the list in O(n log n) time? Thanks.
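
    A short cost accounting, added as an editorial note rather than part of the original question: the loop performs n insertions, and each insertion into a balanced tree of at most n elements costs O(log n), so the insertions dominate, not the traversal:

        \[
        T(n) \;=\; \underbrace{n \cdot O(\log n)}_{\text{insert each of the } n \text{ elements}}
              \;+\; \underbrace{O(n)}_{\text{in-order traversal visits each node once}}
              \;=\; O(n \log n)
        \]

    Reading the input is O(n) in total, but each read is followed by an O(log n) insert, which is where the n log n comes from; the in-order traversal is O(n), not O(log n).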

    Read the article

  • How to optimize class for viewstate

    - by Jeremy
    If I have an object I need to store in viewstate, what kinds of things can I do to optimize the size it takes to store the object? Obviously storing the least amount of data will take less space, but aside from that, are there ways to architect the class, properties, attributes, etc., that will affect how large the serialized output is?

    Read the article

  • Worst Case number of rotations for BST to AVL algorithm?

    - by spacker_lechuck
    I have a basic algorithm below, and I know that the worst-case input BST is one that has degenerated to a linked list from inserts to only one side. How would I compute the worst-case complexity, in terms of the number of rotations, for this BST-to-AVL conversion algorithm?

        IF tree is right heavy
        {
            IF tree's right subtree is left heavy
            {
                Perform Double Left rotation
            }
            ELSE
            {
                Perform Single Left rotation
            }
        }
        ELSE IF tree is left heavy
        {
            IF tree's left subtree is right heavy
            {
                Perform Double Right rotation
            }
            ELSE
            {
                Perform Single Right rotation
            }
        }

    Read the article

  • How to take a collection of bytes and pull typed values out of it?

    - by Pat
    Say I have a collection of bytes:

        var bytes = new byte[] { 0, 1, 2, 3, 4, 5, 6, 7 };

    and I want to pull out a defined value from the bytes as a managed type, e.g. a ushort. What is a simple way to define what types reside at what location in the collection and pull out those values? One (ugly) way is to use System.BitConverter and a Queue or byte[] with an index and simply iterate through, e.g.:

        int index = 0;
        ushort first = System.BitConverter.ToUInt16(bytes, index);
        index += 2; // size of a ushort
        int second = System.BitConverter.ToInt32(bytes, index);
        index += 4;
        ...

    This method gets very, very tedious when you deal with a lot of these structures! I know that there is the System.Runtime.InteropServices.StructLayoutAttribute, which allows me to define the locations of types inside a struct or class, but there doesn't seem to be a way to import the collection of bytes into that struct. If I could somehow overlay the struct on the collection of bytes and pull out the values, that would be ideal. E.g.

        Foo foo = (Foo)bytes; // doesn't work because I'd need to implement the implicit operator
        ushort first = foo.first;
        int second = foo.second;
        ...

        [StructLayout(LayoutKind.Explicit, Size=FOO_SIZE)]
        public struct Foo
        {
            [FieldOffset(0)]
            public ushort first;

            [FieldOffset(2)]
            public int second;
        }

    Any thoughts on how to achieve this?
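
    The question asks about C#, but the "overlay a struct on the bytes" idea may be easier to see stripped down to C++ terms, where it is usually done with std::memcpy per field rather than a cast, so compiler padding never has to match the wire layout. A sketch with made-up names, for illustration only:

        #include <cstddef>
        #include <cstdint>
        #include <cstring>
        #include <vector>

        struct Foo {
            std::uint16_t first;
            std::int32_t  second;
        };

        // Pull a Foo out of a byte buffer: a ushort at offset 0, an int at offset 2.
        // Assumes the buffer holds at least 6 bytes at 'offset' and that the byte
        // order already matches the host.
        Foo readFoo(const std::vector<unsigned char>& bytes, std::size_t offset) {
            Foo foo{};
            std::memcpy(&foo.first,  bytes.data() + offset,     sizeof foo.first);
            std::memcpy(&foo.second, bytes.data() + offset + 2, sizeof foo.second);
            return foo;
        }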

    Read the article

  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment, where I have to generate the desired data to a file via a script, then upload that file to FTP from the host and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like below:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is 28.2 at the end of the file. The problem I am currently trying to solve is that when I update the file (upload from host to FTP, then download from FTP to the Zabbix server), the value does not update. At the start I was trying only log[], but I suspect that log[] treats the file as a real log file and doesn't check the same lines (although, following the documentation, it should with the "all" value), so I added a vfs.file.regexp[] item too. The log[] item has received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been reuploaded and redownloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't check lines already caught if there are any changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use debug logging level for the agent; I haven't found any interesting info about this problem. I have no idea what I might be doing wrong or what I don't know about how Zabbix performs these checks. I see two solutions for this: adding more lines to the file instead of making a new one, or making new files and checking them with logrt[], but those don't satisfy my needs. Any help is greatly appreciated. Of course I will provide additional information if requested; for now I don't know what else might be useful.

    Read the article

  • Bitwise operations in BC?

    - by user355926
    $ bc
        BC> ibase=2
        BC> 110&101    // wanna get 100
        (standard_in) 8: syntax error

    Wikipedia says that the operators are |, & and ^. It may be that they work only in certain bc variants, or I misread something.

    Read the article

  • Why do IEEE floating point numbers calculate the exponent using a biased form?

    - by lenatis
    Let's say, for the float type in C, according to the IEEE floating point specification, there are 8 bits used for the exponent field, and it is calculated by first taking these 8 bits and translating them into an unsigned number, and then subtracting the BIAS, which is 2^7 - 1 = 127; the result is an exponent that ranges from -127 to 128, inclusive. But why can't we just treat this 8-bit pattern as a signed number, since the resulting range is [-128, 127], which is almost the same as the previous one?
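
    One commonly given reason, added here as an editorial note rather than as part of the question: with a biased exponent, the interpreted value of a normalized single-precision number is

        \[
        v \;=\; (-1)^{s} \times 1.f \times 2^{\,E - 127},
        \qquad E = \text{the 8-bit exponent field read as an unsigned integer}
        \]

    and a larger unsigned E always means a larger magnitude. So for two numbers of the same sign, comparing their raw bit patterns as plain unsigned integers gives the same ordering as comparing the floating-point values, which keeps comparison and sorting simple. A two's complement exponent would place negative exponents (bit patterns starting with 1) above positive ones in an unsigned comparison and break that property. The all-zeros and all-ones exponent patterns are also conveniently reserved for zero/subnormals and for infinities/NaN.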

    Read the article

  • Delete an object from a tree

    - by mqpasta
    I have a Find function to find an element in a BST:

        private Node Find(ref Node n, int e)
        {
            if (n == null)
                return null;
            if (n.Element == e)
                return n;
            if (e > n.Element)
                return Find(ref n.Right, e);
            else
                return Find(ref n.Left, e);
        }

    and I use the following code to get a node and then set this node to null:

        Node x = bsTree.Find(1);
        x = null;
        bsTree.Print();

    Supposedly, this node should be deleted from the tree as it is set to null, but it still exists in the tree. I had done this before, but this time I'm missing something and have no idea what.
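
    The question is C#, but the picture is the same as with plain pointers, so here is a hedged C++ sketch of what is going on, with made-up names. x is a copy of the reference stored in the tree; assigning null to it changes only that local copy, while the parent node still holds its own reference to the found node. To remove the node you have to overwrite the link the parent holds, which is what a real BST delete does (along with handling the node's children).

        struct Node {
            int   element;
            Node* left  = nullptr;
            Node* right = nullptr;
        };

        // Node* x = find(root, 1);
        // x = nullptr;              // only the local pointer changes; the tree is untouched
        //
        // Removing a *leaf* means clearing the parent's link to it:
        void detachLeftLeaf(Node* parent) {
            delete parent->left;      // assumes the left child exists and is a leaf
            parent->left = nullptr;   // the tree no longer reaches the node
        }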

    Read the article

  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: in the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support?

    Background: using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, task scanner, etc., the job appears to hang for a long period of time. In monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that files in this path stopped changing. Some key configuration points of the job:

      - It is tied to a specific slave node
      - It builds in a 'custom workspace'
      - It triggers a downstream job that builds in the same custom workspace on the same slave node

    Read the article

  • iPhone - How to use #define in Universal app

    - by Satyam svv
    I'm creating a universal app that runs on iPhone and iPad. I'm using #define to create CGRects, and I want to use two different #defines, one for iPhone and one for iPad. How can I declare them so that the correct one will be picked by the universal app? I think I have to add a little more description to avoid confusion. I have a WPConstants.h file where I declare all the #defines, as below:

        #define PUZZLE_TOPVIEW_RECT    CGRectMake(0, 0, 480, 100)
        #define PUZZLE_MIDDLEVIEW_RECT CGRectMake(0, 100, 480, 100)
        #define PUZZLE_BOTTOMVIEW_RECT CGRectMake(0, 200, 480, 100)

    The above ones are for iPhone. Similarly, for iPad I want to have different #defines. How can I proceed further?

    Read the article
