Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Memorystream and Large Object Heap

    - by Flo
    I have to transfer large files between computers over unreliable connections using WCF. Because I want to be able to resume the transfer and I don't want my file size to be limited by WCF, I am chunking the files into 1MB pieces. These "chunks" are transported as a stream, which works quite nicely so far. My steps are: (1) open a filestream, (2) read a chunk from the file into a byte[] and create a memorystream from it, (3) transfer the chunk, (4) back to (2) until the whole file is sent. My problem is in step 2. I assume that when I create a memory stream from a byte array, it will end up on the LOH and ultimately cause an OutOfMemoryException. I have not actually been able to reproduce this error, so maybe my assumption is wrong. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the max allowed array size and/or the size of my chunks, but I hope there is another solution. My actual question(s): Will my current solution create objects on the LOH, and will that cause me problems? Is there a better way to solve this? Btw.: On the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.
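
    A minimal sketch of the chunking loop described above, in Python rather than C#/WCF purely for illustration (the file path and the send() transport call are hypothetical stand-ins):

        CHUNK_SIZE = 1024 * 1024  # 1 MB chunks, as in the question

        def read_chunks(path, chunk_size=CHUNK_SIZE):
            # yield successive fixed-size chunks without ever holding the whole file in memory
            with open(path, 'rb') as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    yield chunk

        for chunk in read_chunks('bigfile.bin'):  # 'bigfile.bin' is a placeholder path
            send(chunk)  # hypothetical transport call standing in for the WCF stream transfer

    For what it's worth, a 1 MB byte[] in .NET is well above the roughly 85 KB large-object-heap threshold, so the usual mitigation is to reuse one buffer per transfer rather than allocating a fresh array for every chunk.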

    Read the article

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine. Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class: The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?). Somebody asked if my test project has a reference to the original project. Well no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not. Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are: Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning. Move to a different key-value database, such as memcachedb or Tokyo Cabinet. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it). How should I approach this problem?

    Read the article

  • OmniCppComplete: Completing on Class Members which are STL containers

    - by Robert S. Barnes
    Completion on class members which are STL containers is failing. Completion on local objects which are STL containers works fine. For example, given the following files:

        // foo.h
        #include <string>

        class foo {
        public:
            void set_str(const std::string &);
            std::string get_str_reverse( void );
        private:
            std::string str;
        };

        // foo.cpp
        #include "foo.h"

        using std::string;

        string foo::get_str_reverse ( void )
        {
            string temp;
            temp.assign(str);
            reverse(temp.begin(), temp.end());
            return temp;
        } /* ----- end of method foo::get_str ----- */

        void foo::set_str ( const string &s )
        {
            str.assign(s);
        } /* ----- end of method foo::set_str ----- */

    I've generated the tags for these two files using:

        ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q .

    When I type temp. in the cpp I get a list of string member functions as expected. But if I type str. omnicppcomplete spits out "Pattern Not Found". I've noticed that the temp. completion only works if I have the using std::string; declaration. How do I get completion to work on my class members which are STL containers?

    Read the article

  • Sorting by custom field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL for that. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now that it would be better to move the information about an element (name, type, size) into another table. The function that scans a specified directory and fills the table works correctly. Notably, I am adding elements to the tree in a specific order: folders first and then files. After that I can easily fetch and display the whole table on the page using a simple query: SELECT * FROM element WHERE 1=1 ORDER BY left_key. With the result of that query and another function I can generate the correct HTML code (<ul><li>... and so on) to display the tree. Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP: SELECT * FROM element WHERE parent_id = {$parentId} ORDER BY element_type (so folders would be first), size (or name for example) asc/desc. After that, for each result which has type = 'folder', I will send another query to get its contents, as in the sketch below. It's also possible to fetch the whole tree ordered by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?
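
    A rough sketch of that recursive, folders-first traversal, in Python for brevity (fetch_children() is a hypothetical helper that runs the per-parent query):

        def walk(parent_id, sort_key='element_size', reverse=False, depth=0):
            rows = fetch_children(parent_id)                         # hypothetical DB helper: SELECT ... WHERE parent_id = ...
            rows.sort(key=lambda r: r[sort_key], reverse=reverse)    # the user-chosen order (size, name, ...)
            rows.sort(key=lambda r: r['element_type'] != 'folder')   # stable sort keeps folders ahead of files
            for row in rows:
                print('  ' * depth + row['element_name'])
                if row['element_type'] == 'folder':
                    walk(row['element_id'], sort_key, reverse, depth + 1)

    Whether the recursion runs as one query per folder or over a single left_key-ordered fetch sorted in PHP is exactly the trade-off the question asks about; the two stable sorts are the part that stays the same either way.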

    Read the article

  • How To Create a Flexible Plug-In Architecture?

    - by justkt
    A repeating theme in my development work has been the use of or creation of an in-house plug-in architecture. I've seen it approached many ways - configuration files (XML, .conf, and so on), inheritance frameworks, database information, libraries, and others. In my experience: A database isn't a great place to store your configuration information, especially co-mingled with data. Attempting this with an inheritance hierarchy requires knowledge about the plug-ins to be coded in, meaning the plug-in architecture isn't all that dynamic. Configuration files work well for providing simple information, but can't handle more complex behaviors. Libraries seem to work well, but the one-way dependencies have to be carefully created. As I seek to learn from the various architectures I've worked with, I'm also looking to the community for suggestions. How have you implemented a solid plug-in architecture? What was your worst failure (or the worst failure you've seen)? What would you do if you were going to implement a new plug-in architecture? What SDK or open source project that you've worked with has the best example of a good architecture? A few examples I've been finding on my own: Perl's Module::Pluggable; an SO question with a list for Java (including Service Provider Interfaces); an SO question for C++ pointing to a Dr. Dobb's article. These examples seem to play to various language strengths. Is a good plugin architecture necessarily tied to the language? Is it best to use tools to create a plugin architecture, or to do it on one's own following models?
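
    For comparison across languages, here is a minimal registry-style sketch in Python (all names are illustrative, not from any particular framework): plugins register themselves as a side effect of being imported, and the host looks them up by name with no compile-time knowledge of their classes:

        REGISTRY = {}

        def plugin(name):
            # decorator: registration happens when the plugin module is imported
            def decorator(cls):
                REGISTRY[name] = cls
                return cls
            return decorator

        @plugin('csv-export')
        class CsvExporter:
            def run(self, data):
                return ','.join(map(str, data))

        # host code: discover by name, never by concrete class
        exporter = REGISTRY['csv-export']()
        print(exporter.run([1, 2, 3]))

    The same shape shows up as Module::Pluggable in Perl and as service providers in Java; only the discovery mechanism (imports, classpath scanning, shared libraries) changes.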

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in the lines of those files. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm asking someone to implement it for me; it is true, though, that I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it this way. EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that if I first did the standard implementation, it would then be more or less easy to redo it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, briefly, is the pseudo code of my algorithm:

        LIST termList   (there is a method that creates this list from a lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs? I understand the MapReduce intuition, and my example seems perfectly suited to MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please, please help.
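
    A hedged sketch of how that pseudo code maps onto map and reduce, written Hadoop Streaming-style in Python purely for illustration (the term list is assumed to be shipped as a side file, and the environment variable carrying the input file name differs between Hadoop versions):

        # ---- mapper.py ----
        import os, sys
        TERMS = set(line.strip() for line in open('terms.txt'))    # assumed side file built from the lucene index
        source = os.environ.get('map_input_file', 'unknown')       # env var name varies across Hadoop versions
        for line in sys.stdin:
            words = line.split()
            if len(words) >= 3 and words[2] in TERMS:
                print('%s\t%s' % (source, line.rstrip('\n')))      # key = source file, value = matching line

        # ---- reducer.py ----
        import sys
        current = None
        for line in sys.stdin:
            key, _, value = line.partition('\t')
            if key != current:
                current = key
                print('== ' + key)                                 # one block of output per CheckedFile.txt
            print(value.rstrip('\n'))

    The "many output files" worry then becomes "many keys": each reducer group corresponds to one CheckedFile.txt, and writing one physical file per key (e.g. with MultipleOutputs in the Java API) is an option rather than a requirement.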

    Read the article

  • Python: Convert format string to regular expression

    - by miracle2k
    The users of my app can configure the layout of certain files via a format string. For example, the config value the user specifies might be: layout = '%(group)s/foo-%(locale)s/file.txt'. I now need to find all such files that already exist. This seems easy enough using the glob module: glob_pattern = layout % {'group': '*', 'locale': '*'}; glob.glob(glob_pattern). However, now comes the hard part: given the list of glob results, I need to get all those filename parts that matched a given placeholder, for example all the different "locale" values. I thought I would generate a regular expression for the format string that I could then match against the list of glob results (or possibly skip glob and do all the matching myself). But I can't find a nice way to create the regex with both the proper group captures and escaping of the rest of the input. For example, this might give me a regex that matches the locales: regex = layout % {'group': '.*', 'locale': '(.*)'}. But to be sure the regex is valid, I need to pass it through re.escape(), which then also escapes the regex syntax I have just inserted. Calling re.escape() first ruins the format string. I know there's fnmatch.translate(), which would even give me a regex - but not one that returns the proper groups. Is there a good way to do this, without a hack like replacing the placeholders with a regex-safe unique value etc.? Is there possibly some way (a third party library perhaps?) that allows dissecting a format string in a more flexible way, for example splitting the string at the placeholder locations?
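
    The "regex-safe unique value" approach mentioned at the end is also the most direct one; a small sketch of it, reusing the example layout above:

        import re

        layout = '%(group)s/foo-%(locale)s/file.txt'
        fields = ['group', 'locale']

        # substitute sentinels, escape everything else, then swap the sentinels for named groups
        sentinels = {f: '@@%d@@' % i for i, f in enumerate(fields)}
        pattern = re.escape(layout % sentinels)
        for f, s in sentinels.items():
            pattern = pattern.replace(re.escape(s), '(?P<%s>.*?)' % f)
        pattern = '^%s$' % pattern

        m = re.match(pattern, 'admin/foo-en_US/file.txt')
        if m:
            print(m.group('locale'))    # -> en_US

    glob can still do the filesystem walk; the regex is only applied afterwards to pull the placeholder values back out of each result.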

    Read the article

  • RESTful copy/move operations?

    - by ladenedge
    I am trying to design a RESTful filesystem-like service, and copy/move operations are causing me some trouble. First of all, uploading a new file is done using a PUT to the file's ultimate URL: PUT /folders/42/contents/<name> The question is, what if the new file already resides on the system under a different URL? Copy/move Idea 1: PUTs with custom headers. This is similar to S3's copy. A PUT that looks the same as the upload, but with a custom header: PUT /folders/42/contents/<name> X-custom-source: /files/5 This is nice because it's easy to change the file's name at copy/move time. However, S3 doesn't offer a move operation, perhaps because a move using this scheme won't be idempotent. Copy/move Idea 2: POST to parent folder. This is similar to the Google Docs copy. A POST to the destination folder with XML content describing the source file: POST /folders/42/contents ... <source>/files/5</source> <newName>foo</newName> I might be able to POST to the file's new URL to change its name..? Otherwise I'm stuck with specifying a new name in the XML content, which amplifies the RPCness of this idea. It's also not as consistent with the upload operation as idea 1. Ultimately I'm looking for something that's easy to use and understand, so in addition to criticism of the above, new ideas are certainly welcome!

    Read the article

  • Android NDK import-module / code reuse

    - by Graeme
    Morning! I've created a small NDK project which allows dynamic serialisation of objects between Java and C++ through JNI. The logic works like this: Bean - JavaCInterface.java - JavaCInterface.cpp - JavaCInterface.java - Bean. The problem is I want to use this functionality in other projects. I separated out the test code from the project and created a "Tester" project. The tester project sends a Java object through to C++, which then echoes it back to the Java layer. I thought linking would be pretty simple ("simple" in terms of NDK/JNI usually means a day of frustration). I added the JNIBridge project as a source project and added the following lines to the build files:

        NDK_MODULE_PATH=.../JNIBridge/jni/"

        JNIBridge/jni/JavaCInterface/Android.mk:
            ...
            include $(BUILD_STATIC_LIBRARY)

        JNITester/jni/Android.mk:
            ...
            include $(BUILD_SHARED_LIBRARY)
            $(call import-module, JavaCInterface)

    This all works fine. The C++ files which rely on headers from the JavaCInterface module work fine. Also the Java classes can happily use interfaces from the JNIBridge project. All the linking is happy. Unfortunately JavaCInterface.java, which contains the native method calls, cannot see the JNI method located in the static library. (Logically they are in the same project, but both are imported into the project where you wish to use them through the above mechanism.) My current solutions are as follows; I'm hoping someone can suggest something that will preserve the modular nature of what I'm trying to achieve. My current solution would be to include the JavaCInterface cpp files in the calling project like so:

        LOCAL_SRC_FILES := FunctionTable.cpp $(PATH_TO_SHARED_PROJECT)/JavaCInterface.cpp

    But I'd rather not do this, as it would lead to me needing to update each dependent project if I changed the JavaCInterface architecture. Alternatively, I could create a new set of JNI method signatures in each local project which then link to the imported modules. Again, this binds the implementations too tightly.

    Read the article

  • How to merge objects in php ?

    - by The Devil
    Hey everybody, I'm currently re-writing a class which handles xml files. Depending on the xml file and its structure I sometimes need to merge objects. Let's say once I have this:

        <page name="a title"/>

    And another time I have this:

        <page name="a title">
            <permission>administrator</permission>
        </page>

    Before, I needed only the attributes from the "page" element. That's why a lot of my code expects an object containing only the attributes ($loadedXml->attributes()). Now there are xml files in which the <permission> element is required. I did manage to merge the objects (though not as I wanted), but I can't manage to access one of them (most probably it's something I'm missing). To merge my objects I used this code:

        (object) array_merge(
            (array) $loadedXml->attributes(),
            (array) $loadedXml->children()
        );

    This is what I get from print_r():

        stdClass Object
        (
            [@attributes] => Array
                (
                    [name] => a title
                )
            [permission] => Array
                (
                    [0] => administrator
                )
        )

    So now my question is: how do I access the @attributes entry? Thanks in advance, The Devil

    Read the article

  • smlnj rephrased question for listdir(filename, directoryname)

    - by czy1985
    I am a newbie learning SML, and the question I've been given involves IO functions that I have no idea how to use even after reading about them. Here are the two questions I really need help with to get me started; please provide me with code and some explanation, and I will be able to trial-and-error my way through the other questions with the code given. Q1) listdir(filename, directoryname), which, given the name of a directory, lists its contents in a text file. The listing is in a form that makes it easy to separate filenames, dates and sizes from each other (similar to what MS-DOS does with "dir", but instead of just listing it out, it places all the files and details into a text file). Q2) readlist(filename), which reads a list of filenames (each of which was produced by listdir in Q1) and combines them into one large list (reads from the text files from Q1 and then assigns the contents to one big list containing all the information). The thing is, I only learned the introduction section from the lecturer in school; there isn't even a system input or output example shown, and not even the "use file" function is taught. If anyone who knows SML sees this, please help. Thanks to anyone who takes the effort to help me. Thanks for the reply; currently I am using SMLNJ to try and do this. Basically, Q1 requires me to list the files of the "directoryname" provided into a text file named "filename", and Q2 requires me to read from the "filename" text file and then place the contents into one large list. Duplicate of: smlnj listdir
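
    Since the two functions are really about I/O shape rather than SML specifics, here is the same idea sketched in Python purely for illustration (the field separator and date format are arbitrary choices):

        import os, time

        def listdir(filename, directoryname):
            # one 'name|size|date' line per entry, so the fields are trivial to split later
            with open(filename, 'w') as out:
                for name in sorted(os.listdir(directoryname)):
                    st = os.stat(os.path.join(directoryname, name))
                    stamp = time.strftime('%Y-%m-%d', time.localtime(st.st_mtime))
                    out.write('%s|%d|%s\n' % (name, st.st_size, stamp))

        def readlist(filenames):
            # combine several listing files produced by listdir() into one big list of records
            records = []
            for fn in filenames:
                with open(fn) as f:
                    records.extend(line.rstrip('\n').split('|') for line in f)
            return records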

    Read the article

  • php automatically commented with apache

    - by clement
    We have installed Apache 2.2 and ActivePerl to run Bugzilla, all of that on a Windows Server 2003 machine. Now we want to install PHP on the server in order to set up a wiki. I followed the steps in a tutorial to install PHP and enable it from Apache. After all those steps, and a couple of restarts, when I try a simple phpinfo() the whole PHP code comes back commented out: < ! - - ?php phpinfo(); ? - - Now, httpd.conf was already edited for Perl, and it may be those edits that cause the problem. Here is the whole httpd.conf file:

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        Listen 6969
        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule php5_module "c:/php/php5apache2_2.dll"
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule setenvif_module modules/mod_setenvif.so
        User daemon
        Group daemon
        ServerAdmin [email protected]
        DocumentRoot C:/bugzilla-4.4.2/
        Options FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Options Indexes FollowSymLinks ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
        ScriptInterpreterSource Registry-Strict
        DirectoryIndex index.html index.html.var index.cgi index.php
        Order allow,deny
        Deny from all
        Satisfy All
        ErrorLog "logs/error.log"
        LogLevel warn
        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %b" common
        <IfModule logio_module>
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        </IfModule>
        ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
        DefaultType text/plain
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz
        AddHandler cgi-script .cgi
        AddType application/x-httpd-php .php
        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin
        PHPIniDir "c:/php"

    Read the article

  • segmentation fault in file operations in c

    - by mekasperasky
    #include<stdio.h>

        /* this is a lexer which recognizes constants, variables, symbols, identifiers,
           functions, comments and also header files. It stores the lexemes in 3 different
           files. One file contains all the headers and the comments. Another file will
           contain all the variables, another will contain all the symbols. */

        int main()
        {
            int i;
            char a,b[20],c;
            FILE *fp1;
            fp1=fopen("source.txt","r"); //the source file is opened in read-only mode and will be passed through the lexer
            //now let's remove all the white spaces and store the rest of the words in a file
            if(fp1==NULL)
            {
                perror("failed to open source.txt");
                //return EXIT_FAILURE;
            }
            i=0;
            while(1)
            {
                a=fgetc(fp1);
                if(a !="")
                {
                    b[i]=a;
                }
                else
                {
                    fprintf(fp1, "%.20s\n", b);
                    i=0;
                    continue;
                }
                i=i+1;
                /*Switch(a)
                {
                    case EOF :return eof;
                    case '+':sym=sym+1;
                    case '-':sym=sym+1;
                    case '*':sym=sym+1;
                    case '/':sym=sym+1;
                    case '%':sym=sym+1;
                    case ' */
            }
            return 0;
        }

    How does this code end up in a segmentation fault?

    Read the article

  • Parse a CSV file using python (to make a decision tree later)

    - by Margaret
    First off, full disclosure: this is going towards a uni assignment, so I don't want to receive code. :) I'm more looking for approaches; I'm very new to Python, having read a book but not yet written any code. The entire task is to import the contents of a CSV file, create a decision tree from the contents of that CSV file (using the ID3 algorithm), and then parse a second CSV file to run against the tree. There's a big (understandable) preference to have it capable of dealing with different CSV files (I asked if we were allowed to hard-code the column names, mostly to eliminate it as a possibility, and the answer was no). The CSV files are in a fairly standard format; the header row is marked with a #, then the column names are listed, and every row after that is a simple series of values. Example:

        # Column1, Column2, Column3, Column4
        Value01, Value02, Value03, Value04
        Value11, Value12, Value13, Value14

    At the moment, I'm trying to work out the first part: parsing the CSV. To make the decisions for the decision tree, a dictionary structure seems like it's going to be the most logical, so I was thinking of doing something along these lines:

        Read in each line, character by character
        If the character is not a comma or a space
            Append character to temporary string
        If the character is a comma
            Append the temporary string to a list
            Empty string
        Once a line has been read
            Create a dictionary using the header row as the key (somehow!)
            Append that dictionary to a list

    However, if I do things that way, I'm not sure how to make a mapping between the keys and the values. I'm also wondering whether there is some way to perform an action on every dictionary in a list, since I'll need to be doing things to the effect of "Everyone return their values for columns Column1 and Column4, so I can count up who has what!" - I assume that there is some mechanism, but I don't think I know how to do it. Is a dictionary the best way to do it? Would I be better off doing things using some other data structure? If so, what?
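
    A small sketch of the parsing half under the '#'-header assumption above (the file name is hypothetical): the csv module replaces the character-by-character loop, and Counter covers the "who has what" tallies:

        import csv
        from collections import Counter

        def load_rows(path):
            # returns a list of dicts keyed by the column names from the '#' header row
            with open(path, newline='') as f:
                header = f.readline().lstrip('#')
                fieldnames = [name.strip() for name in header.split(',')]
                return list(csv.DictReader(f, fieldnames=fieldnames, skipinitialspace=True))

        rows = load_rows('training.csv')                               # hypothetical file name
        tally = Counter((r['Column1'], r['Column4']) for r in rows)    # counts per (Column1, Column4) pair

    Because each row is a plain dict, "perform an action on every dictionary in a list" is just a loop or a comprehension over rows.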

    Read the article

  • Proper way of naming your Java Google App Engine Project

    - by Saif Bechan
    I am starting out with Google's App Engine in Java. I have seen the tutorial video, but I do not understand the naming of the project package. It is going to be a guestbook, and that's why the name is guestbook; I understand that part. But after that I see a package name. 1) Is that something you import into the project, or is it something you create? I have seen this a lot in projects, something like com.xxx.xxx. 2) How do you name this type of thing, or is this an import? I have looked at another tutorial where they take the naming to a whole new level. The name of both the project and the package is de.vogella.gae.java.todo. 3) What does this mean in Java terms? 4) Maybe one of you can help me with this specific project I want to start. I want to create a Google App Engine project that for now only serves static files. I will leave the project empty and just put all my static files in the war directory of the project. I want the domain name to be mydomainstatic

    Read the article

  • response.write only working in IE for ASP.NET

    - by slowlycooked
    I'm using uploadify (http://www.uploadify.com/) to upload video to my site, then convert it into *.flv using ffmpeg and play a preview. But it doesn't fully work with Firefox, Chrome or Safari. Uploadify provides an onComplete interface, so in the script (.ashx, .php) your site uses for saving uploaded files you can use response.write("blabla") (or echo "blabla") to invoke the JavaScript function registered as onComplete. I have tested with a few video files like avi, mpg and mp4; they are less than 50mb, and they all worked in all 4 browsers. However, when I tried to upload a 75mb mp4 file, it worked in IE but didn't work in the other three. I can see the .flv file has been created in the upload folder, and I can see the debug message output after response.write("blabla"), but the JavaScript function was not invoked, i.e. the preview didn't play. Anyone know why? Is there a timeout or something on response.write so that after a period of time it won't work? E.g. the 75mb file took longer to convert than the other, smaller files I tried. Thanks

    Read the article

  • Difficulty porting raw PCM output code from Java to Android AudioTrack API.

    - by IndigoParadox
    I'm attempting to port an application that plays chiptune (NSF, SPC, etc.) music files from Java SE to Android. The Android API seems to lack the javax multimedia classes that this application uses to output raw PCM audio. The closest analog I've found in the API is AudioTrack, and so I've been wrestling with that. However, when I try to run one of my sample music files through my port-in-progress, all I get back is static. My suspicion is that it's the AudioTrack I've set up which is at fault. I've tried various different constructors, but it all just outputs static in the end. The DataLine setup in the original code is something like:

        AudioFormat audioFormat = new AudioFormat( AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, true );
        DataLine.Info lineInfo = new DataLine.Info( SourceDataLine.class, audioFormat );
        DataLine line = (SourceDataLine)AudioSystem.getLine( lineInfo );

    The constructor I'm using right now is:

        AudioTrack = new AudioTrack( AudioManager.STREAM_MUSIC, 44100,
                                     AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                                     AudioFormat.ENCODING_PCM_16BIT,
                                     AudioTrack.getMinBufferSize( 44100,
                                         AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                                         AudioFormat.ENCODING_PCM_16BIT ),
                                     AudioTrack.MODE_STREAM );

    I've replaced constants and variables in those so they make sense as concisely as possible, but my basic question is whether there are any obvious problems in the assumptions I made when going from one format to the other.

    Read the article

  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific. I want to write a program which processes files. The processing is nontrivial, so the best way is to split the different phases into standalone modules which would then be used as necessary (since sometimes I will only be interested in the output of module A, sometimes I would need the output of five other modules, etc.). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover I want to avoid doing certain processing more than once (if module A creates some data which then needs to be processed by modules B and C, I don't want to run module A twice to create the input for modules B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple - just parse arguments and run the required modules (and perhaps give some output, or should this be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly for code readability and maintainability, and to be able to have more people working on different modules without the need for some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation; the original file is not changed. Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, pdf books...).
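
    The "compute each module's output at most once" requirement is essentially a memoised dependency walk; a sketch in Python only for brevity, since the question is about structure rather than C++ syntax (all names are illustrative):

        class Module:
            name = None
            requires = ()                    # names of modules whose results this one consumes
            def run(self, data, results):
                raise NotImplementedError

        def run_modules(wanted, modules, data):
            # run the requested modules plus their dependencies, computing each result exactly once
            by_name = {m.name: m for m in modules}
            results = {}
            def compute(name):
                if name not in results:
                    mod = by_name[name]
                    for dep in mod.requires:
                        compute(dep)
                    results[name] = mod.run(data, results)
                return results[name]
            return {name: compute(name) for name in wanted}

    In C++ the same shape would be a small Module interface plus a driver that owns the cache of intermediate results, so the only one-way dependency is from the driver to the module libraries.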

    Read the article

  • Using mod-rewrite to conditionally select existing file in a subdirectory based on Host header?

    - by Kevin Hakanson
    I'm working through a problem where I want to select a different static content file based on the incoming Host header. The simple example is a mapping from URLs to files like this:

        www.example.com/images/logo.gif   -> \images\logo.gif
        skin2.example.com/images/logo.gif -> \images\skin2\logo.gif
        skin3.example.com/images/logo.gif -> \images\skin3\logo.gif

    I have this working with the following RewriteRules, but I don't like how much I have to repeat myself. Each host has the same set of rules, and each RewriteCond and RewriteRule has the same path. I'd like to use a RewriteMap, but I don't know how to use it to map %{HTTP_HOST} to the path.

        <VirtualHost *:80>
            DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs"
            ServerName www.example.com
            ServerAlias skin2.example.com
            ServerAlias skin3.example.com
            RewriteEngine On
            RewriteCond %{HTTP_HOST} skin2.example.com
            RewriteCond %{DOCUMENT_ROOT}$1/skin2/$2 -f
            RewriteRule ^(.*)/(.*) $1/skin2/$2 [L]
            RewriteCond %{HTTP_HOST} skin3.example.com
            RewriteCond %{DOCUMENT_ROOT}$1/skin3/$2 -f
            RewriteRule ^(.*)/(.*) $1/skin3/$2 [L]
        </VirtualHost>

    The concept behind the rules is: if the same filename exists in a subdirectory for that host, use it instead of the directly targeted file. This uses host-based subdirectories at the lowest level, not a top-level subdirectory, to separate content.

    Read the article

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question: I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with the Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use Amazon CloudDB, or write some code which stores the db on S3 and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow? My questions are: 1. Is my understanding of AWS correct? and 2. Is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. Would love to have the scalability of AWS though...

    Read the article

  • What does it mean to double license?

    - by Adrian Panasiuk
    What does it mean to double-license code? I can't just put both licenses in the source files. That would mean that I mandate users to follow the rules of both of them, but the licenses will probably be contradictory (otherwise there'd be no reason to double-license). I guess this is something like cryptographic chaining: cipher = crypt_2(crypt_1(clear)) (generally) means that cipher is neither the output of crypt_2 on clear nor the output of crypt_1 on clear; it's the output of the composition. Likewise, in double-licensing, in reality my code has one license; it's just that this new license says: please follow all of the rules of license1, or all of the rules of license2, and you are hereby granted the right to redistribute this application under this "double" license, license1 or license2, or any license under which license1 or license2 allows you to redistribute this software, in which case you shall replace the relevant licensing information in this application with that of the new license. (Does this mean that before someone may use the app under license1, he has to perform the operation of redistributing it to himself? How would he document the fact that he did that operation?) Am I correct? What LICENSE file, and what text in the source files, would I need if I wanted to double-license under, for the sake of example, Apache v2 and GPLv3?

    Read the article

  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load up my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints. I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:

        Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
        Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll

    I'm using the console version there, but I have tried calling the GUI as well. Both give me the same error when I try to start debugging: Cannot start test project 'TestDSP' because the project does not contain any tests. Is this because I normally load \DSP.nunit into the NUnit GUI and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework, and that's why it's failing to find the NUnit tests? [Edit] To those asking about test fixtures, one of my .cs files in the TestDSP project looks roughly like this:

        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    ...some tests...
                }
            }
        }

    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-) [FINAL SOLUTION] The big problem was the project type I'd used. If you pick Other Languages->Visual C#->Test->Test Project when you're choosing the project type, Visual Studio will try to use its own testing framework as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.

    Read the article

  • real time stock quotes, StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream, and open it using the following code:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader( stream );

    The size of the received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, it takes about 6 minutes for parsing and extracting, which my boss isn't happy with; he told me he'd want the whole process done in TWO minutes. I've identified that the part of the program that takes most of the time to finish is parsing and extracting. I've tried to optimize the code to make it faster; the following is what I have now after some optimization:

        // skip lines at the top
        for(int i=0;i<1500;++i)
            reader.ReadLine();
        // read the line that contains the price
        string theLine = reader.ReadLine();
        // ... extract the price from the line

    Now it takes about 4 minutes to process all the files, but there is still a significant gap from what my boss is expecting. So I am wondering, is there any other way that I can further speed up the parsing and extracting and have everything done within 2 minutes?
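
    For what it's worth, once the per-page work is down to skipping to one line, most of the remaining time is network wait, so overlapping the 900 requests is a common next step; a rough Python sketch of that idea only (parse_price() and the URLs are hypothetical stand-ins):

        from concurrent.futures import ThreadPoolExecutor
        import urllib.request

        def fetch_price(url):
            with urllib.request.urlopen(url) as resp:
                text = resp.read().decode('utf-8', 'replace')
                line = text.splitlines()[1500]     # the line that carries the quote, as above
                return parse_price(line)           # hypothetical parser for that markup

        urls = ['https://example.com/quote/%d' % i for i in range(900)]   # placeholder URLs
        with ThreadPoolExecutor(max_workers=20) as pool:
            prices = list(pool.map(fetch_price, urls))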

    Read the article

  • Install h5py in Mac OS X 10.6.3

    - by zyq524
    I'm trying to install h5py on Mac OS X 10.6.3. First I installed HDF5 1.8, using the following commands:

        ./configure \
            --prefix=/Library/Frameworks/Python.framework/Versions/Current \
            --enable-shared \
            --enable-production \
            --enable-threadsafe \
            CPPFLAGS=-I/Library/Frameworks/Python.framework/Versions/Current/include \
            LDFLAGS=-L/Library/Frameworks/Python.framework/Versions/Current/lib
        make
        make check
        sudo make install

    Then I installed h5py:

        /Library/Frameworks/Python.framework/Versions/Current/bin/python \
            setup.py \
            build \
            --api=18 \
            --hdf5=/Library/Frameworks/Python.framework/Versions/Current

    Then I got these errors:

        Configure: Autodetecting HDF5 settings...
        Custom HDF5 dir: /Library/Frameworks/Python.framework/Versions/Current
        Custom API level: (1, 8)
        ld: warning: in detect/vers.o, file was built for unsupported file format which is not the architecture being linked (i386)
        ld: warning: in /Library/Frameworks/Python.framework/Versions/Current/lib/libhdf5.dylib, file was built for unsupported file format which is not the architecture being linked (i386)
        Undefined symbols:
          "_main", referenced from:
              start in crt1.10.5.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        Failed to compile HDF5 test program.  Please check to make sure:
        * You have a C compiler installed
        * A development version of Python is installed (including header files)
        * A development version of HDF5 is installed (including header files)
        * If HDF5 is not in a default location, supply the argument --hdf5=<path>
        error: command 'cc' failed with exit status 1

    I just updated my Xcode, and I don't know whether this is because of my gcc's default settings. If so, how can I get rid of this error? Thanks.

    Read the article
