Search Results

Search found 49518 results on 1981 pages for 'configuration files'.

Page 992/1981

  • Proper way of naming your Java Google App Engine Project

    - by Saif Bechan
    I am starting out with Google's App Engine in Java. I have seen the tutorial video, but I do not understand the naming of the project package. It is going to be a guestbook, which is why the name is guestbook; I understand that part. But after that I see the package name. 1) Is that something you import into the project, or something you create? I have seen this a lot in projects, something like com.xxx.xxx. 2) How do you name this kind of thing, or is it an import? I have looked at another tutorial where they take the naming to a whole new level: the name of both the project and the package is de.vogella.gae.java.todo. 3) What does this mean in Java terms? 4) Maybe one of you can help me with the specific project I want to start. I want to create a Google App Engine project that, for now, only serves static files. I will leave the project empty and just put all my static files in the war directory of the project. I want the domain name to be mydomainstatic

  • Windows Azure - Automatic Load Balancing - partitioning

    - by veda
    I was going through some videos and found that Windows Azure groups blobs into partitions based on a partition key and automatically load-balances these partitions across its servers. The partition key for a blob is the blob name, so Azure partitions automatically by blob name. Now, my question is: can I make Azure partition based on the container name instead? I want my partition key to be the container name. For example, I have a storage account with 2 containers named container1 and container2. In container1 I have 1000 files named 1.txt, 2.txt, 3.txt, ..., 501.txt, 502.txt, ..., 999.txt, 1000.txt, and in container2 I have another 1000 files named 1001.txt, 1002.txt, 1003.txt, ..., 1501.txt, 1502.txt, ..., 1999.txt, 2000.txt. Will Windows Azure generate 2000 partitions based on the blob names and serve me through several servers? Wouldn't it be better if Azure partitioned based on the container name: container1 on one server and container2 on another?

  • Why does TeX/LaTeX not speed up in subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, calling LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file. (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could detect the impact of that. Also, there might be situations where 100% correctness is not needed as long as it's fast.) Is there something in the TeX language which makes this complicated or impossible to accomplish, or is it just that in the original implementation of TeX there was no need for it (because it would have been slow anyway on those large computers)? But then, on the other hand, why doesn't this annoy other people so much that they've started a fork which has some sort of caching (or a transparent conversion of TeX files to a format which is faster to parse)? Is there anything I can do to speed up subsequent runs of LaTeX, apart from putting all the content into chapterXX.tex files and then commenting them out?

  • What does it mean to double license?

    - by Adrian Panasiuk
    What does it mean to double-license code? I can't just put both licenses in the source files; that would mean I mandate users to follow the rules of both of them, but the licenses will probably be contradictory (otherwise there'd be no reason to double-license). I guess this is something like cryptographic chaining: cipher = crypt_2(crypt_1(clear)) (generally) means that cipher is neither the output of crypt_2 on clear nor the output of crypt_1 on clear; it's the output of the composition. Likewise, in double-licensing my code in reality has one license, it's just that this new license says: please follow all of the rules of license1, or all of the rules of license2, and you are hereby granted the right to redistribute this application under this "double" license, license1 or license2, or any license under which license1 or license2 allows you to redistribute this software, in which case you shall replace the relevant licensing information in this application with that of the new license. (Does this mean that before someone may use the app under license1, he has to perform the operation of redistributing it to himself? How would he document the fact that he performed that operation?) Am I correct? What LICENSE file and what text in the source files would I need if I wanted to double-license under, for the sake of example, Apache v2 and GPLv3?

  • Using mod-rewrite to conditionally select existing file in a subdirectory based on Host header?

    - by Kevin Hakanson
    I'm working through a problem where I want to select a different static content file based on the incoming Host header. The simple example is a mapping from URLs to files like this:

        www.example.com/images/logo.gif   -> \images\logo.gif
        skin2.example.com/images/logo.gif -> \images\skin2\logo.gif
        skin3.example.com/images/logo.gif -> \images\skin3\logo.gif

    I have this working with the following RewriteRules, but I don't like how much I have to repeat myself: each host has the same set of rules, and each RewriteCond and RewriteRule has the same path. I'd like to use RewriteMap, but I don't know how to use it to map %{HTTP_HOST} to the path.

        <VirtualHost *:80>
            DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs"
            ServerName www.example.com
            ServerAlias skin2.example.com
            ServerAlias skin3.example.com

            RewriteEngine On

            RewriteCond %{HTTP_HOST} skin2.example.com
            RewriteCond %{DOCUMENT_ROOT}$1/skin2/$2 -f
            RewriteRule ^(.*)/(.*) $1/skin2/$2 [L]

            RewriteCond %{HTTP_HOST} skin3.example.com
            RewriteCond %{DOCUMENT_ROOT}$1/skin3/$2 -f
            RewriteRule ^(.*)/(.*) $1/skin3/$2 [L]
        </VirtualHost>

    The concept behind the rules is: if the same filename exists in a subdirectory for that host, use it instead of the directly targeted file. This uses host-based subdirectories at the lowest level, not a top-level subdirectory to separate content.

  • PHP automatically commented with Apache

    - by clement
    We have installed Apache 2.2 and ActivePerl to run Bugzilla, all of that on Windows Server 2003. Now we want to install PHP on the server to set up a wiki. I followed a tutorial to install PHP and enable it from Apache. After all those steps I restarted a couple of times, and when I try a simple phpinfo(), the whole PHP code comes out commented:

        <!--?php phpinfo(); ?-->

    Now, the httpd.conf had already been edited for Perl, and it may be those edits that cause the problem. Here is the whole httpd.conf file:

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        Listen 6969
        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule php5_module "c:/php/php5apache2_2.dll"
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule setenvif_module modules/mod_setenvif.so
        User daemon
        Group daemon
        ServerAdmin [email protected]
        DocumentRoot C:/bugzilla-4.4.2/
        Options FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Options Indexes FollowSymLinks ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
        ScriptInterpreterSource Registry-Strict
        DirectoryIndex index.html index.html.var index.cgi index.php
        Order allow,deny
        Deny from all
        Satisfy All
        ErrorLog "logs/error.log"
        LogLevel warn
        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %b" common
        <IfModule logio_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        </IfModule>
        ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
        DefaultType text/plain
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz
        AddHandler cgi-script .cgi
        AddType application/x-httpd-php .php
        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin
        PHPIniDir "c:/php"

  • Best XML format for log events in terms of tool support for data mining and visualization?

    - by Thorbjørn Ravn Andersen
    We want to be able to create log files from our Java application suited for later processing by tools, to help investigate bugs and gather performance statistics. Currently we use the traditional "log stuff which may or may not be flattened into text form and appended to a log file" approach, but this works best for small amounts of information read by a human. After careful consideration, the best bet seems to be storing the log events as XML snippets in text files (which are then treated like any other log file) and downloading them to a machine with the appropriate tool for post-processing. I'd like to use as widely supported an XML format as possible, and right now I am in the "research, then make a decision" phase. I'd appreciate any help, both in terms of XML formats and tools, and I'd be happy to write glue code to get what I need. What I've found so far:

        log4j XML format: supported by Chainsaw and Vigilog
        Lilith XML format: supported by Lilith

    Uninvestigated tools:

        Microsoft Log Parser: apparently supports XML
        OS X log viewer

    plus there are a lot of tools on http://www.loganalysis.org/sections/parsing/generic-log-parsers/. Any suggestions?

  • [WEB] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello. The situation: we are a little company of three people; each has a localhost webserver, and most projects (previous and current) are on one shared network disk. We have a virtual server which hosts some of our clients' sites and our own site. Our standard workflow is: coder PC -> programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that two or three guys are working on the same project at the same time: one on the dev version, two on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess up (thanks ILMV :]) any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version on the live webserver. The question: we are looking for a way to simplify this workflow while updating websites, ideally some sort of diff uploader or probably a VCS (Git/SVN/...), but we are not completely sure where to begin or which way would be ideal; therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and recommended workflows. We will probably also need to use a Mac in the process, so if that won't be a problem, that would be even better. Thank you

  • MemoryStream and the Large Object Heap

    - by Flo
    I have to transfer large files between computers via unreliable connections using WCF. Because I want to be able to resume the transfer and I don't want my file size limited by WCF, I am chunking the files into 1 MB pieces. These "chunks" are transported as streams, which works quite nicely so far. My steps are:

        1. open the filestream
        2. read a chunk from the file into a byte[] and create a memorystream from it
        3. transfer the chunk
        4. back to 2. until the whole file is sent

    My problem is in step 2. I assume that when I create a memory stream from a byte array, it will end up on the LOH and ultimately cause an OutOfMemoryException. I could not actually produce this error, so maybe I am wrong in my assumption. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the max allowed array size and/or the size of my chunks, but I hope there is another solution. My actual questions: Will my current solution create objects on the LOH, and will that cause me problems? Is there a better way to solve this? Btw.: on the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.
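
    A minimal sketch of the buffer-reuse idea (sendChunk is a hypothetical stand-in for the actual WCF call): a 1 MB byte[] is over the roughly 85 KB large-object threshold and so lands on the LOH either way, but wrapping an existing buffer in a MemoryStream copies nothing, so reusing one buffer means a single LOH allocation for the whole transfer instead of one per chunk.

        using System;
        using System.IO;

        class ChunkedSender
        {
            const int ChunkSize = 1024 * 1024; // 1 MB chunks, as in the question

            public static void SendFile(string path, Action<Stream> sendChunk)
            {
                byte[] buffer = new byte[ChunkSize]; // allocated once, reused for every chunk
                using (FileStream fs = File.OpenRead(path))
                {
                    int read;
                    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        // Wrap only the bytes actually read; this does not copy the array.
                        using (var ms = new MemoryStream(buffer, 0, read, false))
                        {
                            sendChunk(ms); // hypothetical transport call
                        }
                    }
                }
            }
        }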

  • What's the most efficient way to set up a multi-lingual website

    - by Jasper De Bruijn
    Hi, I'm developing a website that will be available in different languages. It is a LAMP (Linux, Apache, MySQL, PHP) setup, and it makes use of Smarty, mostly as the template engine. The way we currently translate is with a self-written Smarty plugin which recognizes certain tags in the HTML files and looks up the corresponding tag in a previously defined language file. The HTML could look as follows:

        <p>Hi, welcome to $#gamedesc;!</p>

    And the language file could look like this:

        gamedesc:Poing 2009$;
        welcome:this is another tag$;

    which would then output

        <p>Hi, welcome to Poing 2009!</p>

    This system is very basic, but it is pretty hard to control if, for example, I would like to keep track of what has been translated so far, or give certain users the right to translate only certain tags. I've been looking at some alternative ways to approach this, either by replacing the text file with XML files which could store some extra metadata, or perhaps by storing all the texts in the database and retrieving them from there. My question is: what would be the best way to make this system both maintainable and performant under high user traffic? Are there perhaps any (lightweight) plugins I could take a look at?

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files, and none of them seem to answer this question: where does the data go once you import it? Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    Now, there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was:

        Warning: the objects were exported by OVIEDOE, not by you
        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Now, it says it was terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?

  • Cheap cloning/local branching in Mercurial

    - by Zack
    Hi, I just started working with Mercurial a few days ago and there's something I don't understand. I have an experimental thing I want to do, so the normal approach would be to clone my repository, work on the clone, and if I eventually want to keep those changes, push them to my main repository. The problem is that cloning my repository takes a lot of time (we have a lot of code) and just compiling the cloned copy would take up to an hour. So I need to somehow work on a different repository but still in my original working copy. Enter local branches. The problem is that just creating a local branch takes forever, and working with them isn't all that fun either: because moving between local branches doesn't "revert" to the target branch's state, I have to issue hg purge (to remove files that were added in the branch I'm moving from) and then hg update -C (to revert files modified in the branch I'm moving from). (Note: I did try the PK11 fork of the local branch extension, but a simple local branch creation crashes with an exception.) At the end of the day, this is just too complex. What are my options?

  • Internationalizing a Python 2.6 application via Babel

    - by Malcolm
    We're evaluating Babel 0.9.5 [1] under Windows for use with Python 2.6 and have the following questions that we've been unable to answer through reading the documentation or googling. 1) I would like to use an _-like abbreviation for ungettext. Is there a consensus on whether one should use n_ or N_ for this? n_ does not appear to work: Babel does not extract the text. N_ appears to partially work: Babel extracts the text like it does for gettext, but does not format it for ngettext (the plural argument and the msgstr[n] entries are missing). 2) Is there a way to set the initial msgstr fields like the following when creating a POT file? I suspect there may be a way to do this via Babel cfg files, but I've been unable to find documentation on the Babel cfg file format.

        "Project-Id-Version: PROJECT VERSION\n"
        "Language-Team: en_US \n"

    3) Is there a way to preserve 'obsolete' msgid/msgstr pairs in our PO files? When I use the Babel update command, newly obsolete strings are marked with #~ prefixes, but existing obsolete message strings get deleted. Thanks, Malcolm

    [1] http://babel.edgewall.org/

  • Reading/writing DataTables to and from an OleDb database with LINQ

    - by jsmith
    My current project is to take information from an OleDb database and .CSV files and place it all into a larger OleDb database. I have currently read in all the information I need from both the .CSV files and the OleDb database into DataTables. Where it is getting hairy is writing all of the information back to another OleDb database. Right now my method is to do something like this:

        OleDbTransaction myTransaction = null;
        try
        {
            OleDbConnection conn = new OleDbConnection("PROVIDER=Microsoft.Jet.OLEDB.4.0;" +
                                                       "Data Source=" + Database);
            conn.Open();
            OleDbCommand command = conn.CreateCommand();
            string strSQL;
            command.Transaction = myTransaction;
            strSQL = "Insert into TABLE " +
                     "(FirstName, LastName) values ('" + FirstName + "', '" + LastName + "')";
            command.CommandType = CommandType.Text;
            command.CommandText = strSQL;
            command.ExecuteNonQuery();
            conn.Close();
        }
        catch (Exception)
        {
            // If invalid data is entered, roll back the database
            myTransaction.Rollback();
        }

    Of course, this is very basic, and I'm using an SQL command to commit my transactions to a connection. My problem is that I could do this, but I have about 200 fields that need to be inserted over several tables. I'm willing to do the legwork if that's the only way to go, but I feel like there is an easier method. Is there anything in LINQ that could help me out with this?
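
    As a point of comparison, here is a minimal sketch of the same insert done with parameters and an explicit transaction (the table and column names are the question's placeholders). OleDb uses positional ? markers, so for ~200 fields the command text and parameter list can be generated in a loop from each DataTable's columns; alternatively, an OleDbDataAdapter with an OleDbCommandBuilder can generate the InsertCommand and push a whole DataTable with adapter.Update(table).

        using System.Data;
        using System.Data.OleDb;

        static void InsertRow(string connectionString, string firstName, string lastName)
        {
            using (var conn = new OleDbConnection(connectionString))
            {
                conn.Open();
                using (OleDbTransaction tx = conn.BeginTransaction())
                using (OleDbCommand cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    // OleDb takes positional '?' placeholders, not named parameters.
                    cmd.CommandText = "INSERT INTO [TABLE] (FirstName, LastName) VALUES (?, ?)";
                    cmd.Parameters.AddWithValue("@p1", firstName);
                    cmd.Parameters.AddWithValue("@p2", lastName);
                    try
                    {
                        cmd.ExecuteNonQuery();
                        tx.Commit();
                    }
                    catch
                    {
                        tx.Rollback(); // undo on invalid data
                        throw;
                    }
                }
            }
        }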

  • Fastest way to crawl NTFS directories recursively in C++

    - by Peter Parker
    I have written a small crawler to scan and re-sort directory structures. It is based on dirent (which is a small wrapper around FindNextFileA). In my first benchmarks it is surprisingly slow: around 123473 ms for 4500 files (ThinkPad T60p, local Samsung 320 GB 2.5" HD):

        121481 files found in 123473 milliseconds

    Is this speed normal? This is my code:

        int testPrintDir(std::string strDir, std::string strPattern = "*", bool recurse = true) {
            struct dirent *ent;
            DIR *dir;
            dir = opendir(strDir.c_str());
            int retVal = 0;
            if (dir != NULL) {
                while ((ent = readdir(dir)) != NULL) {
                    if (strcmp(ent->d_name, ".") != 0 && strcmp(ent->d_name, "..") != 0) {
                        std::string strFullName = strDir + "\\" + std::string(ent->d_name);
                        std::string strType = "N/A";
                        bool isDir = (ent->data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0;
                        strType = (isDir) ? "DIR" : "FILE";
                        if (!isDir) {
                            //printf("%s <%s>\n", strFullName.c_str(), strType.c_str()); //ent->d_name);
                            retVal++;
                        }
                        if (isDir && recurse) {
                            retVal += testPrintDir(strFullName, strPattern, recurse);
                        }
                    }
                }
                closedir(dir);
                return retVal;
            } else {
                /* could not open directory */
                perror("DIR NOT FOUND!");
                return -1;
            }
        }

  • Response.Write only working in IE for ASP.NET

    - by slowlycooked
    I'm using uploadify (http://www.uploadify.com/) to upload video to my site, then convert it into *.flv using ffmpeg and play a preview. But it doesn't fully work with Firefox, Chrome or Safari. Uploadify provides an onComplete hook: when the server-side script (.ashx, .php) that saves the uploaded files writes a response, you can use Response.Write("blabla") (or echo "blabla") to invoke the JavaScript function registered as onComplete. I have tested a few video files like avi, mpg, mp4; they are all less than 50 MB, and they all worked with all four browsers. However, when I was trying to upload a 75 MB mp4 file, it worked in IE but not in the other three. I can see the .flv file has been created in the upload folder, and I can see the debug message output after Response.Write("blabla"), but the JavaScript function was not invoked, i.e. the preview didn't play. Anyone know why? Is there a timeout or something on Response.Write so that after a period of time it won't work? E.g. the 75 MB file took longer to convert than the other, smaller files I tried. Thanks
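
    For reference, a minimal sketch of what the .ashx side can look like (the class name and paths are illustrative, and "Filedata" is assumed to be the upload field name): the response that feeds onComplete is only written after the ffmpeg conversion finishes, so a long conversion delays it by minutes, which is the kind of window where a client-side or proxy timeout could behave differently between browsers.

        using System.Web;

        // Illustrative upload handler; field name "Filedata" and paths are assumptions.
        public class UploadHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                HttpPostedFile file = context.Request.Files["Filedata"];
                string savePath = context.Server.MapPath("~/uploads/" + file.FileName);
                file.SaveAs(savePath);

                // ... run the ffmpeg conversion here; for a 75 MB file this can take minutes ...

                // Whatever is written here is what the onComplete callback receives.
                context.Response.ContentType = "text/plain";
                context.Response.Write("blabla");
            }

            public bool IsReusable { get { return false; } }
        }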

  • How do I test if storage-conf is being loaded in Cassandra 0.7.3?

    - by user657253
    I have installed Cassandra and gotten it working on two machines, and I have followed the instructions to hook them up to each other by configuring the storage-conf.xml files. Both machines respond well to Thrift and to command-line cassandra. This is the tutorial I used to set up the storage-conf.xml files. The tutorial says that if I run netstat, I should NOT see Cassandra bound to 127.0.0.1 on my seed node; I should see it bound to my internal IP, which I have configured in storage-conf.xml. I have rebooted the servers and relaunched Cassandra. Still, I see the localhost address instead of the correct internal IP address. Is my .yaml file overriding the storage-conf.xml file? If so, how do I delete the appropriate things in the .yaml? Or how do I tell Cassandra to look for my storage-conf.xml file? A few things I have tried: renaming the cassandra.yaml file, in which case Cassandra will not load; if I rename storage-conf.xml instead, Cassandra does load. When I installed Cassandra, it did not come with a storage-conf.xml file; I had to grab it off the Apache wiki.

  • Threading.Timer asynchronously invokes many methods

    - by Dimitar
    Hi guys, please help! I create a Threading.Timer in Global.asax which invokes several methods, each of which gets data from a different service and writes it to files. My question is: how do I make the methods be invoked on a regular basis, let's say every 5 minutes? What I do is declare a timer in Global.asax:

        protected void Application_Start()
        {
            TimerCallback timerDelegate = new TimerCallback(myMainMethod);
            Timer mytimer = new Timer(timerDelegate, null, 0, 300000);
            Application.Add("timer", mytimer);
        }

    The declaration of myMainMethod looks like this:

        public static void myMainMethod(object obj)
        {
            MyDelegateType d1 = new MyDelegateType(getandwriteServiceData1);
            d1.BeginInvoke(null, null);
            MyDelegateType d2 = new MyDelegateType(getandwriteServiceData2);
            d2.BeginInvoke(null, null);
        }

    This approach works fine, but it invokes myMainMethod every 5 minutes. What I need is for the method to be invoked 5 minutes after all the data has been retrieved and written to files on the server. How do I do that?
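
    One common pattern (a sketch reusing the question's delegate and method names, which are assumed to exist elsewhere): make the timer one-shot by passing Timeout.Infinite as the period, wait for the async calls to finish with EndInvoke, and only then call Change to schedule the next run.

        using System.Threading;

        static Timer myTimer;
        const int IntervalMs = 300000; // 5 minutes

        static void Start()
        {
            // Fire once immediately; Timeout.Infinite disables automatic repetition.
            myTimer = new Timer(MainCallback, null, 0, Timeout.Infinite);
        }

        static void MainCallback(object state)
        {
            MyDelegateType d1 = new MyDelegateType(getandwriteServiceData1);
            MyDelegateType d2 = new MyDelegateType(getandwriteServiceData2);
            IAsyncResult r1 = d1.BeginInvoke(null, null);
            IAsyncResult r2 = d2.BeginInvoke(null, null);

            // EndInvoke blocks until each call has finished writing its file.
            d1.EndInvoke(r1);
            d2.EndInvoke(r2);

            // All data written: schedule the next one-shot run 5 minutes from now.
            myTimer.Change(IntervalMs, Timeout.Infinite);
        }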

  • Decrypting a string in C# 3.5 which was encrypted with OpenSSL in PHP 5.3.2

    - by panny
    Hi everyone, maybe someone can clear this up for me; I have been searching around on this for a while now. I used openssl from the console to create a root certificate (privatekey.pem, publickey.pem, mycert.pem, mycertprivatekey.pfx); see the end of this text for how. The problem is still to get a string encrypted on the PHP side to be decrypted on the C# side with RSACryptoServiceProvider. Any ideas?

    PHP side. I used publickey.pem and read it into PHP:

        $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem"));
        // rsa encrypt
        openssl_public_encrypt("123", $encrypted, $server_public_key);

    and privatekey.pem to check that it works:

        openssl_private_decrypt($encrypted, $decrypted,
            openssl_get_privatekey(file_get_contents("C:\privatekey.pem")));

    leading to the conclusion that encryption/decryption works fine on the PHP side with these openssl root certificate files.

    C# side. In the same manner, I read the keys into a .NET C# console program:

        X509Certificate2 myCert2 = new X509Certificate2();
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        try
        {
            myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx");
            rsa = (RSACryptoServiceProvider)myCert2.PrivateKey;
        }
        catch (Exception e)
        {
        }
        string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false));

    leading to the conclusion that encryption/decryption also works fine on the C# side with these openssl root certificate files.

    Key generation on Unix:

        1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem
        2) openssl rsa -in privatekey.pem -pubout -out publickey.pem
        3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"
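
    A sketch of the missing cross-language step, under two assumptions: the PHP side sends $encrypted base64-encoded, and both sides use the same key pair. openssl_public_encrypt pads with PKCS#1 v1.5 by default, which corresponds to passing false for the fOAEP argument of RSACryptoServiceProvider.Decrypt.

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        static string DecryptFromPhp(string base64Ciphertext)
        {
            // Private key from the PKCS#12 container produced by 'openssl pkcs12 -export'.
            var cert = new X509Certificate2(@"C:\mycertprivatekey.pfx");
            var rsa = (RSACryptoServiceProvider)cert.PrivateKey;

            byte[] ciphertext = Convert.FromBase64String(base64Ciphertext);
            // false = PKCS#1 v1.5 padding, matching openssl_public_encrypt's default.
            byte[] plaintext = rsa.Decrypt(ciphertext, false);
            return Encoding.UTF8.GetString(plaintext);
        }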

  • Python: Serial Transmission

    - by Silent Elektron
    I have an image stack of 500 images (JPEG) of 640x480. I intend to make a list of 500 pixels (the first pixel of each image) and then send that via COM1 to an FPGA, where I do my further processing. I have a couple of questions here: How do I import all 500 images into Python at once, and how do I store them? How do I send the 500-pixel list via COM1 to the FPGA? I tried the following: converted each JPEG image to intensity values (each pixel denoted by a number between 0 and 255) in MATLAB, saved the intensity values in a text file, and read that file using readlines(). But it became too cumbersome to make the intensity-value files for all 500 images! I then used NumPy to put the read files in a matrix and pick the first pixel of all images. But when I send it, it comes out like: [56, 61, 78, ..., 71, 91]. Is there a way to eliminate the [ ] and , while sending the data serially? Thanks in advance! :)

  • How to merge objects in PHP?

    - by The Devil
    Hey everybody, I'm currently rewriting a class which handles XML files. Depending on the XML file and its structure, I sometimes need to merge objects. Let's say once I have this:

        <page name="a title"/>

    and another time I have this:

        <page name="a title">
            <permission>administrator</permission>
        </page>

    Before, I needed only the attributes from the "page" element. That's why a lot of my code expects an object containing only the attributes ($loadedXml->attributes()). Now there are XML files in which the <permission> element is required. I did manage to merge the objects (though not as I wanted), but I can't access one of them (most probably it's something I'm missing). To merge the objects I used this code:

        (object) array_merge(
            (array) $loadedXml->attributes(),
            (array) $loadedXml->children()
        );

    This is what I get from print_r():

        stdClass Object
        (
            [@attributes] => Array
                (
                    [name] => a title
                )
            [permission] => Array
                (
                    [0] => administrator
                )
        )

    So now my question is: how do I access the @attributes member? Thanks in advance, The Devil

  • Powershell scripts to backup SQL, SVN

    - by bszom
    I'm trying to use PowerShell to create some backups and then copy them to a web folder (or, in other words, upload them to a WebDAV share). At first I thought I'd do the WebDAV stuff from within PowerShell, but it seems this still requires a fair amount of "manual labour", i.e. constructing HTTP requests. I then settled for creating a web folder from the script and letting Windows handle the WebDAV stuff. It seems that all it takes to create a web folder is to create a standard shortcut, as described here. What I can't figure out is how to actually copy files to the shortcut's target. Maybe I'm going about this the wrong way. It would be ideal if I could somehow encrypt the credentials for the WebDAV share in the script, then have it create the web folder, shunt over the files, and delete the web folder again. Or even better, not use a web folder at all. A third option would be to just create the web folder manually and leave it there, though I'd rather not. Any ideas/pointers/tips? :)

  • Ruby: UTF-8 IO

    - by subtenante
    I use Ruby 1.8.7. I am trying to parse text files containing Greek sentences, encoded in UTF-8. (I can't paste sample files here, because they are subject to copyright; it is really just some Greek text encoded in UTF-8.) I want, for each file, to parse it, extract all the words, and make a list of each new word found in that file, all saved to one big index file. Here is my code:

        #!/usr/bin/ruby -KU

        def prepare_line(l)
          l.gsub(/^\s*[ST]\d+\s*:\s*|\s+$|\(\d+\)\s*/u, "")
        end

        def tokenize(l)
          l.split /['·.;!:\s]+/u
        end

        $dict = {}
        $cpt = 0
        $out = File.new 'out.txt', 'w'

        def lesson(file)
          $cpt = $cpt + 1
          file.readlines.each do |l|
            $out.puts l
            l = prepare_line l
            tokenize(l).each do |t|
              unless $dict[t]
                $dict[t] = $cpt
                $out.puts " #{t}\n"
              end
            end
          end
        end

        Dir.new('etc/').each do |filename|
          f = File.new("etc/#{filename}")
          unless File.directory? f
            lesson f
          end
        end

    Here is part of my output (note that the lines echoed by the puts l part look fine at the end of the output line, while the tokenized words come out as mojibake):

        ?@???†?†?????????? ?...[snip very long hangul/hanzi mishmash]... ????????†? ???N2 : ?e?te?? (2) µ???µa

    Any idea what is wrong with my code? (General comments about Ruby idioms I could use are very welcome; I'm really a beginner.)

  • What the VC++ compiler/linker does when building a C++ project with Managed Extensions

    - by ???
    The initial problem is that I rebuilt a C++ project with debug symbols and copied it to a test machine. The output of the project is an external COM server (an .exe file). When calling the COM interface function, there's an RPC call failure: COMException(0x800706BE): The remote procedure call failed. According to the COM HRESULT design, if the facility code is 7, it's actually a Win32 error, and the Win32 error code here is 0x6BE, which is the above-mentioned "remote procedure call failed". All I did was replace the COM server .exe file; the original file works well. When I dug into the project, I found it's a C++ project with Managed Extensions. When checking the DLL with Reflector, it shows two additional .NET assembly references. I then checked the project settings and found nothing about the extra two assembly references. I turned on the compiler's show-includes option and the linker's verbose library output, and tried to analyze whether the assemblies are indirectly referenced via a .h file. I collected all the .h files and grepped them for '#using', '#import', and the assembly file names. There really is a '#using' in one of the .h files, but it is not relevant to the referenced assemblies. As for the linked .lib files, only one of them is a side product of another Managed-Extensions-enabled C++ project; all the others are produced by pure, traditional C++ projects. For that Managed-Extensions-enabled C++ project, I checked the output DLL assembly, and it does NOT reference the two assemblies. I even tried to capture access to the additional assembly files via Sysinternals' Filemon and Procmon, but the rebuild process does NOT access these files. I'm very confused about the compile and linking process model of a VC++/CLI project; where did the additional assembly references slip into the final assembly? Thanks in advance for your help.

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments, but there are several issues, and I am wondering if this is the right approach: there are literally hundreds of classes (the code file is half a meg in size); there are duplicate classes (e.g. Type and Type1, which both represent the same type); and there are classes declared as inheriting from a base class where that base class is not generated/defined. I understand that there are limitations to the kinds of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even the XmlSerializer. My question is two-fold. First, are code generation issues like these fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schemas? Second, given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML? Thank you very much! -Sasha Borodin DFWHC.org
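
    On the "middle ground" question, WCF does allow a contract to bypass generated types and work with the raw SOAP body; here is a minimal sketch (the service name and reply action are illustrative, not part of HR-XML or WCF):

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Xml;

        [ServiceContract]
        public interface IHrXmlEndpoint // illustrative name
        {
            // Untyped contract: WCF hands over the raw Message, no generated data contracts.
            [OperationContract(Action = "*", ReplyAction = "*")]
            Message Process(Message request);
        }

        public class HrXmlEndpoint : IHrXmlEndpoint
        {
            public Message Process(Message request)
            {
                // Read the SOAP body as plain XML, e.g. to validate against the HR-XML
                // schemas and compose a reply by hand instead of via generated classes.
                XmlDocument doc = new XmlDocument();
                using (XmlDictionaryReader reader = request.GetReaderAtBodyContents())
                {
                    doc.Load(reader);
                }
                // ... inspect doc, build a reply document ...
                return Message.CreateMessage(request.Version, "urn:reply",
                                             new XmlNodeReader(doc.DocumentElement));
            }
        }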
