Search Results

Search found 77950 results on 3118 pages for 'large file upload'.


  • C++: Best text accumulator

    - by MInner
    Text accumulates piecemeal before being sent to the client. Currently we use our own class that allocates memory for each piece as a char array (effectively it works like char[][] + std::list<char*>). Then we build the whole string, convert it into a std::string, and create a boost::asio::streambuf from it. That is slow, I assume; correct me if I'm wrong. I know that in many cases the plain FILE type from stdio.h is used instead. How does it work? Does it allocate memory on every write into it? So, is it faster, and is there any way to read into a boost::asio::streambuf from a FILE?
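    A minimal sketch of the simpler route, assuming a rough upper bound on the total size is known: append every piece to a single std::string (amortized growth avoids the per-piece allocations of a chunk list), then copy it once into the streambuf through its std::ostream interface.

        #include <iostream>
        #include <string>
        #include <boost/asio.hpp>

        int main()
        {
            std::string accumulated;
            accumulated.reserve(64 * 1024);   // assumption: rough upper bound is known
            accumulated += "piece one, ";
            accumulated += "piece two";

            // boost::asio::streambuf derives from std::streambuf, so a plain
            // std::ostream can fill it; the data is copied once.
            boost::asio::streambuf buf;
            std::ostream os(&buf);
            os << accumulated;

            std::cout << buf.size() << " bytes ready to send\n";
            return 0;
        }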

  • html5 uploader + jquery drag & drop: how to store file data with FormData?

    - by lauthiamkok
    I am making an HTML5 drag-and-drop uploader with jQuery; below is my code so far. The problem is that I get an empty array without any data. Is this line incorrect for storing the file data: fd.append('file', $thisfile);?

        $('#div').on('dragover', function(e) {
            e.preventDefault();
            e.stopPropagation();
        });
        $('#div').on('dragenter', function(e) {
            e.preventDefault();
            e.stopPropagation();
        });
        $('#div').on('drop', function(e) {
            if (e.originalEvent.dataTransfer) {
                if (e.originalEvent.dataTransfer.files.length) {
                    e.preventDefault();
                    e.stopPropagation();
                    // The file list.
                    var fileList = e.originalEvent.dataTransfer.files;
                    //console.log(fileList);
                    // Loop the ajax post.
                    for (var i = 0; i < fileList.length; i++) {
                        var $thisfile = fileList[i];
                        console.log($thisfile);
                        // HTML5 form data object.
                        var fd = new FormData();
                        //console.log(fd);
                        fd.append('file', $thisfile);
                        /*
                        var file = {name: fileList[i].name, type: fileList[i].type, size: fileList[i].size};
                        $.each(file, function(key, value) {
                            fd.append('file[' + key + ']', value);
                        })
                        */
                        $.ajax({
                            url: "upload.php",
                            type: "POST",
                            data: fd,
                            processData: false,
                            contentType: false,
                            success: function(response) {
                                // .. do something
                            },
                            error: function(jqXHR, textStatus, errorMessage) {
                                console.log(errorMessage); // Optional
                            }
                        });
                    }
                    /* UPLOAD FILES HERE */
                    upload(e.originalEvent.dataTransfer.files);
                }
            }
        });

        function upload(files) {
            console.log('Upload ' + files.length + ' File(s).');
        }

    Another method I tried is to pack the file data into an array inside the jQuery code:

        var file = {name: fileList[i].name, type: fileList[i].type, size: fileList[i].size};
        $.each(file, function(key, value) {
            fd.append('file[' + key + ']', value);
        });

    but where is the tmp_name data inside e.originalEvent.dataTransfer.files[i]? PHP:

        print_r($_POST);
        $uploaddir = './uploads/';
        $file = $uploaddir . basename($_POST['file']['name']);
        if (move_uploaded_file($_POST['file']['tmp_name'], $file)) {
            echo "success";
        } else {
            echo "error";
        }

    As you can see, tmp_name is needed to upload the file via PHP. HTML:

        <div id="div">Drop here</div>
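    A note on the PHP side, independent of the jQuery code: a file appended to FormData arrives in PHP's $_FILES superglobal, not in $_POST, and that is where tmp_name lives. A minimal sketch of upload.php under that assumption:

        <?php
        // Hedged sketch: FormData file parts land in $_FILES, keyed by the
        // name passed to fd.append('file', ...). $_POST stays empty here.
        $uploaddir = './uploads/';

        if (isset($_FILES['file'])) {
            $dest = $uploaddir . basename($_FILES['file']['name']);
            if (move_uploaded_file($_FILES['file']['tmp_name'], $dest)) {
                echo "success";
            } else {
                echo "error";
            }
        } else {
            echo "no file received";
        }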

  • Accessing Password Protected Network Drives in Windows in C#?

    - by tkeE2036
    Hi everyone. In C# I am trying to access a file on a network share, for example at "//applications/myapp/test.txt", as follows:

        const string fileLocation = @"//applications/myapp/test.txt";
        using (StreamReader fin = new StreamReader(fileLocation))
        {
            while (!fin.EndOfStream)
            {
                // Do some cool stuff with the file
            }
        }

    However I get the following error:

        System.IO.IOException : Logon failure: unknown user name or bad password.

    I figure it's because I need to supply some network credentials, but I'm not sure how to get those to work in this situation. Does anyone know the best way (or any way) to gain access to files that are in a password-protected location? Thanks in advance!
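    One common route, sketched under the assumption that the share accepts an explicit logon: connect first with the Win32 WNetAddConnection2 API from mpr.dll, then open the file normally. The share path, user name, and password below are placeholders.

        using System;
        using System.IO;
        using System.Runtime.InteropServices;

        class NetworkFileRead
        {
            [StructLayout(LayoutKind.Sequential)]
            class NetResource
            {
                public int Scope, ResourceType, DisplayType, Usage;
                public string LocalName, RemoteName, Comment, Provider;
            }

            [DllImport("mpr.dll")]
            static extern int WNetAddConnection2(NetResource netResource,
                                                 string password, string username, int flags);

            const int RESOURCETYPE_DISK = 1;

            static void Main()
            {
                // Placeholders: substitute the real share, user, and password.
                var share = new NetResource
                {
                    ResourceType = RESOURCETYPE_DISK,
                    RemoteName = @"\\applications\myapp"
                };
                int err = WNetAddConnection2(share, "p@ssw0rd", @"DOMAIN\user", 0);
                if (err != 0)
                    throw new IOException("WNetAddConnection2 failed, error " + err);

                using (var fin = new StreamReader(@"\\applications\myapp\test.txt"))
                {
                    while (!fin.EndOfStream)
                        Console.WriteLine(fin.ReadLine());
                }
            }
        }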

  • Determining idle network transfer bandwidth

    - by rwmnau
    I'm building an application that will move around some potentially large files, but I want to do it without disturbing the user's network connection by flooding it. I know that Windows BITS has this kind of functionality, and that's essentially what I'm looking to replicate (as far as the throttling goes). I know BITS has other functionality as well that I'm not interested in, and I also have the option to consume it from .NET, but I'm interested in how it works. I've looked online, and I haven't found a clear explanation of how exactly BITS determines how much bandwidth to consume, aside from a vague "BITS polls activity to watch for a drop in the bandwidth used by other programs." What does this mean? Bandwidth consumed by other programs can drop for a number of other reasons as well; can BITS tell the difference? If I were looking for a process that replicated this "stay just under the radar, where the user won't notice the transfers" functionality, how would I go about doing it?
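    For what it's worth, a much simpler application-level approximation of the throttling, sketched in C# and making no claim to match BITS's actual adaptive algorithm, is a byte-budget loop that sleeps whenever the copy runs ahead of a configured rate:

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading;

        static class ThrottledCopy
        {
            // Cap the transfer at maxBytesPerSec by sleeping whenever we run
            // ahead of the byte budget. Crude but effective for a fixed ceiling.
            public static void Copy(Stream src, Stream dst, long maxBytesPerSec)
            {
                var buffer = new byte[64 * 1024];
                var clock = Stopwatch.StartNew();
                long sent = 0;
                int n;
                while ((n = src.Read(buffer, 0, buffer.Length)) > 0)
                {
                    dst.Write(buffer, 0, n);
                    sent += n;
                    // Seconds this many bytes *should* have taken at the target rate.
                    double scheduledSecs = (double)sent / maxBytesPerSec;
                    double aheadMs = (scheduledSecs - clock.Elapsed.TotalSeconds) * 1000.0;
                    if (aheadMs > 1) Thread.Sleep((int)aheadMs);
                }
            }

            static void Main()
            {
                var src = new MemoryStream(new byte[256 * 1024]);
                Copy(src, Stream.Null, 128 * 1024);   // ~2 seconds at 128 KB/s
                Console.WriteLine("done");
            }
        }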

  • How to retrieve files/documents that are not stored on the web server machine?

    - by jhorton
    I am trying to create an export feature for a user to be able to download documents into a zip file. I have the feature working when the files are on my local machine and I can use an absolute path. But after talking to the infrastructure team, I found out that the documents are not stored on the same machine as the web server, but on a server farm hosted off-site. I can query the database for a file path, but the path it returns is relative rather than absolute. So can anyone help me understand how to use FileInfo to get files from another machine? I believe the infrastructure team said there is a virtual drive set up to the outside server. Am I able to use a virtual path somehow? Thanks.
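    If the virtual drive or UNC share is reachable from the web server, the fix may be as small as rooting the database's relative path under that base; a sketch with placeholder paths:

        using System;
        using System.IO;

        class Locate
        {
            static void Main()
            {
                // Hedged sketch: hypothetical share (or the mapped virtual drive letter).
                string uncBase = @"\\fileserver\documents";
                // Relative fragment as returned by the database query.
                string relativePath = @"2010\04\contract.pdf";

                var info = new FileInfo(Path.Combine(uncBase, relativePath));
                Console.WriteLine(info.Exists
                    ? info.FullName                 // ready to add to the zip
                    : "not found: " + info.FullName);
            }
        }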

  • Client validation of INPUT of type FILE without postback using jQuery

    - by Fixer
    I want to check on the client side that a file has been selected before the form can be submitted.

        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>

        @using (Html.BeginForm("Upload", "Files", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input id="File" name="File" type="file" size="80" />
            <input type="submit" name="name" value="Upload" />
        }

    Currently this form is doing postbacks for validation. What is going wrong?
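    One way to sidestep the unobtrusive-validation wiring entirely, sketched in plain jQuery: intercept submit and block the postback while the file input is empty.

        // Hedged sketch: block submission client-side until a file is chosen.
        $(function () {
            $('form').submit(function (e) {
                if ($.trim($('#File').val()) === '') {
                    e.preventDefault();   // stop the postback
                    alert('Please select a file before uploading.');
                }
            });
        });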

  • Missing archetype.xml in new archetype created from project.

    - by sdoca
    Hi, I am using Maven 2.2.1 and am new to Maven. I am trying to create an archetype based on an existing project, using this blog post as a guide: http://blog.inflinx.com/category/m2eclipse/ Step 7 says: "The next step is to verify that the generated archetype.xml (located under src/main/resources/META-INF/maven) has all the files listed and they are located in the right directories." My generated archetype does not have an archetype.xml. It does have an archetype-metadata.xml file, though. There aren't any errors in the output from calling archetype:create-from-project, just a couple of warnings about using platform encoding (Cp1252). Any thoughts on why this file wasn't generated? Thanks in advance!
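    For what it's worth, archetype-metadata.xml is almost certainly the answer rather than a symptom: newer versions of the archetype plugin emit that file as the current descriptor format, replacing the legacy archetype.xml, so the file list should be verified there instead. A hedged sketch of what that descriptor looks like:

        <!-- Hedged sketch of src/main/resources/META-INF/maven/archetype-metadata.xml;
             the name and file sets are illustrative, not generated output. -->
        <archetype-descriptor name="my-archetype">
          <fileSets>
            <fileSet filtered="true" packaged="true" encoding="UTF-8">
              <directory>src/main/java</directory>
              <includes>
                <include>**/*.java</include>
              </includes>
            </fileSet>
          </fileSets>
        </archetype-descriptor>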

  • sed - trying to replace first occurrence after a match

    - by wakkaluba
    I am facing a situation that drives me nuts. I am setting up an update server which uses a JSON file. Don't ask why or how; it sucks and is my only way to achieve this. I have been trying and researching for HOURS (many) because I went ballistic and wanted to crack this on my own, but I have to admit I am stuck and need help. Sorry for the chunk below, but I think it is important to see. The file is a one-liner repeating the following sequence with changing values (of course):

        "plugin_name_foo_bar": {"buildDate": "bla", "dependencies": [{"name": "bla", "optional": true, "version": "1.00"}], "developers": [{"developerId": "bla", "email": "[email protected]", "name": "Bla bla2nd"}], "excerpt": "some text {excerpt} !bla.png|thumbnail,border=1! ", "gav": "bla", "labels": ["report", "scm-related"], "name": "plugin_name_foo_bar", "previousTimestamp": "bla", "previousVersion": "1.0", "releaseTimestamp": "bla", "requiredCore": "1", "scm": "github.com", "sha1": "ynnBM2jWo25ZLDdP3ybBOnV/Pio=", "title": "bla", "url": "http://bla.org", "version": "1.0", "wiki": "https://bla.org"}, "Exclusion": {"buildDate": "bla", "dependencies": [],

    and the next plugin block is glued on straight afterwards. What I want to do is search for "plugin_name_foo_bar": { as the unique identifier for a new plugin description block, and replace the first sha1 value occurring after it. That's where I keep failing: I always grab the first, last, or some other occurrence in the entire file, never the one in that block. "title" is the unique identifier after the sha1 value, so I tried to make the .* less greedy, but it isn't working out. My last attempt was:

        sed -i 's/("name": "plugin_name_foo_bar.*sha1": ")([a-zA-Z0-9!@#\$%^&*()\[\]]*)(", "title"\)/\1blablabla\2/1' default.json

    but still no joy. I hope someone knows a (preferably simpler) approach before I continue with trial and error until I have to puke and freak out. I am working with sed on Windows, so a Unix approach might help me figure out how to achieve this in batch, but please make it a one-liner if possible; scripts are a real pain to convert. And I just need sed, no other tools like awk. That is absolutely out of the discussion. Any help is appreciated :) Cheers Jan
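    A hedged sketch of one way through, assuming GNU sed: sed has no non-greedy quantifier, but a guard group that is never allowed to consume the start of the literal sha1 emulates one, so the match is pinned to the first sha1 after the plugin identifier (NEWSHA1VALUE is a placeholder):

        # GNU sed: the guard \([^s]\|s[^h]\|sh[^a]\|sha[^1]\)* matches any run of text
        # that does not contain "sha1", which is what ".*" cannot be told to do here;
        # \1 then restores everything up to and including the value's opening quote.
        sed -i 's/\("plugin_name_foo_bar": {\([^s]\|s[^h]\|sh[^a]\|sha[^1]\)*"sha1": "\)[^"]*/\1NEWSHA1VALUE/' default.json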

  • Java: calculate line numbers from character positions according to the number of "\n" characters

    - by HH
    I know the character-wise positions of matches, e.g. 1 3 7 8, and I need to know their corresponding line numbers. Example: for file.txt with match character X at positions 1 3 7 8, I want the line numbers 1 2 4 4.

        $ cat file.txt
        X2
        X
        4
        56XX

    [Added: my attempt below does not notice multiple matches on one line; there is probably an easier way to do it with stacks]

        $ java testt
        1
        2
        4

        $ cat testt.java
        import java.io.*;
        import java.util.*;

        public class testt {
            public static String data = "X2\nX\n4\n56XX";
            public static String[] ar = data.split("\n");

            public static void main(String[] args) {
                HashSet<Integer> hs = new HashSet<Integer>();
                Integer numb = 1;
                for (String s : ar) {
                    if (s.contains("X")) {
                        hs.add(numb);
                        numb++;
                    } else {
                        numb++;
                    }
                }
                for (Integer i : hs) {
                    System.out.println(i);
                }
            }
        }
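    A sketch of the direct computation, using 0-based match offsets: an offset's 1-based line number is one plus the count of '\n' characters before it, and counting per offset (rather than per line) naturally reports line 4 twice for the two X's on the last line.

        public class LineOfOffset {
            // 1-based line number of a 0-based character offset.
            static int lineOf(String text, int offset) {
                int line = 1;
                for (int i = 0; i < offset; i++) {
                    if (text.charAt(i) == '\n') line++;
                }
                return line;
            }

            public static void main(String[] args) {
                String data = "X2\nX\n4\n56XX";
                int[] matchOffsets = {0, 3, 9, 10};   // 0-based positions of each 'X'
                for (int off : matchOffsets) {
                    System.out.println(lineOf(data, off));   // prints 1 2 4 4
                }
            }
        }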

  • representing an XML config file with an IXmlSerializable class

    - by Sarah Vessels
    I'm writing in C# and trying to represent an XML config file through an IXmlSerializable class. I'm unsure how to represent the nested elements in my config file, though, such as logLevel:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <logging>
            <logLevel>Error</logLevel>
          </logging>
          <credentials>
            <user>user123</user>
            <host>localhost</host>
            <password>pass123</password>
          </credentials>
          <credentials>
            <user>user456</user>
            <host>my.other.domain.com</host>
            <password>pass456</password>
          </credentials>
        </configuration>

    There is an enum called LogLevel that represents all the possible values for the logLevel tag. The tags within credentials should all come out as strings. In my class, called DLLConfigFile, I had the following:

        [XmlElement(ElementName="logLevel", DataType="LogLevel")]
        public LogLevel LogLevel;

    However, this isn't going to work because <logLevel> isn't within the root node of the XML file; it's one node deeper, in <logging>. How do I go about doing this? As for the <credentials> nodes, my guess is I will need a second class, say CredentialsSection, and a property such as the following:

        [XmlElement(ElementName="credentials", DataType="CredentialsSection")]
        public CredentialsSection[] AllCredentials;

    Edit: okay, I tried Robert Love's suggestion and created a LoggingSection class. However, my test fails:

        var xs = new XmlSerializer(typeof(DLLConfigFile));
        using (var stream = new FileStream(_configPath, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            using (var streamReader = new StreamReader(stream))
            {
                XmlReader reader = new XmlTextReader(streamReader);
                var file = (DLLConfigFile)xs.Deserialize(reader);
                Assert.IsNotNull(file);
                LoggingSection logging = file.Logging;
                Assert.IsNotNull(logging); // fails here
                LogLevel logLevel = logging.LogLevel;
                Assert.IsNotNull(logLevel);
                Assert.AreEqual(EXPECTED_LOG_LEVEL, logLevel);
            }
        }

    The config file I'm testing with definitely has <logging>. Here's what the classes look like:

        [Serializable]
        [XmlRoot("logging")]
        public class LoggingSection : IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            [XmlElement(ElementName="logLevel", DataType="LogLevel")]
            public LogLevel LogLevel;

            public void ReadXml(XmlReader reader)
            {
                LogLevel = (LogLevel)Enum.Parse(typeof(LogLevel), reader.ReadString());
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteString(Enum.GetName(typeof(LogLevel), LogLevel));
            }
        }

        [Serializable]
        [XmlRoot("configuration")]
        public class DLLConfigFile : IXmlSerializable
        {
            [XmlElement(ElementName="logging", DataType="LoggingSection")]
            public LoggingSection Logging;
        }
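    For what it's worth, if nothing else forces IXmlSerializable here, plain attribute mapping handles both the nesting and the repeated <credentials> elements without any hand-written ReadXml. A hedged sketch (the LogLevel enum members are assumed):

        using System;
        using System.IO;
        using System.Xml.Serialization;

        // Hedged sketch: enum members assumed; adjust to the real LogLevel values.
        public enum LogLevel { Debug, Info, Warning, Error }

        public class LoggingSection
        {
            [XmlElement("logLevel")]
            public LogLevel LogLevel;
        }

        public class CredentialsSection
        {
            [XmlElement("user")]     public string User;
            [XmlElement("host")]     public string Host;
            [XmlElement("password")] public string Password;
        }

        [XmlRoot("configuration")]
        public class DLLConfigFile
        {
            [XmlElement("logging")]
            public LoggingSection Logging;

            // Repeated sibling <credentials> elements map directly onto an array.
            [XmlElement("credentials")]
            public CredentialsSection[] AllCredentials;
        }

        class Demo
        {
            static void Main()
            {
                var xs = new XmlSerializer(typeof(DLLConfigFile));
                using (var stream = File.OpenRead("config.xml"))
                {
                    var file = (DLLConfigFile)xs.Deserialize(stream);
                    Console.WriteLine(file.Logging.LogLevel);       // Error
                    Console.WriteLine(file.AllCredentials.Length);  // 2
                }
            }
        }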

  • fprintf() within a subprogram

    - by sergio
    I'm stuck trying to write to my file from within my subprogram:

        void new_page(float *a, float *b, float *c, int *d)
        {
            fprintf(results, "\nPage Totals: %f\t%f\t%f\t%d", *a, *b, *c, *d);
        }

    I get a warning and an error:

        warning: incompatible implicit declaration of built-in function 'fprintf' [enabled by default]
        error: 'results' undeclared (first use in this function)

    In main, fprintf works fine; it's just in the subprogram/function that it won't work. From my understanding, the compiler thinks that results is undeclared, so do I have to pass the name or location of the file to make it work?
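    A sketch of the usual fix: the warning means <stdio.h> is not included in this translation unit, and the error means `results` is only visible inside main, so include the header and pass the FILE* in as a parameter (or make it global).

        #include <stdio.h>

        /* Hedged sketch: pass the FILE* in, since `results` is local to main. */
        void new_page(FILE *results, const float *a, const float *b,
                      const float *c, const int *d)
        {
            fprintf(results, "\nPage Totals: %f\t%f\t%f\t%d", *a, *b, *c, *d);
        }

        int main(void)
        {
            FILE *results = fopen("results.txt", "w");
            if (results == NULL)
                return 1;

            float a = 1.0f, b = 2.0f, c = 3.0f;
            int d = 4;
            new_page(results, &a, &b, &c, &d);

            fclose(results);
            return 0;
        }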

  • R problem with apply + rbind

    - by Carl
    I cannot seem to get the following to work:

        directory <- "./"
        files.15x16 <- c("15x16-70d.out", "15x16-71d.out")
        data.15x16 <- rbind(
            lapply(
                as.array(paste(directory, files.15x16, sep="")),
                FUN=read.csv, sep=" ", header=F
            )
        )

    What it should be doing is pretty straightforward: I have a directory name, some file names, and actual files of data. I paste the directory and file names together, read the data in from the files, and then rbind them all together into a single chunk of data. Except the result of the lapply has the data in [[]] form; i.e., accessing it happens via a[[1]], a[[2]], etc., which rbind doesn't seem to accept. Suggestions?
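    A sketch of the standard idiom: lapply() returns a list, and rbind() wants the data frames as separate arguments, which is exactly what do.call() arranges (the as.array() wrapper is unnecessary).

        # Hedged sketch: do.call() spreads the list from lapply() out as the
        # individual arguments that rbind() expects.
        directory   <- "./"
        files.15x16 <- c("15x16-70d.out", "15x16-71d.out")

        data.15x16 <- do.call(
          rbind,
          lapply(paste(directory, files.15x16, sep = ""),
                 read.csv, sep = " ", header = FALSE)
        )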

  • External html, for headers/navs?

    - by J2
    Hey all, looking to see if there's a way to easily include an external HTML file inside the body to repeat on every page, for instance a navigation bar or something. Basically, if I have to add a menu item, I currently have to go and paste all the code into each page. Even a way to section parts off in Dreamweaver to update based on a file would work, I suppose. I've done it before through JavaScript with document.write and such, but it was very annoying, and probably a terrible way to go about it. Thanks for your help!
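    If the pages can run script, one tidy route is jQuery's .load(), sketched below with an assumed placeholder div and nav.html fragment; where the host supports it, a server-side include (SSI or PHP include) does the same job with no JavaScript at all.

        // Hedged sketch: pull a shared fragment into every page with jQuery.
        // Assumes each page has <div id="nav"></div> and a sibling nav.html file.
        $(function () {
            $('#nav').load('nav.html');
        });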

  • Performance problems when loading local JSON via <script> elements in IE8

    - by Jens Bannmann
    I have a web page with some JS scripts that need to work locally, e.g. from hard disk or a CD-ROM. The scripts load JSON data from files by inserting <script> tags. This worked fine in IE6, but in IE8 it now takes an enormous amount of time: it went from "instantly" to 3-10 seconds. The main data file is 45 KB. How can I solve this? I would switch from <script> tags to another method of loading JSON (ideally involving the new native JSON parser), but it seems locally loaded content cannot access the XMLHttpRequest object. Any ideas?

  • How to buffer stdout in memory and write it from a dedicated thread

    - by NickB
    I have a C application with many worker threads. It is essential that these do not block, so where the worker threads need to write to a file on disk, I have them write to a circular buffer in memory instead, with a dedicated thread for writing that buffer to disk. The worker threads no longer block. The dedicated thread can safely block while writing to disk without affecting the worker threads (it does not hold a lock while writing to disk). My memory buffer is tuned to be sufficiently large that the writer thread can keep up. This all works great. My question is: how do I implement something similar for stdout? I could wrap printf() in a macro that writes into a memory buffer, but I don't have control over all the code that might write to stdout (some of it is in third-party libraries). Thoughts? NickB
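    One POSIX-flavored sketch of that idea, assuming the platform has pipes: swap a pipe in behind stdout with dup2(), so every printf in the process (third-party code included) lands in the pipe, and let a dedicated thread drain it. One caveat: a worker can still stall if the pipe fills faster than the drain thread empties it.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        static int pipe_fds[2];

        static void *drain_thread(void *arg)
        {
            char buf[4096];
            ssize_t n;
            int real_stdout = *(int *)arg;
            /* A real program would append to the circular buffer here instead. */
            while ((n = read(pipe_fds[0], buf, sizeof buf)) > 0)
                write(real_stdout, buf, (size_t)n);
            return NULL;
        }

        int main(void)
        {
            int real_stdout = dup(STDOUT_FILENO);   /* keep a handle on the terminal */
            pipe(pipe_fds);
            dup2(pipe_fds[1], STDOUT_FILENO);       /* all stdout now enters the pipe */
            setvbuf(stdout, NULL, _IOLBF, 0);       /* line-buffer printf output */

            pthread_t t;
            pthread_create(&t, NULL, drain_thread, &real_stdout);

            printf("workers can print without blocking on the disk or terminal\n");
            fflush(stdout);
            sleep(1);                               /* crude: let the drain thread run */
            return 0;
        }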

  • Trying to programmatically add a SQL Alias to the registry (just need help with parameters)

    - by nycgags
    Normally when we spin up a new instance, we need to add an alias on one of our boxes (via SQL Server Configuration Manager) so we can easily connect to it through SSMS. I have written a batch file which adds the appropriate entry into the registry (actually two batch files, one for 32-bit and one for 64-bit). I am not sure how to get this to run with parameters, though. I know %1 and %2 should be the first two parameters, but when I run this, the registry actually gets the literal %1 and %2 as the value pair. If you hardcode the hostname and IP address in place of %1 and %2, the batch file works as expected:

        REGEDIT4
        ; @ECHO OFF
        ; CLS
        ; REGEDIT.EXE /S "%~f0"
        ; EXIT

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo]
        "%1"="DBMSSOCN,%2,1433"
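    The likely cause: REGEDIT /S reads the file's contents verbatim, so the %1/%2 placeholders in the .reg payload are never expanded by the batch processor. A hedged alternative is to drop the embedded .reg payload and let the batch file call REG ADD, which does see its arguments:

        @ECHO OFF
        REM Hedged sketch: %1 = alias name, %2 = IP address. On 64-bit machines the
        REM Wow6432Node path may also be needed so 32-bit tools see the alias.
        REG ADD "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" ^
            /v "%~1" /t REG_SZ /d "DBMSSOCN,%~2,1433" /f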

  • In what circumstances can large pages produce a speedup?

    - by timday
    Modern x86 CPUs support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of page-table entries needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.
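    A concrete way to experiment on Linux, sketched with an explicit huge-page mapping; this assumes huge pages have been reserved first, e.g. by writing to /proc/sys/vm/nr_hugepages:

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 2 * 1024 * 1024;   /* one 2MB huge page */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (p == MAP_FAILED) {
                perror("mmap(MAP_HUGETLB)");   /* fails if no huge pages reserved */
                return 1;
            }
            memset(p, 0, len);   /* touch the memory so the page is really backed */
            munmap(p, len);
            return 0;
        }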

  • Force blocking read after EAGAIN?

    - by Daniel Trebbien
    I have a file descriptor that is open for reading and may be non-blocking. What I need to do is simply read all data until reaching EOF and write that data to a writable file descriptor. As the process that performs this copying is not "aware" of anything that is going on around it, I don't think I can do something useful while waiting for data, and I don't want a loop that spins while errno is EAGAIN because I think that would be wasteful. Is there a way to block or otherwise suspend execution of the copying process until data becomes available?
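    Two standard options: either clear O_NONBLOCK with fcntl() so read() blocks again, or keep the descriptor non-blocking and park in poll() whenever read() reports EAGAIN. The latter, as a minimal sketch:

        #include <errno.h>
        #include <poll.h>
        #include <unistd.h>

        /* Hedged sketch: on EAGAIN, sleep in poll() until the descriptor is
         * readable again instead of spinning. */
        ssize_t copy_until_eof(int in_fd, int out_fd)
        {
            char buf[4096];
            for (;;) {
                ssize_t n = read(in_fd, buf, sizeof buf);
                if (n > 0) {
                    write(out_fd, buf, (size_t)n);   /* real code loops on short writes */
                } else if (n == 0) {
                    return 0;                        /* EOF */
                } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                    struct pollfd p = { .fd = in_fd, .events = POLLIN };
                    poll(&p, 1, -1);                 /* block until readable; no busy wait */
                } else {
                    return -1;                       /* genuine error */
                }
            }
        }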

  • How do I test against a large number of regular expressions quickly and know which one matched?

    - by Jack
    I'm writing a program in .NET where the user may provide a large number of regular expressions. For a given string, I need to figure out which regular expression matches it (if more than one matches, I just need the first one that matches). However, if there are a large number of regular expressions, this operation can take a very long time. I was somewhat hoping there would be something similar to flex for .NET that would allow me to specify a large number of regular expressions yet quickly (O(n) according to Wikipedia, for n = len(input string)) figure out which one matches. Also, I would prefer not to implement my own regular expression engine :).
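    One pragmatic sketch that avoids writing an engine, with the caveat that it keeps .NET's backtracking semantics rather than flex's linear-time DFA guarantee: fold all patterns into a single alternation, one named group per pattern, so one Match() call identifies which pattern fired (the patterns below are illustrative).

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class FirstMatch
        {
            static void Main()
            {
                string[] patterns = { @"^\d+$", @"^[a-z]+$", @"^\w+@\w+\.\w+$" };

                // One alternation, one named group per original pattern.
                string combined = string.Join("|",
                    patterns.Select((p, i) => string.Format("(?<p{0}>{1})", i, p)));
                var regex = new Regex(combined, RegexOptions.Compiled);

                Match m = regex.Match("hello");
                if (m.Success)
                {
                    // The group that succeeded tells us which pattern matched.
                    int which = Enumerable.Range(0, patterns.Length)
                                          .First(i => m.Groups["p" + i].Success);
                    Console.WriteLine("matched pattern #" + which);   // #1
                }
            }
        }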

  • C#, Fastest (Best?) Method of Identifying Duplicate Files in an Array of Directories

    - by Nate Greenwood
    I want to recurse several directories and find duplicate files between the n number of directories. My knee-jerk idea is to have a global hashtable or some other data structure to hold each file I find, then check each subsequent file against the "master" list of files. Obviously, I don't think this would be very efficient, and the "there's got to be a better way!" keeps ringing in my brain. Any advice on a better way to handle this situation would be appreciated.
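    A common refinement, sketched here with LINQ under the assumption that content hashing is acceptable: group files by size first, which is free metadata, and hash only the size groups that contain more than one file, so most unique files are never read at all.

        using System;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;

        class DupFinder
        {
            static string HashOf(string path)
            {
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(path))
                    return BitConverter.ToString(sha.ComputeHash(stream));
            }

            static void Main(string[] args)   // args = directories to scan
            {
                var files = args.SelectMany(dir =>
                    Directory.EnumerateFiles(dir, "*", SearchOption.AllDirectories));

                var duplicates = files
                    .GroupBy(f => new FileInfo(f).Length)   // pass 1: size (cheap)
                    .Where(g => g.Count() > 1)
                    .SelectMany(g => g.GroupBy(HashOf))     // pass 2: content hash
                    .Where(g => g.Count() > 1);

                foreach (var group in duplicates)
                    Console.WriteLine(string.Join(" == ", group));
            }
        }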

  • How to create custom filenames in C?

    - by eSKay
    Please see this piece of code:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main()
        {
            int i = 0;
            FILE *fp;
            for (i = 0; i < 100; i++) {
                fp = fopen("/*what should go here??*/", "w");
                // I need to create files with names: file0.txt, file1.txt, file2.txt etc.
                // i.e. file{i}.txt
            }
        }
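    A minimal sketch of the missing piece: format each name into a buffer with snprintf and hand that to fopen (and close each handle, since the original loop would otherwise leak 100 of them).

        #include <stdio.h>

        int main(void)
        {
            char name[32];
            int i;
            for (i = 0; i < 100; i++) {
                /* builds file0.txt, file1.txt, ..., file99.txt */
                snprintf(name, sizeof name, "file%d.txt", i);
                FILE *fp = fopen(name, "w");
                if (fp == NULL) {
                    perror(name);
                    return 1;
                }
                fclose(fp);   /* close each file; open handles are a finite resource */
            }
            return 0;
        }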

  • Is file_get_contents multi-line?

    - by Abs
    Hello all, does file_get_contents maintain line breaks? I thought it did, but I have tried this:

        if ($conn) {
            $tsql = file_get_contents('scripts/CreateTables/SLR05_MATCH_CREATETABLES.sql');
            $row = sqlsrv_query($conn, $tsql);
            print_r(sqlsrv_errors());
        }

    The errors I get say that SQL Server complains about incorrect syntax. I get the same errors when I run the SQL script without any line breaks, which suggests file_get_contents removes them? When I open the file in SQL Server Management Studio and execute it, it works perfectly. So is there something I can use that maintains line breaks, or is there another problem here in using queries from a file with the SQL Server PHP driver from Microsoft? Thanks all for any help.
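    For what it's worth, file_get_contents does preserve line breaks. A likelier culprit, assuming the script was authored for Management Studio, is the GO separator: GO is a client-side batch delimiter understood by SSMS/sqlcmd, not T-SQL, so the driver reports a syntax error when it reaches one. A hedged sketch that splits the script into batches and runs them one at a time:

        <?php
        // Hedged sketch: split on lines containing only GO, run each batch.
        // Assumes $conn is an open sqlsrv connection, as in the question.
        $tsql = file_get_contents('scripts/CreateTables/SLR05_MATCH_CREATETABLES.sql');
        $batches = preg_split('/^\s*GO\s*$/mi', $tsql);

        foreach ($batches as $batch) {
            if (trim($batch) === '') {
                continue;
            }
            if (sqlsrv_query($conn, $batch) === false) {
                print_r(sqlsrv_errors());
            }
        }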
