Search Results

Search found 82718 results on 3309 pages for 'large file download'.


  • What naming pattern can I use for sequential file naming (photos) when only their relative sequence is known?

    - by Juhele
    I have some old scanned photos and I want to put them in the correct order. Unfortunately, I have no way to find out the exact order, only a relative one: "hm, this photo was surely taken after this one", organizing them step by step by manually changing the numbering again and again. Is there any program (ideally free or open source) where I could interactively put the photos in the correct order straight away (perhaps by dragging with the mouse) and finally apply a batch rename that preserves that order? Thank you in advance. PS: I'm running Windows (XP and 7), but if you know something for Linux, let me know too, please.
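
    If no such tool turns up, the final renaming step is easy to script. Below is a minimal Python sketch; the order.txt file is hypothetical and assumed to hold the existing file names, one per line, in the sequence you settled on. Zero-padding the prefix keeps lexicographic order equal to the intended sequence:

        import os

        # order.txt: one existing file name per line, in the desired sequence
        with open("order.txt") as f:
            ordered = [line.strip() for line in f if line.strip()]

        width = len(str(len(ordered)))  # pad so 001 < 002 < ... < 100
        for i, name in enumerate(ordered, start=1):
            base, ext = os.path.splitext(name)
            os.rename(name, "%0*d_%s%s" % (width, i, base, ext))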

    Read the article

  • Why is file sharing over internet still working, despite all firewall exceptions for filesharing being disabled?

    - by Triynko
    Every exception in my Windows Server firewall that starts with "File and Printer Sharing" is disabled (ordered by name, so that includes the domain, public (active), and private profiles). The Network and Sharing Center's options for everything except password-protected sharing are off. Why would I still be able to access a network share on that server via an address like "\\my.server.com\" over the internet? The firewall is on for all profiles and blocks incoming connections by default. A "netstat -an" command on the server reveals the share connection is occurring over port 445 (SMB). I restarted the client to ensure it was actually re-establishing a new connection successfully. Is the "Password protected sharing: On" option in Network and Sharing Center bypassing the firewall restrictions, or adding some other exception somewhere that I'm missing? EDIT: "Custom" rules are not the problem. It's the "built-in" rules for Terminal Services that were the problem. Can you believe port 445 (the file sharing port) has to be wide open to the internet to use Terminal Services Licensing?
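
    For anyone chasing something similar: one way to see which connections the firewall is admitting is to turn on logging for allowed connections (a sketch using standard netsh advfirewall syntax; the log path is the Windows default):

        netsh advfirewall set allprofiles logging allowedconnections enable
        rem then inspect %systemroot%\system32\LogFiles\Firewall\pfirewall.log
        rem for ALLOW lines with destination port 445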

    Read the article

  • Create text file named after a cell containing other cell data

    - by user143041
    I tried using the code below in Excel on my Mac Mini running OS X 10.7.2, and it keeps reporting an error due to the file name/path. (The Excel file I am creating is going to be a template with my formulas and macros installed, which will be used over and over.)

        Sub CreateFile()
            Do While Not IsEmpty(ActiveCell.Offset(0, 1))
                MyFile = ActiveCell.Value & ".txt"
                fnum = FreeFile()
                Open MyFile For Output As fnum
                Print #fnum, ActiveCell.Offset(0, 1) & " " & ActiveCell.Offset(0, 2)
                Close #fnum
                ActiveCell.Offset(1, 0).Select
            Loop
        End Sub

    What I'm trying to do:

    1st objective: I would like the following data to be used to create text files. Column A is what I need the name of each file to be, and column B is the content. So A2 ("repair-video-game-Glassboro-NJ-08028.txt") is the file name and B2 is the content of that file; next, A3 is the file name and B3 is the content, and so on. Once the content reaches what is in cells A16 and B16 (the length will vary), the file creation should stop; if it doesn't, I can delete the extra files created. This sheet will never change. Is there a way to make the macro always start from this sheet instead of having to select the starting point with the mouse?

    2nd objective: I would like the following data to be used to create a text file. A1 is what I need the name of the file to be ("geo-sitemap.xml"), and column B is the content of the file (ignore the .xml file extension in the photo). Once the content cell reads what is in cell B16 (the length will vary), the file creation should stop; if not, I can adjust the cells that need content (the formulated content you see in the image is preset for 500 rows). This sheet will never change. Again, is there a way to make the macro always start from this sheet? The cells that are filled in by Excel formulas are not to be included in the .txt files. If that is not possible, it's OK; I can delete the extra cells that are not populated (based on the data sheet). Please let me know if you need any additional information or clarity and I will be happy to provide it.

    Read the article

  • /etc/hosts file for a multi-homed, multi-domain machine?

    - by threecheeseopera
    I have a server (Debian) with two network interfaces that I would like to host multiple services and domains on; it is not entirely clear to me how the hosts file should be set up. Example:

        eth0, bound to WAN interface 1.2.3.4: mail.example.com, www.example.com
        eth0:1, bound to WAN interface 1.2.3.5: www.other-domain.com
        eth1, bound to LAN 192.168.1.123: some-clever-hostname

    What should my hosts file look like (including localhost, localhost.localdomain, etc.)? Should I use DNS for some of these entries? Which ones? Thanks!
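
    For what it's worth, a minimal sketch of such an /etc/hosts, assuming the interface/hostname mapping above (names and IPs as given in the question):

        127.0.0.1       localhost localhost.localdomain
        1.2.3.4         mail.example.com www.example.com
        1.2.3.5         www.other-domain.com
        192.168.1.123   some-clever-hostname

    Public names like www.example.com are resolved through DNS by everyone else anyway; the hosts entries mostly matter for how the machine resolves its own names.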

    Read the article

  • How do you limit the bandwidth for a file copy?

    - by wizard
    I've got an old Windows 2000 box in a remote location with a T1 connection and a VPN to my location. I normally use SMB mounts to transfer files, but now it's time to decommission the server and copy its backups to my location. I have about 40 gigabytes (compressed) to copy. I'm prepared for it to take a long time, but I have a few caveats:

    - I need to limit the bandwidth so Terminal Services connections to the site are not affected
    - I want to be able to resume a partial transfer
    - There are a few small files and several large files (10-20 gigabytes)

    I'm familiar with rsync on *nix platforms, but I have had bad luck with it on Windows and I don't know that it will really keep partially transferred files. What do you use?
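
    For reference, a hedged sketch of the rsync route (assuming a Windows rsync port such as cwRsync on the 2000 box; the paths and host below are made up): --bwlimit caps throughput in KB/s, and --partial keeps half-transferred files so a dropped run can resume:

        rsync -av --partial --bwlimit=100 /cygdrive/d/backups/ user@myserver:/srv/backups/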

    Read the article

  • pscp: how to copy a file from a windows machine to a non-home location on another windows machine?

    - by help
    I want to copy a file from C:\temp on MachineA to C:\final on MachineB. I tried to use the following command, but it gave me an error (permission denied):

        C:\PROGRA~1\putty\pscp.exe -i C:\PROGRA~1\cwRsync\home\rcadmin\.ssh\id_rsa_private.ppk [email protected]:C:\final\test.txt C:\temp\test.txt

    It turned out I can only access C:\users\direcpc on my source computer. So if I put the file in C:\users\direcpc\test.txt, then it works:

        C:\PROGRA~1\putty\pscp.exe -i C:\PROGRA~1\cwRsync\home\rcadmin\.ssh\id_rsa_private.ppk [email protected]:/test.txt C:\temp\test.txt

    But I want to access any location on the source computer instead of just my user home directory. Is there a way to do this?
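
    If the SSH server on the remote side is a Cygwin-based OpenSSH (the cwRsync paths suggest it is), locations outside the home directory can usually be reached via the /cygdrive prefix. A sketch, reusing the question's key and host:

        C:\PROGRA~1\putty\pscp.exe -i C:\PROGRA~1\cwRsync\home\rcadmin\.ssh\id_rsa_private.ppk [email protected]:/cygdrive/c/final/test.txt C:\temp\test.txt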

    Read the article

  • How can I open a file as read-only from Windows Explorer?

    - by Daniel Daranas
    Is there an easy way to open a file as read-only from Windows Explorer? My immediate interest is in a Microsoft Access file. I am doing some sanity checks in old MS Access databases and I see that their date is automatically updated when I open them. I don't like this, since it will look like all the old files have been modified today. I am working with Windows XP. Update: As Yoda said, "Do, or do not. There is no try." In my case, it was "do not". I ended up copying the entire (big) folder tree to My Documents, and then opening all the databases from there.
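
    One possible workaround (a sketch; the path is hypothetical, and this assumes Access honors the read-only file attribute, which it normally does) is to flag the file read-only before opening it:

        attrib +R "C:\data\old\customers.mdb"
        rem ... open and inspect the database ...
        attrib -R "C:\data\old\customers.mdb"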

    Read the article

  • How to remove file permissions from an external hard drive?

    - by user2540416
    My MacBook recently died and I am currently trying to figure out how to copy my data. I took out the hard drive, put it in an enclosure, and plugged it into my other laptop, which runs Linux. The problem is, I cannot copy files from the drive due to file permissions. I tried to access the hard drive as root, but I still cannot copy files. How do I remove the file permissions from the hard drive?
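
    A sketch of one approach, assuming the Mac volume is HFS+ and shows up as /dev/sdb2 (check dmesg or sudo fdisk -l for the real device name): the Linux hfsplus driver accepts uid/gid mount options, which maps everything on the volume to your own user and sidesteps the foreign ownership:

        sudo mkdir -p /mnt/machd
        sudo mount -t hfsplus -o ro,uid=$(id -u),gid=$(id -g) /dev/sdb2 /mnt/machd
        cp -R /mnt/machd/Users/<name>/Documents ~/recovered/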

    Read the article

  • SQL SERVER – FIX: ERROR Msg 5169, Level 16: FILEGROWTH cannot be greater than MAXSIZE for file

    - by pinaldave
    I am writing this blog post right after resolving this error on one of our systems. Recently a friend of mine, an expert in infrastructure as well as private cloud, was working on a SQL Server installation. Please note he is seriously good at what he does, but he had never worked with SQL Server before and had absolutely no experience with its installation. He was modifying a database file and kept getting the following error. As soon as he saw me, he asked me where the max file size setting is so he could change it. Let us quickly re-create the scenario he was facing.

    Error Message:

        Msg 5169, Level 16, State 1, Line 1
        FILEGROWTH cannot be greater than MAXSIZE for file 'NewDB'.

    Creating the scenario:

        CREATE DATABASE [NewDB] ON PRIMARY
        (NAME = N'NewDB', FILENAME = N'D:\NewDB.mdf',
         SIZE = 4096KB, FILEGROWTH = 1024KB, MAXSIZE = 4096KB)
        LOG ON
        (NAME = N'NewDB_log', FILENAME = N'D:\NewDB_log.ldf',
         SIZE = 1024KB, FILEGROWTH = 10%)
        GO

    Now let us see the exact command that was producing the error for him:

        USE [master]
        GO
        ALTER DATABASE [NewDB]
        MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB)
        GO

    Workaround / Fix / Solution: The reason for the error is very simple: he was trying to set the filegrowth to a much higher value than the maximum file size specified for the database. There are two ways we can fix it.

    Method 1: Reduce the filegrowth to a value lower than the file's maxsize:

        USE [master]
        GO
        ALTER DATABASE [NewDB]
        MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024KB)
        GO

    Method 2: Increase the file's maxsize so it is greater than the new filegrowth:

        USE [master]
        GO
        ALTER DATABASE [NewDB]
        MODIFY FILE (NAME = N'NewDB', FILEGROWTH = 1024MB, MAXSIZE = 4096MB)
        GO

    I think this blog post will help everybody who is facing similar issues. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T-SQL, Technology

    Read the article

  • Changing the BizTalk message output file name

    - by Bill Osuch
    By default, BizTalk creates the filename of a message dropped to a send port as %MessageID%, which is the unique identifier (GUID) of the message. What if you want to create your own filename? To start, create a simple schema, and a basic orchestration that will receive the message and send it right back out, like this: If you deploy this and wire up the ports, you can drop an XML file into your receive port and have it come out at your send port named something like {7A63CAF8-317B-49D5-871F-9FD57910C3A0}.xml. Now we'll create a new message with a custom filename. First, create a new orchestration variable called NewFileName, of type System.String. Next, create a second message using the same schema as the message you're receiving in the Receive shape. Now drag a Construct Message shape onto the orchestration. In the shape's properties, set Messages Constructed to the new message you just created. Double-click the Message Assignment shape (inside the Construct shape) and paste in the following code:

        Message_2 = Message_1;
        NewFileName = Message_1(FILE.ReceivedFileName);
        NewFileName = NewFileName.Replace(".xml","_");
        NewFileName = NewFileName + "output_" + System.DateTime.Now.Year.ToString() + "-" + System.DateTime.Now.Month.ToString();
        Message_2(FILE.ReceivedFileName) = NewFileName;

    Here we make a copy of the received message, get its original file name (ReceivedFileName), replace its extension with an underscore, and date-stamp it. Finally, add a Send shape and a Port to the surface, and configure them to send the message you just created. You should wind up with an orchestration like this: Deploy it, and create a new send port. It should be just about identical to the first send port, except this time the file name will be "%SourceFileName%.xml" (without the quotes, of course). Fire up the application, drop in a test file, and you should now get both the XML file named with a GUID and a second file named something along the lines of "MySchemaTestFile_output_2011-6.xml".

    Read the article

  • FileOpenPicker/FileSavePicker doesn't allow *.* wildcard file associations

    - by mbrit
    On Twitter, Matthias Jauernig commented that the FileOpenPicker and FileSavePicker don't allow *.* wildcard file associations. I was relaxed about this and wrote back that it was related to sandboxing, implying it was a "good thing"; however, as Matthias commented back, perhaps it's not. In Metro-style apps the sandboxing works like this: if something gives you a file (e.g. the picker, or a share operation), you can access it regardless of where it lives on the system. If you find the file yourself, you have to declare the type. The reason I think it's related to sandboxing is that if you work with files programmatically, you have to be explicit about the file types. This is to stop malware that you think is only interested in, say, .PDF files from scanning and uploading any .EML files it can find on the machine. It follows that the pickers would carry the same restriction. It allows the retail store team to validate that an app is likely to behave itself: if an app works with images, locking down the picker so that it can only access image file types makes sense. However, Matthias mentioned that he has an app that should allow files of any arbitrary type. That fits more into the "if the user selects it, it must be OK" camp than the "programmatic scanning" camp. So now I'm left wondering why the picker doesn't allow any type to be selected. I think maybe the decision comes down to simplicity. A lot of the decisions in Metro-style design relate to ideas of "zero intimidation". Allowing the user to select any file is too much like Old Windows, and not enough like Reimagined Windows. What happens in Matthias's app if the user selects Explorer.exe as the file he or she wants to work with? I guess that's fine if you expect your users to know what they're doing (Old Windows), but not so fine if you're expecting a three-year-old to work with it (Reimagined Windows).

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard. I have then been mapping these flat file schemas to a "normal" XML schema format. I had not previously had any cause to map flat files, and I ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source, it threw an XSL transform error:

        Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1.

    Due to the complexity of the map in question, I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem. Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time, I figured out what the problem was: looking at the map properties, I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "Native".

    Read the article

  • video file download issue

    - by George2
    Hello everyone, I am using VSTS 2008 + C# + .Net 3.5 + Silverlight 3.0 + ASP.Net to develop a Silverlight application (a video media player) in the browser, and the function is simple: just use MediaElement to play a remote video file. The remote server is Windows Server 2008 + IIS 7.0 + IIS Media Bit Rate Throttling Control. Since the requested media URL can be discovered (e.g. by a traffic sniffer), I want to know how to prevent it from being downloaded directly from the URL. That is, I want end users to play the file through my Silverlight media player application in the browser, and to prevent them from downloading it directly to their local machine. Any easy and quick solution, or reference code/documents? Thanks in advance, George

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) is reporting the error: "Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found". What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files. How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using "mq" extensions, but that looks overly complicated for what I'm trying to achieve.

    Background: I can push and pull the following to and from the server, using both MacHG and TortoiseHG: source files, directories, .class files, and a .jar file. I successfully committed to my local repository the addition, for the first time, of the 6 large .exe, .dmg, etc. installer files (about 130 MB total). In the following commit to my local repository, I removed ("untracked" / forgot) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server - in keeping with the "keep everything in history" philosophy of the source control system). I can commit .txt, .java files, etc. using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help!

    Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4) and TortoiseHG v1.0.4 (SCM 1.5.4). Server = HTTPS, IIS 7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/ In IIS 7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST and HEAD).

    My hgweb.cgi file on the server is as follows:

        #!/usr/bin/env python
        #
        # An example hgweb CGI script, edit as necessary
        # Path to repo or hgweb config to serve (see 'hg help hgweb')
        #config = "/path/to/repo/or/config"
        # Uncomment and adjust if Mercurial is not installed system-wide:
        #import sys; sys.path.insert(0, "/path/to/python/lib")
        # Uncomment to send python tracebacks to the browser if an error occurs:
        #import cgitb; cgitb.enable()
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb import hgweb, wsgicgi
        application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
        wsgicgi.launch(application)

    My hgweb.config file on the server is as follows:

        [collections]
        C:\Mercurial Repositories = C:\Mercurial Repositories

        [web]
        baseurl = /hg
        allow_push = usernamea
        allow_push = usernameb
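
    A hedged aside for anyone hitting this: a 404 on large pushes through IIS is commonly caused by request filtering's default content-length cap (30000000 bytes, about 30 MB), which is well under the ~130 MB changeset described above. A sketch of the relevant web.config fragment for the hg application; the 200 MB figure is just an example:

        <configuration>
          <system.webServer>
            <security>
              <requestFiltering>
                <!-- default maxAllowedContentLength is 30000000 bytes (~30 MB) -->
                <requestLimits maxAllowedContentLength="209715200" />
              </requestFiltering>
            </security>
          </system.webServer>
        </configuration>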

    Read the article

  • strange multiple files download - NSURLConnection

    - by Georg
    Hi all, I've run into a problem while following your comment. I would like to download different files at the same time with different delegates.

    .h:

        NSMutableData *fileData;

    .m:

        NSString *imgfile = [NSString stringWithFormat:@"http://xxxx/01.jpg"];
        NSURL *fileURL1 = [NSURL URLWithString:imgfile];
        NSString *audiofile = [NSString stringWithFormat:@"http://xxxx/01.mp3"];
        NSURL *fileURL2 = [NSURL URLWithString:audiofile];
        NSURLRequest *request1 = [NSURLRequest requestWithURL:fileURL1 cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:10.0];
        NSURLRequest *request2 = [NSURLRequest requestWithURL:fileURL2 cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:10.0];
        NSArray *connections = [[NSArray alloc] initWithObjects:
            [[NSURLConnection alloc] initWithRequest:request1 delegate:self],
            [[NSURLConnection alloc] initWithRequest:request2 delegate:self],
            nil];

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
            fileData = [[NSMutableData alloc] init];
        }

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            [fileData appendData:data];
        }

        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
            NSLog(@"Unable to fetch data");
        }

    OK, the download process works, but the file sizes of the jpg and mp3 are incorrect; the only correct thing is the total file size (jpg + mp3). Please could you have a look at the code - what is missing? Another question: I put the files in an NSMutableArray; how do I check which index of the array holds which file type (jpg or mp3)? I need to save them to the device folder.

    Read the article

  • Download file using webclient

    - by user79127
    I am trying to download a file from an https site, and every time the file is saved to my machine it is only 1 KB; it is supposed to be 1 MB. I am using WebClient:

        string strFile = @"c:\myfile.txt";
        WebClient wc = new WebClient();
        wc.Credentials = new System.Net.NetworkCredential("userid", "pw");
        wc.DownloadFile("https://www.mysite.come/myfile.txt", strFile);

    Am I missing anything?

    Read the article

  • Interpolating Large Datasets On the Fly

    - by Karl
    I have a large data set of about 0.5 million records representing the exchange rate between USD and GBP over the course of a given day. I have an application that needs to be able to graph this data, or maybe a subset of it. For obvious reasons I do not want to plot 0.5 million points on my graph. What I need is a smaller data set (100 points or so) which represents the given data as accurately as possible. Does anyone know of any interesting and performant ways this can be achieved? Cheers, Karl
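
    For what it's worth, a minimal sketch of one common approach: split the series into ~100 buckets and keep each bucket's minimum and maximum, so spikes survive the reduction (plain Python; the names and the stand-in data are illustrative):

        def downsample_minmax(values, buckets=100):
            """Reduce a long series to ~2*buckets points, keeping each bucket's extremes."""
            n = len(values)
            if n <= 2 * buckets:
                return list(values)
            out = []
            size = n / buckets  # float bucket width so every point falls in a bucket
            for b in range(buckets):
                lo, hi = int(b * size), int((b + 1) * size)
                chunk = values[lo:hi]
                if not chunk:
                    continue
                # keep the min and max in their original order within the bucket
                i_min = min(range(len(chunk)), key=chunk.__getitem__)
                i_max = max(range(len(chunk)), key=chunk.__getitem__)
                out.extend(chunk[i] for i in sorted({i_min, i_max}))
            return out

        rates = [1.5 + 0.0001 * (i % 37) for i in range(500000)]  # stand-in for real ticks
        print(len(downsample_minmax(rates)))  # roughly 200 points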

    Read the article

  • Why would HTTP transfer via wget be faster than lftp/pget?

    - by jondahl
    I'm building software that needs to do massive amounts of file transfer via both HTTP and FTP. Often I get a faster HTTP download with a multi-connection download accelerator like axel, or lftp with pget. In some cases, I've seen 2x-3x faster file transfer using something like:

        axel http://example.com/somefile

    or

        lftp -e 'pget -n 5 http://example.com/somefile;quit'

    vs. just using wget:

        wget http://example.com/somefile

    But other times, wget is significantly faster than lftp. Strangely, this is true even when I run lftp's pget with a single connection, like so:

        lftp -e 'pget -n 1 http://example.com/somefile;quit'

    I understand that downloading a file via multiple connections won't always result in a speedup, depending on how bandwidth is constrained. But why would it be slower? Especially when calling lftp/pget with -n 1?

    Read the article

  • Large XML files in dataset (outofmemory)

    - by dklein
    Hi folks, I am currently trying to load a fairly large XML file into a DataSet. The XML file is about 700 MB, and every time I try to read it, it takes a long time and after a while throws an "out of memory" exception:

        DataSet ds = new DataSet();
        ds.ReadXml(pathtofile);

    The main problem is that it is necessary for me to use these DataSets (I use them to import the data from the XML file into a Sybase database - foreach table, foreach row, foreach column) and that I have no schema file. I have already googled a while, but I only found solutions that won't be usable for me.

    Read the article

  • ajax upload cannot process JSON response or gives download popup

    - by Jorre
    I'm using the AJAX Upload plugin from Andris Valums (http://valums.com/ajax-upload/, copyright (c) Andris Valums). It works great, except for the fact that I cannot send a proper JSON response. I'm setting the headers to 'Content-Type', 'application/json' before sending the JSON-encoded response, and in the plugin I'm saying that I'm expecting JSON:

        responseType: "json",

    This gives me a download popup asking to download the JSON response as a file. The strange thing is, when I don't set the correct Content-Type on my response, it works. Of course I want to send the correct response type, because all my jQuery 1.4 calls depend on correct JSON. Has anyone else had this same problem, or is there anyone out there willing to try this out? I'd love to use this plugin, but only if I can return proper JSON with the correct content type. Thanks for all your help!

    Read the article

  • Optimizing performance of large ASP.NET applications

    - by NLV
    Hello, I'm building an ASP.NET web application with lots and lots of controls and huge volumes of data. My application is very slow, and it takes a long time to load the data into .NET controls like the grid, tree view, etc. I also have some AJAX-ified pages and controls in my application. I want to reduce the page load time on each postback. What are the standards/best practices to be followed while developing large ASP.NET applications? Thank you. NLV

    Read the article

  • How to obtain position in file (byte-position) from java scanner?

    - by september2010
    How do I obtain a position in a file (byte position) from the Java Scanner?

        Scanner scanner = new Scanner(new File("file"));
        scanner.useDelimiter("abc");
        scanner.hasNext();
        String result = scanner.next();

    Now: how do I get the position of result in the file (in bytes)? Using scanner.match().start() is not the answer, because it gives the position within the internal buffer.

    Read the article
