Search Results

Search found 40873 results on 1635 pages for 'split files'.


  • How are file permissions stored in an inode?

    - by Debadyuti Maiti
    Suppose there are two PCs, "A" and "B". If A downloads a file from B, what will the permissions of the downloaded file be? Is it possible that the downloaded file on A will have an inode entry with all of its permissions from B, and B's user account stored as the owner? If that's the case, is it then impossible to change the file's permissions on A when "others" [as in user-group-others] doesn't have the right to write to the file? E.g. if the file is

        __x __x __x file.txt [on B]

    then what would the permissions of that same file be after downloading it to A [e.g. through vsftpd]?

        __x __x __x file.txt [on A]

    or

        rw_x rw_x rw_x file.txt [on A, i.e. defined by A's default umask value]
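
    In practice a download creates a brand-new file on the receiving machine, so its inode (owner, mode) is assigned locally rather than copied from the sender; the mode comes from the creating process's requested permissions masked by the local umask. A minimal Python sketch of that idea, with a made-up filename:

        import os, stat

        os.umask(0o022)  # a typical default umask on the downloading machine
        with open("downloaded.txt", "w") as f:  # stands in for the FTP client creating the file
            f.write("payload fetched from B\n")

        mode = stat.S_IMODE(os.stat("downloaded.txt").st_mode)
        print(oct(mode))  # 0o644: the requested 0o666 masked by 0o022; B's permission bits play no part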


  • How to add a daemon to a quickly project

    - by darkrex1986
    Currently I'm developing an application with quickly which is divided into two parts: a graphical UI where the user can configure some things, and a daemon which does most of the work in the background. I started with the UI, creating some windows for settings and so on. Now I want to start coding the daemon, but I have no clue how to implement it in my quickly project. Can I simply paste the daemon's files into the project folder, or is there a method for adding new files to a quickly project? Or do I have to create a new project and merge the two together?


  • Split WPF Style XAML Files

    - by anon
    Most WPF styles I have seen are kept in one very long Theme.xaml file. I want to split mine up for readability, so my Theme.xaml looks like this:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="/PresentationFramework.Aero;v3.0.0.0;31bf3856ad364e35;component/themes/aero.normalcolor.xaml"/>
            <ResourceDictionary Source="Controls/Brushes.xaml"/>
            <ResourceDictionary Source="Controls/Buttons.xaml"/>
            ...
          </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>

    The problem is that this solution does not work. I have a default button style which is BasedOn the default Aero style for a button:

        <Style x:Key="{x:Type Button}" TargetType="{x:Type Button}" BasedOn="{StaticResource {x:Type Button}}">
          <Setter Property="FontSize" Value="14"/>
          ...
        </Style>

    If I place all of this in one file it works, but as soon as I split it up I get StackOverflowExceptions because the style appears to be BasedOn itself. Is there a way around this? How does WPF resolve resources when merging resource dictionaries?


  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this through the System.Xml libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to update the files at all, just read them and query the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationships - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block.

    One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternatively, I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
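
    The question is about .NET, but the stream-and-discard idea behind the first suggestion is easy to sketch; here is a minimal Python version using xml.etree.ElementTree.iterparse, with a made-up tag name and attribute:

        import xml.etree.ElementTree as ET

        # Stream the document, keeping only one <record> element in memory
        # at a time; "record" and its "value" attribute are hypothetical.
        total = 0
        for event, elem in ET.iterparse("huge.xml", events=("end",)):
            if elem.tag == "record":
                total += int(elem.get("value", 0))
                elem.clear()  # release the subtree we just processed
        print(total)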


  • Problem splitting text with PHP

    - by moustafa
    Hi Team, I need to split the content below using the mobile number and email id: wherever a mobile number or email id is found, the content is split into blocks. I am using preg_match_all to match the mobile number and email id. For example, given this content:

        WANTED B'ful Non W'kingEdu Fair Girl for MBA 28/175 H'some Fair Mangal Gotra Boy Well estd. B'ness HI Income Status family Email: [email protected] PQM4 h'some Convent Educated B.techIIT, MS/28/5'9" Wkg in US frm repu fmlyseek b'ful,tall g i r l . Mo:09893029129 Em:[email protected] 5' 2" to 5' 5" b'ful QLFD girl for h a n d s o m e 5' 8" BPT (I) MPT G.Medalist (AUS) wkg in Australia Physiotherapist boy from V.respectable fmly of Surat / DeUli. Tel: 09825147614. EmaU: [email protected]

    the desired output should be:

        1st block: WANTED B'ful Non W'kingEdu Fair Girl for MBA 28/175 H'some Fair Mangal Gotra Boy Well estd. B'ness HI Income Status family Email: [email protected]

        2nd block: PQM4 h'some Convent Educated B.techIIT, MS/28/5'9" Wkg in US frm repu fmlyseek b'ful,tall g i r l . Mo:09893029129 Em:[email protected]

        3rd block: 5' 2" to 5' 5" b'ful QLFD girl for h a n d s o m e 5' 8" BPT (I) MPT G.Medalist (AUS) wkg in Australia Physiotherapist boy from V.respectable fmly of Surat / DeUli. Tel: 09825147614. EmaU: [email protected]

    I don't know how to split it; I need some logic. When I use preg_match_all, whenever a mobile number is found I insert a new line and split the blocks, but then the email id gets split off independently.
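
    Not PHP, but the slicing idea is easy to sketch in Python: find each contact token and cut the text just after it. The sketch below assumes every block ends with an email address, which holds for the sample above; the input filename is made up:

        import re

        text = open("ads.txt", encoding="utf-8").read()  # hypothetical input file

        # Cut the text right after each email address; everything since the
        # previous cut (including any phone numbers) stays inside the block.
        email = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
        blocks, start = [], 0
        for m in email.finditer(text):
            blocks.append(text[start:m.end()].strip())
            start = m.end()
        for i, block in enumerate(blocks, 1):
            print(f"{i}: {block[:40]}...")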


  • Best way to split a string by word (SQL Batch separator)

    - by Paul Kohler
    I have a class I use to "split" a string of SQL commands by a batch separator - e.g. "GO" - into a list of SQL commands that are run in turn, etc.

        private static IEnumerable<string> SplitByBatchIndecator(string script, string batchIndicator)
        {
            string pattern = string.Concat("^\\s*", batchIndicator, "\\s*$");
            RegexOptions options = RegexOptions.Compiled | RegexOptions.IgnoreCase | RegexOptions.Multiline;
            foreach (string batch in Regex.Split(script, pattern, options))
            {
                yield return batch.Trim();
            }
        }

    My current implementation uses a Regex with yield, but I am not sure if it's the "best" way. It should be quick, and it should handle large strings (I have some scripts that are 10MB in size, for example). The hardest part (which the above code currently does not handle) is taking quoted text into account. Currently the following SQL will incorrectly get split:

        var batch = QueryBatch.Parse(@"-- issue...
        insert into table (name, desc)
        values('foo', 'if the
        go
        is on a line by itself we have a problem...')");
        Assert.That(batch.Queries.Count, Is.EqualTo(1), "This fails for now...");

    I have thought about a token-based parser that tracks the state of open/closed quotes, but am not sure if Regex will do it. Any ideas!?
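
    For the quoted-text requirement, a token/state approach works; here is a rough line-based sketch, in Python rather than C#, that tracks quote parity so a separator inside a string literal is ignored (T-SQL comments and bracketed identifiers are not handled):

        import re

        def split_batches(script, separator="GO"):
            sep_line = re.compile(rf"^\s*{separator}\s*$", re.IGNORECASE)
            batches, current, in_string = [], [], False
            for line in script.splitlines():
                if not in_string and sep_line.match(line):
                    batches.append("\n".join(current).strip())
                    current = []
                else:
                    current.append(line)
                    # An odd number of quotes toggles string state; doubled
                    # quotes ('') cancel out, matching T-SQL escaping.
                    if line.count("'") % 2:
                        in_string = not in_string
            batches.append("\n".join(current).strip())
            return [b for b in batches if b]

        sql = "insert into t values('a')\nGO\nselect 'not a\nGO\nseparator'"
        print(len(split_batches(sql)))  # 2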


  • Read from one large file and write to many (tens, hundreds, or thousands of) files in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with the message type as key and an OutputStream as value, but I'm sure there's a better way to do it. Thanks!
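
    The question asks about Java, but the handle-cache idea is language-neutral; a minimal Python sketch, where read_message is a hypothetical parser for the 6-byte-key format. One practical caveat: ~6,000 simultaneously open files can exceed the default OS descriptor limit, so it may need raising (or idle handles evicting):

        import io
        from contextlib import ExitStack

        def demultiplex(in_path, read_message):
            # read_message(f) must return (key, payload) or None at EOF;
            # its message format is assumed, not taken from the question.
            handles = {}
            with ExitStack() as stack, open(in_path, "rb") as f:
                while (msg := read_message(f)) is not None:
                    key, payload = msg
                    if key not in handles:
                        # One buffered append handle per message type.
                        handles[key] = stack.enter_context(
                            open(key + ".dat", "ab", buffering=io.DEFAULT_BUFFER_SIZE))
                    handles[key].write(payload)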


  • N-gram split function for string similarity comparison

    - by Michael
    As part of an exercise to better understand F#, which I am currently learning, I wrote a function to split a given string into n-grams. 1) I would like to receive feedback on my function: can this be written more simply or more efficiently? 2) My overall goal is to write a function that returns string similarity (on a 0.0 .. 1.0 scale) based on n-gram similarity; does this approach work well for short string comparisons, or can the method reliably be used to compare larger strings (like articles, for example)? 3) I am aware of the fact that n-gram comparisons ignore the context of the two strings. What method would you suggest to accomplish my goal?

        //s:string - target string to split into n-grams
        //n:int - n-gram size to split string into
        let ngram_split (s:string, n:int) =
            let ngram_count = s.Length - (s.Length % n)
            let ngram_list =
                List.init ngram_count (fun i ->
                    if (i + n >= s.Length) then
                        s.Substring(i, s.Length - i) + String.init ((i + n) - s.Length) (fun i -> "#")
                    else
                        s.Substring(i, n))
            let ngram_array_unique =
                ngram_list |> Seq.ofList |> Seq.distinct |> Array.ofSeq
            //produce tuples of ngrams (ngram string, how many occurrences in the original string)
            Seq.init ngram_array_unique.Length (fun i ->
                (ngram_array_unique.[i],
                 ngram_list |> List.filter (fun item -> item = ngram_array_unique.[i]) |> List.length))
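
    On question 2, one common way to turn n-gram overlap into a 0.0 .. 1.0 score is the Dice coefficient; a small Python sketch of the idea (not a translation of the F# above), padding the tail with '#' as the function above does:

        def ngrams(s, n=2):
            padded = s + "#" * (n - 1)
            return {padded[i:i + n] for i in range(len(s))}

        def similarity(a, b, n=2):
            sa, sb = ngrams(a, n), ngrams(b, n)
            if not sa and not sb:
                return 1.0  # two empty strings count as identical
            return 2 * len(sa & sb) / (len(sa) + len(sb))

        print(similarity("nelson", "neilsen"))  # 6/13, about 0.46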


  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes in two filenames and returns 1 or returns 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forte, it should be quite easy to compare two files and determine whether they are identical or not (code below untested).

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA or die "Can't open $fileA: $!";
            open my $file2, '<', $fileB or die "Can't open $fileB: $!";
            while ( my $lineA = <$file1> ) {
                my $lineB = <$file2>;
                return 0 if !defined $lineB or $lineA ne $lineB;
            }
            return 1;
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line by line until a difference is found. If no difference is found, the files must be identical. But this approach is limited and clumsy. What if the total line counts differ in the two files? Should I open and close to determine the line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.


  • Split a string into a key => value array in PHP

    - by andy-score
        +2-1+18*+7-21+3*-4-5+6x29

    The above string is an example of the kind of string I'm trying to split into either a key => value array or something similar. The numbers represent the ids of classes; -, + and x represent the state of each class (minimised, expanded or hidden); and * represents a column break. I can split this into the columns easily using explode, which gives an array with 3 $key => $value associations, e.g.

        $column_layout = array( [0] => '+2-1+18', [1] => '+7-21+3', [2] => '-4-5+6x29' )

    From there I need to split each column into its classes, keeping the status and id together, e.g.

        $column1 = array( '+' => 2, '-' => 1, '+' => 18 )

    or

        $column1 = array( array( '+', 2 ), array( '-', 1 ), array( '+', 18 ) )

    I can't quite get my head around this and what the best way to do it is, so any help would be much appreciated.
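
    One way to pull the status/id pairs out of each column is a pair-capturing regex; sketched here in Python, though the same pattern should drop into PHP's preg_match_all as /([+\-x])(\d+)/:

        import re

        layout = "+2-1+18*+7-21+3*-4-5+6x29"

        columns = [
            # Each class is one state character (+, - or x) and a numeric id.
            re.findall(r"([+\-x])(\d+)", column)
            for column in layout.split("*")
        ]
        print(columns[0])  # [('+', '2'), ('-', '1'), ('+', '18')]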


  • Migrating Split Access Database from one domain to another (not working, details in Q)

    - by Expo_Rob
    Some background: I'm a programmer, not a network administrator, who has been asked to migrate some accounting software (Integrated Office Accounting version 3.2) from an existing domain (OLD_NETWORK) to a new domain (NEW_NETWORK). Nobody at the office knows how it works under the hood. It is a split Access 2000 database with the back-end shared on a file server (which is also the DC) using mapped drives. The DC is NT Server 4 SP 6. The new server is Server 2003. The two networks are running independently (ie: two computers on each desk). I have been able to get new computers set up on NEW_NETWORK and working with the IOA software just perfectly, but for one problem: the company here uses other, entirely separate databases which access the tables IOA maintains (specifically the 'customers' table) via links. To switch between these systems, you press F11 then File-Open the appropriate database and away you go (this is necessary to maintain the permissions that the IOA system uses to protect the customers table). The entire database is Access 2000, the links go to other Access databases, and SQL Server is not involved in any way, nor is a migration to SQL Server likely. If I can't migrate anything over, everything will stay as it is, and the NEW_NETWORK computers will not be used.

    The problem: when I try to update these separate databases (I shall call one "BANK_ACCOUNT", but the name does not matter), it says "this recordset cannot be updated". It will also sometimes not pull information out of the 'customers' table (ie: date_entered) when looking at a report of everyone who opened a bank account on a certain day (ie: today).

    I have tried:
    - Giving 'everyone' full control via shared directory permissions
    - Giving 'everyone' full control on a file system level
    - Checking the permissions within Access (everyone has full read/write on all tables)
    - Copying the entire server contents from one file server to another (ie: xcopy everything)
    - Copying the entire local client files from one computer to another, putting them in the exact same position in the file system, with the same permissions (or full control to 'everyone')
    - Running as an Administrator
    - Taking one of the NEW_NETWORK computers, having it join OLD_NETWORK and run the software (a direct copy from a working system with identical drive mappings); this did not work
    - Weeping openly

    My question: is there anything else I can try? (sorry for this being so long)


  • How can I copy files with names containing spaces and Unicode when using a shell script?

    - by LOlliffe
    I have a list of files that I'm trying to copy and move (using cp and mv) in a bash shell script. The problem I'm running into is that I can't get either command to recognize a huge number of the files, seemingly because the filenames contain spaces and/or Unicode characters. I couldn't find any switches to decode/re-encode these characters. Instead, for example, if I copy "file name.xml", I get "*.xml" and a script error that the file wasn't found as my result. Does anyone know settings or commands that will deal with these files?
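
    For what it's worth, this symptom often comes from unquoted variables in the loop (cp $f "$dest" instead of cp "$f" "$dest"), since the shell word-splits filenames on spaces. If switching tools is an option, a Python sketch avoids word-splitting altogether because each path is passed as a single argument; the list filename here is hypothetical:

        import shutil
        from pathlib import Path

        dest = Path("backup")
        dest.mkdir(exist_ok=True)

        # One filename per line, spaces and Unicode included.
        for name in Path("files_to_copy.txt").read_text(encoding="utf-8").splitlines():
            if name:
                shutil.copy2(name, dest / Path(name).name)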


  • Setup.exe files downloading without cab files over poor connections

    - by Colin
    We have customers who are trying to download a setup.exe file over mobile connections that appear to be very slow. They have reported that when they click on the downloaded setup.exe, the install wizard starts up, but part way through the wizard they get an error message indicating that a cab file is corrupt or missing. They couriered a problem tablet to us, and we downloaded the file without a problem, but I could replicate the problem by using https to download the file (https is normally used to access the rest of the site, although it is not necessary for the download). When I did this, the downloaded file was 2.8MB; it should be 8MB. I don't think that https is the root cause of the problem, because I can see the download link in the browser history using http, so I know the customer tried to download using http. I think that the issue is that the poor connection is preventing a complete download, but the browser is acting as if it is complete. Is there a way to ensure the file is downloaded fully, or not at all? Why does the browser not indicate that the download is incomplete?
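
    One possible safeguard, independent of the browser: publish the file's size or checksum next to the download link and verify it after download, either manually or from a small bootstrap step. A Python sketch of the verification, with a placeholder digest:

        import hashlib

        def sha256_of(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        expected = "..."  # placeholder: the digest published alongside the download link
        print(sha256_of("setup.exe") == expected)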


  • Large amounts of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks, but I fear this may be a bit large. Some candidate solutions are to:

    1) write the whole thing in C (or Fortran)
    2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions)
    3) write the whole thing in Python

    Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks
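
    Option (3) is easy to prototype because each 30MB file streams independently, so memory stays flat regardless of the total data size; a minimal Python sketch that averages one column per key, with hypothetical column names standing in for the real predicates:

        import csv
        from collections import defaultdict

        def aggregate(paths, key_col="group", value_col="measurement"):
            sums = defaultdict(float)
            counts = defaultdict(int)
            for path in paths:
                with open(path, newline="", encoding="utf-8") as f:
                    for row in csv.DictReader(f, delimiter="\t"):
                        sums[row[key_col]] += float(row[value_col])
                        counts[row[key_col]] += 1
            return {k: sums[k] / counts[k] for k in sums}  # per-key averages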


  • Remuxing m4v files from h264 AVI files

    - by crankharder
    I have a bunch of, I think, x264-encoded AVIs that I'd like to convert to m4v so that I can play them with QuickTime. Here's how I created them. First I dump the vob from the DVD with this:

        $ mplayer -dumpstream -dumpfile new.vob dvd://1

    Then I compress it:

        $ mencoder -oac copy -o new.avi -ovc x264 -x264encopts crf=18 new.vob

    I tried converting them to m4v, but it's blowing up... I tried dumping the h264/aac streams:

        $ mplayer new.avi -dumpvideo -dumpfile new.h264
        $ mplayer new.avi -dumpaudio -dumpfile new.aac

    And remuxing(?) with MP4Box, but I'm getting an error:

        $ MP4Box -add new.h264#video -add new.aac#audio new.m4v
        Cannot find H264 start code
        Error importing new.h264#video: BitStream Not Compliant

    So not sure what to do now...


  • VBScript - Compare and copy files from a folder if newer than the destination files

    - by Kenny Bones
    Hi, I'm trying to design a script that's supposed to be used as part of a logon script for a lot of users. This script is basically supposed to take a source folder and a destination folder and make sure that the destination folder has the exact same content as the source folder, but only copy if the date-modified stamp of the source file is newer than that of the destination file. I have been thinking out this basic pseudocode, just trying to make sure it is valid and solid:

        Dim strSourceFolder, strDestFolder
        strSourceFolder = "C:\Users\User\SourceFolder\"
        strDestFolder = "C:\Users\User\DestFolder\"

        For Each file In strSourceFolder
            ReplaceIfNewer file, strDestFolder
        Next

        Sub ReplaceIfNewer(SourceFile, DestFolder)
            Dim DateModifiedSourceFile, DateModifiedDestFile
            DateModifiedSourceFile = SourceFile.DateModified()
            DateModifiedDestFile = DateModified(DestFolder & "\" & SourceFile.Name)
            If DateModifiedSourceFile > DateModifiedDestFile Then
                Copy SourceFile To DestFolder
            End If
        End Sub

    Would this work? I'm not quite sure how it can be done, but I could probably spend all day figuring it out. The people here are generally so amazingly smart that with your help it would take a lot less time :)


  • How can I read pcap files in a friendly format?

    - by Tony
    A simple cat on the pcap file looks terrible:

        $ cat tcp_dump.pcap
        ?ò????YVJ? JJ ?@@.?E<??@@ ?CA??qe?U?????h? .Ceh?YVJ?? JJ ?@@.?E<??@@ CA??qe?U?????z? .ChV?YVJ$?JJ ?@@.?E<-/@@A?CA??9????F???A&? .Ck??YVJgeJJ@@.??#3E<@3{n??9CA??P???F???<K? ??`.Ck??YVJgeBB ?@@.?E4-0@@AFCA??9????F?P????? .Ck???`?YVJ?""@@.??#3E?L@3?I??9CA??P???F????? ???.Ck?220-rly-da03.mx etc.

    I tried to make it prettier with:

        $ sudo tcpdump -ttttnnr tcp_dump.pcap
        reading from file tcp_dump.pcap, link-type EN10MB (Ethernet)
        2009-07-09 20:57:40.819734 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776168808 0,nop,wscale 5>
        2009-07-09 20:57:43.819905 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776169558 0,nop,wscale 5>
        2009-07-09 20:57:47.248100 IP 67.23.28.65.42385 > 205.188.159.57.25: S 2644526720:2644526720(0) win 5840 <mss 1460,sackOK,timestamp 776170415 0,nop,wscale 5>
        2009-07-09 20:57:47.288103 IP 205.188.159.57.25 > 67.23.28.65.42385: S 1358829769:1358829769(0) ack 2644526721 win 5792 <mss 1460,sackOK,timestamp 4292123488 776170415,nop,wscale 2>
        2009-07-09 20:57:47.288103 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 1 win 183 <nop,nop,timestamp 776170425 4292123488>
        2009-07-09 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: P 1:481(480) ack 1 win 1448 <nop,nop,timestamp 4292123568 776170425>
        2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 481 win 216 <nop,nop,timestamp 776170445 4292123568>
        2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: P 1:18(17) ack 481 win 216 <nop,nop,timestamp 776170445 4292123568>
        2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: . ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445>
        2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: P 481:536(55) ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445>
        2009-07-09 20:57:47.404109 IP 67.23.28.65.42385 > 205.188.159.57.25: P 18:44(26) ack 536 win 216 <nop,nop,timestamp 776170454 4292123606>
        2009-07-09 20:57:47.444112 IP 205.188.159.57.25 > 67.23.28.65.42385: P 536:581(45) ack 44 win 1448 <nop,nop,timestamp 4292123644 776170454>
        2009-07-09 20:57:47.484114 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 581 win 216 <nop,nop,timestamp 776170474 4292123644>
        2009-07-09 20:57:47.616121 IP 67.23.28.65.42385 > 205.188.159.57.25: P 44:50(6) ack 581 win 216 <nop,nop,timestamp 776170507 4292123644>
        2009-07-09 20:57:47.652123 IP 205.188.159.57.25 > 67.23.28.65.42385: P 581:589(8) ack 50 win 1448 <nop,nop,timestamp 4292123855 776170507>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: P 50:56(6) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: F 56:56(0) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.668124 IP 67.23.28.65.49239 > 216.239.113.101.25: S 2642380481:2642380481(0) win 5840 <mss 1460,sackOK,timestamp 776170520 0,nop,wscale 5>
        2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: P 589:618(29) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516>
        2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0
        2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: F 618:618(0) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516>
        2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0

    Well... that is much prettier, but it doesn't show the actual messages. I can actually extract more information just viewing the RAW file. What is the best (and preferably easiest) way to just view all the contents of the pcap file?

    UPDATE: Thanks to the responses below, I made some progress. Here is what it looks like now:

        tcpdump -qns 0 -A -r blah.pcap
        20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: tcp 480
        0x0000:  4500 0214 834c 4000 3306 f649 cdbc 9f39  [email protected]
        0x0010:  4317 1c41 0019 a591 50fe 18ca 9da0 4681  C..A....P.....F.
        0x0020:  8018 05a8 848f 0000 0101 080a ffd4 9bb0  ................
        0x0030:  2e43 6bb9 3232 302d 726c 792d 6461 3033  .Ck.220-rly-da03
        0x0040:  2e6d 782e 616f 6c2e 636f 6d20 4553 4d54  .mx.aol.com.ESMT
        0x0050:  5020 6d61 696c 5f72 656c 6179 5f69 6e2d  P.mail_relay_in-
        0x0060:  6461 3033 2e34 3b20 5468 752c 2030 3920  da03.4;.Thu,.09.
        0x0070:  4a75 6c20 3230 3039 2031 363a 3537 3a34  Jul.2009.16:57:4
        0x0080:  3720 2d30 3430 300d 0a32 3230 2d41 6d65  7.-0400..220-Ame
        0x0090:  7269 6361 204f 6e6c 696e 6520 2841 4f4c  rica.Online.(AOL
        0x00a0:  2920 616e 6420 6974 7320 6166 6669 6c69  ).and.its.affili
        0x00b0:  6174 6564 2063 6f6d 7061 6e69 6573 2064  ated.companies.d
        etc.

    This looks good, but it still makes the actual message on the right difficult to read. Is there a way to view those messages in a more friendly way?

    UPDATE: This made it pretty:

        tcpick -C -yP -r tcp_dump.pcap

    Thanks!
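
    For digging into the payloads programmatically, the third-party scapy library (pip install scapy) is another angle; a short sketch (note rdpcap loads the whole capture into memory, so it suits modest files):

        from scapy.all import rdpcap, Raw

        packets = rdpcap("tcp_dump.pcap")
        for pkt in packets[:5]:       # first few packets only
            pkt.show()                # decoded, field-by-field view
            if pkt.haslayer(Raw):
                print(pkt[Raw].load)  # application payload, e.g. the SMTP banner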


  • Is there a distributed VCS that can manage large files?

    - by joelhardi
    Is there a distributed version control system (git, bazaar, mercurial, darcs etc.) that can handle files larger than available RAM? I need to be able to commit large binary files (i.e. datasets, source video/images, archives), but I don't need to be able to diff them, just be able to commit and then update when the file changes. I last looked at this about a year ago, and none of the obvious candidates allowed this, since they're all designed to diff in memory for speed. That left me with a VCS for managing code and something else ("asset management" software or just rsync and scripts) for large files, which is pretty ugly when the directory structures of the two overlap.


  • Should I split my website across different servers?

    - by Nyxynyx
    I have a website where a user uploads photos; the photos get resized and thumbnailed, and stored on the server. At the same time, there are some INSERTs into a MySQL table regarding the photo uploaded (like description, user id, etc.). The site currently runs off a managed VPS, and I love the support it provides. However, it is expensive to store the many small photos, and the resizing and thumbnailing processes do cause spikes in the app's performance. (Amazon S3 is pretty expensive, especially considering the costs for uploading many small files.)

    Question: will it be a good idea to move the image processing operations and image storage to another server which is an unmanaged dedicated server with a much lower cost/GB, and keep the current VPS for its 24/7 support and hosting the webapp? Or should I move the entire site to the dedicated server?

    VPS specs: 16 cores 2.4GHz (E5620), 1GB memory, 60GB storage, 3.5TB transfer, $43/mth, managed (24/7)

    Dedicated specs: i3 2130, 2 cores 3.4+ GHz, 16 GB DDR3, 2 x 1TB SATA2 storage, 15 TB transfer, $79/mth, unmanaged (weekdays support)

    Software used: Apache, PHP, MySQL, Solr, PostgreSQL, ImageMagick


  • Dreamweaver creating temp PHP files instead of working directly from the local test server files

    - by Dan382
    Trying to get MAMP running with Dreamweaver. When I preview a PHP file inside Dreamweaver's 'Live View' mode, instead of working directly from my file:

        http://localhost/php_test/timetest.php

    it creates its own temp file, which looks like:

        http://127.0.0.1/php_test/TMPWY5ZEM.php

    I know the localhost runs correctly, as the file runs fine if I type the URL directly into a browser. I've set up the Dreamweaver site correctly to the best of my knowledge; the details are below:

        Local site folder: /Applications/MAMP/htdocs/php_test
        Server folder: /Applications/MAMP/htdocs/php_test
        Web URL: http://localhost/php_test/
        Testing server: PHP MySQL

    Any help?


  • How to copy files from one PC to another using C#.NET?

    - by shruti
    I'm using C#.NET. I want to get the files that are on the server PC onto my PC; both PCs are connected through the network. I have given the IP address of that PC in the path, but it's not copying the files to my folder. I'm using the following code, but it's not working; kindly help me out.

        File.Copy(Path.GetFileName(sourceFile), Path.GetDirectoryName(targetpath));

    In sourceFile I have given the IP address + folder path of the server PC, and in targetpath I have given the path of the folder on my PC to which I want to copy the files.


  • Where do I put the links to my JavaScript/jQuery files in my HTML file?

    - by Qlidnaque
    I recently noticed that some (not all) of my JavaScript and jQuery scripts wouldn't work unless I put the link to the .js files near the bottom of the page instead of in the head area, where I put the links to my .css files. From what I understand, JavaScript can go in either place, and putting it in the head is discouraged because it slows down page loading. At the same time, putting it inside the body tag of the HTML file looks somewhat messy, so I was wondering where .js files should go to keep things clean. Should I always put them at the very bottom, right before the closing body tag? What is the best practice professional web developers follow here?

