Search Results

Search found 74171 results on 2967 pages for 'file structure'.


  • How do I see all of the directories shared using Nautilus Sharing Options?

    - by skyblue
    When you share a directory using samba, you can do this system-wide by editing the /etc/samba/smb.conf file (the advanced way) or by right-clicking a directory in Nautilus and selecting 'Sharing Options' (the easy way). However, while I can see which directories are shared system-wide by looking at /etc/samba/smb.conf, if I share directories using Nautilus I do not know which directories I (or other users) have shared. So how do I list all the directories that have been shared via samba using Nautilus Sharing Options, for all users on the system?
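
    A minimal sketch of one way to check, stated as an assumption: on Ubuntu, Nautilus 'Sharing Options' creates samba usershares rather than editing smb.conf, so each share is a small definition file under /var/lib/samba/usershares (they can also be listed with `net usershare info --long`):

        # List samba usershares created via Nautilus "Sharing Options".
        # Assumes the default usershare path /var/lib/samba/usershares;
        # each file there is one definition written by `net usershare`.
        import pathlib

        usershare_dir = pathlib.Path("/var/lib/samba/usershares")
        for share in sorted(usershare_dir.iterdir()):
            print(share.name)
            print(share.read_text())  # path=, comment=, usershare_acl=, guest_ok=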

    Read the article

  • Download a file over an active SSH session

    - by Oli
    So I'm SSHed into my Ubuntu server from my Ubuntu desktop. I'm at a certain path and I want to download a file to my local filesystem (preferably to the path I was at before I entered the SSH session). I could mount the remote filesystem over SSH and pull the file across with the mouse, but what if I were trying to get a root-owned file and logging in as root directly is disallowed? Even if that weren't the case (it isn't now), surely there must be a simple way of pulling a file back over an active SSH connection. Surely!
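
    A hedged sketch of the usual workaround: open a second connection with scp, and stream root-only files through sudo on the remote side. The host and paths are placeholders, and the sudo line assumes passwordless sudo for the remote user:

        # Pull an ordinary file back with scp.
        import subprocess

        subprocess.run(["scp", "user@server:/home/user/data.tar.gz", "."],
                       check=True)

        # For a root-owned file when direct root login is disallowed,
        # stream it through sudo instead and save it locally.
        with open("shadow.copy", "wb") as out:
            subprocess.run(["ssh", "user@server", "sudo cat /etc/shadow"],
                           stdout=out, check=True)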

    Read the article

  • Structuring an input file

    - by Ricardo
    I am in the process of structuring a small program to perform some hydraulic analysis of pipe flow. As I envision it, the program will read an input file, store the input parameters in a suitable way, operate on them and finally output results. I am struggling with how to structure the input file in a sane way; that is, in a way that a human can write easily and a machine can parse easily. A sample input file made available to me for a similar program is just a stream of comma-separated numbers that don't make much sense on their own, so that's the scenario I am trying to avoid. Though I am giving the details of my particular problem, I am more interested in general input-file structuring strategies. Is a stream of comma-separated values my best bet? Would I be better off using some sort of key:value structure? I don't know much about this, so any help will probably put me on a better track than the one I am on now.
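
    One concrete option, as a sketch: a sectioned key:value (INI-style) format is easy for a human to write and trivial to parse with Python's standard configparser. The parameter names below are hypothetical pipe-flow inputs, not taken from the question:

        # Parse a sectioned key:value input file.
        # inline_comment_prefixes lets units sit next to the values.
        import configparser

        config = configparser.ConfigParser(inline_comment_prefixes=(";", "#"))
        config.read_string("""
        [fluid]
        density = 998.2       ; kg/m^3
        viscosity = 1.002e-3  ; Pa.s

        [pipe]
        diameter = 0.05       ; m
        length = 120.0        ; m
        roughness = 4.5e-5    ; m
        """)

        diameter = config.getfloat("pipe", "diameter")
        print(diameter)  # 0.05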

    Read the article

  • Setting up fastcgi on an Ubuntu server (socket file permissions issue)

    - by gray alien
    I am trying to set up mod_fcgid on my server. Part of the requirement is that Apache needs to create a socket file for mod_fcgid. I specified the folder for Apache to write the socket data to: /var/run/apache2/fcgid. I then specified this file in my fcgid.conf file as follows: SocketPath /var/run/apache2/fcgid/sock. I then changed the owner of the folder to www-data (the Apache user) and gave the owner full permissions on the folder and its contents. I was then able to run my test fcgi app. When I rebooted the machine, my fastcgi app no longer worked. After some investigation, I found that ownership of /var/run/apache2/fcgid had been reset to root, with its permissions reset to 700. I have the following questions: Is there something specific about the /var/run folder? Why are the permissions being reset after a reboot? Should I move my socket file to another location (in case root automatically takes ownership of contents in this folder for security reasons)? I am running Ubuntu 10.04 LTS 64-bit.

    Read the article

  • I have a new file type that I would like to handle in GNOME; how do I get file properties?

    - by Mark
    I have a new file type that I would like to handle in GNOME. Establishing a new MIME type, a new thumbnailer and a new application to display the file type is done. But I need a new tab on the file properties page. This tab is analogous to the tabs for EXIF information for jpg files, or for the encoding information for video codecs that says how long the video is. The files concerned are embroidery files, and the file properties to be displayed are things like the physical dimensions of the design, how much thread will be used and how many colours. My belief is that with current GNOME 3 this is not possible; am I right? Or should I take the wider view that in Ubuntu anything is possible, it just may be a bit difficult?

    Read the article

  • Ubuntu 12.04 freezes when trying to open a large Excel file with LibreOffice or Matlab

    - by user1565754
    I have a 27.3 MB xlsx file, and when I try to open it in either LibreOffice or Matlab the whole system slows down. My processor is an AMD Sempron(tm) 140 (about 2.7 GHz) and I have about 1.7 GB of memory. Any ideas? I opened this file in Windows with no problem... of course it took a few seconds to load, but Ubuntu freezes completely with this file... smaller files of 3 MB, 5 MB etc. open just fine... Thanks for the support =)

    Read the article

  • How to execute a "name.desktop" file? [duplicate]

    - by Pubudug
    This question already has an answer here: Running a .desktop file in the terminal (10 answers)

        #!/usr/bin/env xdg-open
        [Desktop Entry]
        Version=1.0
        Type=Link
        Name=ShareFolder
        Icon=/usr/share/icons/DPL/NetworkShare.png
        Name[en_US]=ShareFolder
        URL=smb://servername/sharefolder

    This is my .desktop file, which has a URL. How do I execute this desktop shortcut in the terminal? If I double-click it, it works perfectly, but I need to execute it in a terminal. I tried Running a .desktop file in the terminal; that didn't work for me either, though it does work for an "application" shortcut. Here I'm trying to execute a "link" .desktop file, where the type section says Type=Link and URL=smb://servername/sharefolder.
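
    A minimal sketch of one way to do it, offered as an assumption rather than the canonical answer: .desktop files are INI-style, so pull the URL out of the [Desktop Entry] section and hand it to xdg-open, which resolves the smb:// link the same way a double-click does:

        # Read the URL from a Type=Link .desktop file and open it.
        import configparser
        import subprocess

        desktop = configparser.ConfigParser(interpolation=None)
        desktop.read("ShareFolder.desktop")    # the shebang line parses as a comment
        url = desktop["Desktop Entry"]["URL"]  # smb://servername/sharefolder
        subprocess.run(["xdg-open", url], check=True)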

    Read the article

  • VCS for single user using file sync service

    - by StackUnder
    I'm trying to set up version control for my one-man project. My project files are kept in sync by Live Mesh (though I could just as well be using Dropbox) between my laptop, my home PC and my office PC. I'm now using NetBeans with local file history. Sometimes it helps to revert to a previous state of one file, but imagine a situation where multiple files have problems. Correct me if I'm wrong, but I would have to go to every file and revert it to a previous "safe" state. I don't like this approach, so I'm considering version control, choosing between SVN and Git. I have some previous experience with SVN (TortoiseSVN) and I know that I can create a file:// repo. So what I want to do is set up a VCS inside my synced folder, just to have the ability to "revert" to a previous version if something goes wrong. Since everything is synced to all the computers, I would never need to run an update. The file tree organization would be the following: C:...\SyncedFolder\MyProject\ Inside the MyProject folder are all the project files, plus a directory that holds the SVN or Git info for my project (the repo/master). Which VCS is best for this situation: SVN or Git? Does SVN need to store all files from the HEAD revision, thus "duplicating" my whole project inside my synced folder? Does Git eliminate this problem? Is this the best approach?

    Read the article

  • Database structure and source control - best practice

    - by Paddy
    Background: I came from several years working at a company where all the database objects were stored in source control, one file per object. We had a list of all the objects, maintained as new items were added (to allow us to run scripts in order and handle dependencies), and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists', and all the SPs etc. were drop-and-recreate. Fast-forward to the present, and I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Red Gate's tools for updating our production database (SQL Compare), which is very handy and requires little work. Question: how do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts rather than in the DB), but I'm going to be hard-pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our DEV DB, as I'd rather have 'one version of the truth', but I don't really want to get into re-writing a custom bit of software to bundle the whole lot of scripts together. I think that Visual Studio Database Edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure that this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control

    Read the article

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation... I will have a Windows application and a web application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the DLL (data layer) which it will create a reference to at runtime (is this the best approach?). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different third-party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, and they wouldn't need to switch between the two. Projects:

        MyCompany.Common.dll - Contains interfaces; all other projects have a reference to this one.
        MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
        MyCompany.Web.dll - Website project, references MyCompany.Business.dll
        MyCompany.Business.dll - Business layer, references MyCompany.Data.* (at runtime)
        MyCompany.Data.AccountingSys1.dll - Data layer for accounting system 1
        MyCompany.Data.AccountingSys2.dll - Data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; every other project would have a reference to this one.

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.

    Read the article

  • How to add some complex structure in multiple places in an XML file

    - by Guillaume
    I have an XML file which has many sections like the one below:

        <Operations>
          <Action [some attributes ...]>
            [some complex content ...]
          </Action>
          <Action [some attributes ...]>
            [some complex content ...]
          </Action>
        </Operations>

    I have to add an <Action> to every <Operations>. It seems that XSLT should be a good solution to this problem:

        <xsl:template match="Operations/Action[last()]">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
          </xsl:copy>
          <Action>[some complex content ...]</Action>
        </xsl:template>

        <xsl:template match="@*|node()">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
          </xsl:copy>
        </xsl:template>

    My problem is that the content of my <Action> contains some XPath expressions. For example:

        <Action code="p_histo01">
          <customScript languageCode="gel">
            <gel:script xmlns:core="jelly:core" xmlns:gel="jelly:com.niku.union.gel.GELTagLibrary" xmlns:soap="jelly:com.niku.union.gel.SOAPTagLibrary" xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sql="jelly:sql" xmlns:x="jelly:xml" xmlns:xog="http://www.niku.com/xog" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
              <sql:param value="${gel_stepInstanceId}"/>
            </gel:script>
          </customScript>
        </Action>

    The '${gel_stepInstanceId}' is interpreted by my XSLT, but I would like it to be copied as-is. Is that possible? How?

    Read the article

  • What is the best way to determine the path to the ISV directory?

    - by Luke Baulch
    MSCRM 4.0. Problem: I'm currently storing XML files in the ISV directory along with my web applications. From a plugin (or potentially a separate app), I need an easy way to navigate to the ISV directory to read these XML files. This routine will be called extremely often, so minimizing processing should be a strong consideration. Potential solutions:

    Registry: there is a registry key called 'WebSitePath' with the data 'C:\Inetpub\wwwroot\CRM'. I could potentially use this to build the path. (Will this be the same on all systems/installations?)

    IIS directory data: looping through the DirectoryEntries of path "IIS://localhost/W3SVC", I could obtain the web application whose description is equal to "Microsoft Dynamics CRM". (Will this be the same on all systems/installations?)

    Webservice: create one to read and return the data contained in these XML files. The webservice would have easy access to its own executing directory.

    Database: store the data of these files in the database.

    Help: can anyone suggest a simpler solution for obtaining and reading a file from the ISV directory? If not, which of the above solutions would be the quickest to process? Thanks for any and all contributions.
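
    If the registry route is chosen, a sketch of the lookup (the question is .NET, but the call sequence has the same shape in Python's winreg; the hive path SOFTWARE\Microsoft\MSCRM is an assumption to verify on the target install, while the WebSitePath value name comes from the question itself):

        # Build the ISV path from the MSCRM WebSitePath registry value.
        # NOTE: the key path below is an assumption, not confirmed by the question.
        import os
        import winreg

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            r"SOFTWARE\Microsoft\MSCRM") as key:
            website_path, _ = winreg.QueryValueEx(key, "WebSitePath")

        isv_dir = os.path.join(website_path, "ISV")
        print(isv_dir)  # e.g. C:\Inetpub\wwwroot\CRM\ISV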

    Read the article

  • Implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file according to the number of allowed connections per file and send one HTTP request per part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets have already been downloaded and simply not requesting those. My questions:

    Is this the right way to do this? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)

    When sending the HTTP requests, should I specify the entire split size in the range, or ask for smaller pieces, say 1024 KB per request?

    When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.

    Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-wise? What if I have several downloads running simultaneously?

    If I'm not using a memory-mapped file, should I open one file handle per allowed connection, or simply seek when needing to write? (If I used a memory-mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question so I left code out of it.
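
    A Python sketch of the Range mechanics (the question targets C++/Qt, but the HTTP side is identical): a server that honours the header answers 206 Partial Content, while one that ignores it answers 200 with the whole body, which is exactly the fallback guessed at above. The URL and offsets are placeholders:

        # Request the first 1 MB of a file with an HTTP Range header.
        import urllib.request

        req = urllib.request.Request("http://example.com/file.bin",
                                     headers={"Range": "bytes=0-1048575"})
        with urllib.request.urlopen(req) as resp:
            if resp.status == 206:   # server honoured the range
                chunk = resp.read()
            else:                    # 200: no range support, full body follows
                chunk = resp.read(1048576)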

    Read the article

  • Calculate total batch upload transfer percent with limited information

    - by GONeale
    Hi there, I have a system which uploads files to a server one by one and displays a progress bar for the current file's upload progress, with a second progress bar underneath that I want to indicate the percentage of the whole batch completed across all files queued for upload. The information and algorithms I have worked out: Bytes Sent / Total Bytes To Send = first progress bar (e.g. 512 KB of 1024 KB (50%)). That works fine. However, supposing I have two other files left to upload but both file sizes are unknown (a size is only known once the file is about to commence uploading, at which point it is compressed and its size is determined), how would I go about driving my second, batch-level progress bar? I didn't think this would be possible, as I would need "Total Bytes Sent" / "Total Bytes To Send" to replicate the logic of my first progress bar on a larger scale. However, I did get one version working: "current file number" / "total number of files to send" returns the percentage through the batch, but obviously it does not update incrementally, and it's pretty crude. On further thought, I figured that if I could incorporate the current file's percentage into this algorithm, I could perhaps get the correct progress percentage for the batch's current point. I tried this algorithm, but alas to no avail (sorry to any math heads, it's probably quite apparent why it won't work): ("current file number" / "total number of files to send") * ("Bytes Sent" / "Total Bytes To Send"). For example, I thought I was on the right track when testing with this: 2/3 (2nd of 3 files) = 66% (this is right so far), but then when I multiplied by 0.20 (to indicate that only 20% of the 2nd file has uploaded) it went back to 13%. What I need is only a little over 33%! I did try the inverse, 0.80, and (2/3 * (2/3 * 0.2)). Can this be done without knowing the entire byte count of the batch? Please help! Thank you!
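
    The arithmetic that keeps the batch bar monotonic when per-file sizes are unknown up front is to weight every file equally: files already finished count as whole units, and the active file contributes only its own fraction. A sketch with a worked example:

        # Batch progress with unknown per-file sizes: weight each file equally.
        def batch_progress(current_file, total_files, bytes_sent, total_bytes):
            completed = current_file - 1          # files fully uploaded
            fraction = bytes_sent / total_bytes   # progress of the active file
            return (completed + fraction) / total_files

        # 20% of the way through the 2nd of 3 files:
        print(batch_progress(2, 3, 512, 2560))  # 0.4, i.e. 40%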

    Read the article

  • Force-downloaded files are broken: headers wrong?

    - by Sinan
        if ($_POST['mode'] == "save") {
            $root = $_SERVER['DOCUMENT_ROOT'];
            $path = "/mcboeking/";
            $path = $root . $path;
            $file_path = $_POST['path'];
            $file = $path . $file_path;
            if (!file_exists($file)) {
                die('file not found');
            } else {
                header("Cache-Control: public");
                header("Content-Description: File Transfer");
                header('Content-Type: application/force-download');
                header("Content-Disposition: attachment; filename=\"" . basename($file) . "\";");
                header("Content-Length: " . filesize($file));
                readfile($file);
            }
        }

    As soon as I download a file and open it, I get an error message. When I try to open a .doc I get the message: "file structure is invalid". When I try to open a .jpg: "This file can not be opened. It may be corrupt or a file format that Preview does not recognize." But when I download PDF files, they open without any problem. Can someone help me? P.S. I tried different headers, including: header('Content-Type: application/octet-stream');

    Read the article

  • CakePHP: Interaction between different files/classes

    - by Alexx Hardt
    Hey, I'm cloning a commercial student management system. Students use the frontend to apply for lectures; uni staff can modify events (time, room, etc). The core of the app will be the algorithm which distributes the seats to students. I already asked about it here: How to implement a seat distribution algorithm for uni lectures. Now, I found a class for that algorithm here: http://www.phpclasses.org/browse/file/10779.html. I put the 'class GA' into app/vendors. I need to write a 'class Solution', which represents one object (a child, and later a parent, in the evolutionary process). I'll also have to write the functions mutate(), crossover() and fitness(). fitness() calculates a score for a solution, based on whether there are overbooked courses etc.; crossover() is the recombination function that produces a child from two parents; and mutate() modifies a child after crossover. Now, the fitness() function needs to access a few related models and their find() functions. It evaluates a solution's fitness by checking e.g. whether there are overbooked courses or unfulfilled wishes, and penalizes them. Where would I put ga.php, solution.php and the three functions? ga.php has to access the functions, but the functions have to access the models. I also don't want to call App::import() from within the fitness() function, because it gets called many thousands of times when the algorithm runs. Hope someone can help me. Thanks in advance =)

    Read the article

  • TCP Message Structure with XML

    - by metdos
    Hello everybody, I'm sending messages over TCP/IP, and on the other side I parse the TCP message. For example, this is one of the sent messages:

        $DKMSG(requestType=REQUEST_LOGIN&requestId=123&username=metdos&password=123)$EDKMSG

    Clarification:

        $DKMSG(        // Start
        )$EDKMSG       // End
        requestType    // Parameter
        REQUEST_LOGIN  // Parameter Value

    Now I also want to add an XML file to my message. I'm considering this option:

        $DKMSG(requestType=REQUEST_LOGIN&xmlData=
        <Item id="56D@MIT" type="SIGNAL">
          <Label>
            <Text>56D</Text>
            <X1>10</X1>
            <Y1>40</Y1>
            <RotateAngle>90</RotateAngle>
          </Label>
          <X1>0</X1>
          <Y1>20</Y1>
          <Width>35</Width>
          <Height>10</Height>
          <Source>sgs3lr</Source>
        </Item>
        )$EDKMSG

    There are problems with this approach: (1) it doesn't seem right to me; (2) I have to handle the delimiter "=" much more carefully, or I need to change it in the parameters. What are your suggestions? Thanks.
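
    A hedged suggestion rather than anything defined by the protocol above: percent-encode the XML before it goes into the key=value list, so '=', '&' and ')' inside the payload can never collide with the frame's own delimiters. A Python sketch:

        # Percent-encode the XML payload so it cannot clash with frame delimiters.
        from urllib.parse import quote, unquote

        xml = '<Item id="56D@MIT" type="SIGNAL"><Source>sgs3lr</Source></Item>'
        msg = "$DKMSG(requestType=REQUEST_LOGIN&xmlData=%s)$EDKMSG" % quote(xml, safe="")

        # Receiver side: strip the frame, split the fields, decode the payload.
        body = msg[len("$DKMSG("):-len(")$EDKMSG")]
        fields = dict(pair.split("=", 1) for pair in body.split("&"))
        print(unquote(fields["xmlData"]))  # the original XML, intact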

    Read the article

  • Read/Write Files from the Content Provider

    - by drum
    I want to be able to create a file from the Content Provider; however, I get the following error:

        java.io.FileNotFoundException: /0: open file failed: EROFS (read-only file system)

    What I am trying to do is create a file whenever an application calls the insert method of my provider. This is the excerpt of the code that does the file creation:

        FileWriter fstream = new FileWriter(valueKey);
        BufferedWriter out = new BufferedWriter(fstream);
        out.write(valueContent);
        out.close();

    Originally I wanted to use openFileOutput(), but that function appears to be undefined. Does anyone have a workaround for this problem? EDIT: I found out that I had to specify the directory as well. Here is a more complete snippet of the code:

        File file = new File("/data/data/Project.Package.Structure/files/" + valueKey);
        file.createNewFile();
        FileWriter fstream = new FileWriter(file);
        BufferedWriter out = new BufferedWriter(fstream);
        out.write(valueContent);
        out.close();

    I also enabled the permission

        <uses-permission android:name="android.permission.WRITE_INTERNAL_STORAGE" />

    This time I got an error saying:

        java.io.IOException: open failed: ENOENT (No such file or directory)

    Read the article

  • PHP: parse $_FILES[] data in a multidimensional array

    - by superUntitled
    I have been looking around for an answer to this and have not found one anywhere; I am hoping someone has done this before! I have a form that allows dynamic duplication of its fields. The form allows file uploads and text input, so the data is sent in both the $_POST and $_FILES arrays. The initial set of inputs looks like this:

        <input type="text" name="primary[1][text]" />
        <input type="file" name="primary[1][file]" />
        <input type="text" class="a" name="secondary[1][text][]" />
        <input type="file" name="secondary[1][file][]" />

    When duplicated, the field indexes are incremented, so they look like this:

        <input type="text" name="primary[2][text]" />
        <input type="file" name="primary[2][file]" />
        <input type="text" class="a" name="secondary[2][text][]" />
        <input type="file" name="secondary[2][file][]" />

    To complicate matters, the "secondary" form fields can also be duplicated (thus the [] at the end of the secondary name array). How can I parse the posted $_FILES array? I have tried something like this:

        foreach ($_FILES['question'] as $f_num) {
            echo $f['file']['name'];
        }

    but I get an "Undefined index: file... " error.

    Read the article

  • How to create a playable FLV video from part of an FLV file using FFmpeg?

    - by Ole Jak
    So we had a real FLV video file. We divided it into 3 parts (more or less equal, not looking into structure or context), took the second part, and forgot about the other two. The video contained an audio and a video track: MP3 and On2 VP6. Is it at all possible to play that second part after sending some command to FFmpeg? In other words, how (using any FFmpeg API, in any programming language, or using the command line) can we turn the byte array into playable video, knowing what format the video was created in and some other data, like the codecs used?

    Read the article

  • Parse and transform XML with missing elements into table structure

    - by dnlbrky
    I'm trying to parse an XML file. A simplified version of it looks like this:

        x <- '<grandparent><parent><child1>ABC123</child1><child2>1381956044</child2></parent><parent><child2>1397527137</child2></parent><parent><child3>4675</child3></parent><parent><child1>DEF456</child1><child3>3735</child3></parent><parent><child1/><child3>3735</child3></parent></grandparent>'

        library(XML)
        xmlRoot(xmlTreeParse(x))
        ## <grandparent>
        ##  <parent>
        ##   <child1>ABC123</child1>
        ##   <child2>1381956044</child2>
        ##  </parent>
        ##  <parent>
        ##   <child2>1397527137</child2>
        ##  </parent>
        ##  <parent>
        ##   <child3>4675</child3>
        ##  </parent>
        ##  <parent>
        ##   <child1>DEF456</child1>
        ##   <child3>3735</child3>
        ##  </parent>
        ##  <parent>
        ##   <child1/>
        ##   <child3>3735</child3>
        ##  </parent>
        ## </grandparent>

    I'd like to transform the XML into a data.frame / data.table that looks like this:

        parent <- data.frame(child1=c("ABC123",NA,NA,"DEF456",NA),
                             child2=c(1381956044, 1397527137, rep(NA, 3)),
                             child3=c(rep(NA, 2), 4675, 3735, 3735))
        parent
        ##   child1     child2 child3
        ## 1 ABC123 1381956044     NA
        ## 2   <NA> 1397527137     NA
        ## 3   <NA>         NA   4675
        ## 4 DEF456         NA   3735
        ## 5   <NA>         NA   3735

    If each parent node always contained all of the possible elements ("child1", "child2", "child3", etc.), I could use xmlToList and unlist to flatten it, and then dcast to put it into a table. But the XML often has missing child elements. Here is an attempt with incorrect output:

        library(data.table)
        ## Flatten:
        dt <- as.data.table(unlist(xmlToList(x)), keep.rownames=T)
        setnames(dt, c("column", "value"))
        ## Add row numbers, but they're incorrect due to missing XML elements:
        dt[, row:=.SD[,.I], by=column][]
        ##           column      value row
        ## 1: parent.child1     ABC123   1
        ## 2: parent.child2 1381956044   1
        ## 3: parent.child2 1397527137   2
        ## 4: parent.child3       4675   1
        ## 5: parent.child1     DEF456   2
        ## 6: parent.child3       3735   2
        ## 7: parent.child3       3735   3

        ## Reshape from long to wide, but some values land in the wrong row:
        dcast.data.table(dt, row~column, value.var="value", fill=NA)
        ##    row parent.child1 parent.child2 parent.child3
        ## 1:   1        ABC123    1381956044          4675
        ## 2:   2        DEF456    1397527137          3735
        ## 3:   3            NA            NA          3735

    I won't know ahead of time the names of the child elements, or the count of unique element names for children of the grandparent, so the answer should be flexible.
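
    The question targets R, but the shape of a fix is easy to see in a Python/ElementTree sketch, offered purely for comparison: build one mapping per <parent> keyed by child tag, so missing children come out as absent keys (NA once tabulated) instead of getting per-column row numbers:

        # One dict per <parent>; a missing child is an absent key, not a shifted row.
        import xml.etree.ElementTree as ET

        x = ('<grandparent>'
             '<parent><child1>ABC123</child1><child2>1381956044</child2></parent>'
             '<parent><child2>1397527137</child2></parent>'
             '<parent><child3>4675</child3></parent>'
             '</grandparent>')

        rows = [{child.tag: child.text for child in parent}
                for parent in ET.fromstring(x)]
        print(rows)
        # [{'child1': 'ABC123', 'child2': '1381956044'},
        #  {'child2': '1397527137'}, {'child3': '4675'}]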

    Read the article

  • Javascript - why do I sometimes fail to read file content with GDownloadUrl?

    - by Daj pan spokój
    Hi everybody. I'm trying to read a file with Google's GDownloadUrl, and it only works from time to time. Failure means fileRows == "blah blah"; success means fileRows == (the real file content). I've noticed, however, that when I pause execution (with Firebug) on line 3 for a couple of seconds, it succeeds more often. Maybe it is some kind of threading bug, then? Do you guys have any tip or idea?

        1 var fileContent = "blah blah";
        2 availabilityFile = "input/available/" + date + ".csv";
        3 GDownloadUrl(availabilityFile, function(fileData) {
        4     fileContent = fileData;
        5 });
        6 fileRows = fileContent.split("\n");

    Read the article

  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like Dropbox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project (estimates, contracts, project plans, budgets), and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of a file that two people created, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare (much less merge!) the changes in two rival versions of a file that two people edited at the same time without each other's knowledge. So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but also let users lock files (even if it's not a real lock but just a flag or indicator) to help prevent remote workers from both editing the same file at once? Having one person wait for another to finish editing is a very, very small inconvenience compared to the hour or more that it can take to compare two estimates by hand until you find and resolve the rival changes. Given this fact, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide a solution! Does anyone know of a service that does?

    Read the article
