Search Results

Search found 17634 results on 706 pages for 'django multi db'.


  • Database frontend for multiple db engines

    - by xeroxed_yeti
    Hey Stack Overflow, yeah, it's spring and a lot of things are happening to me... I'm also changing some software on my computer, because suddenly everything seems boring after starting my laptop. I even changed my wallpaper!!! Besides that, I'm looking for a new database frontend, and after several Google queries I didn't find the right software. You have to know, my laptop and I are very, very special :) I'm looking for a database frontend with the following features:

        - can access PostgreSQL and MySQL databases
        - can handle schemata
        - offers a nice SQL query tool
        - supports import and export functionality (something like tab-separated text files)
        - is free
        - looks awesome - every time a colleague comes to my office he must get the feeling: oh boy, this man really knows his job and should get more money!

    At the moment I use phpMyAdmin, phpPgAdmin, pgAdmin III, mysqladmin and DbVisualizer. Furthermore, I was a big fan of Aqua Data Studio until it became commercial. That tool offers a great variety of functionality which can simplify a programmer's life; however, now you have to buy a license... I'm a scientist, and money for software is limited =) It's my first time (question) here at Stack Overflow, so please be cheerful :)

  • multi-line pattern matching in Python

    - by Horace Ho
    A periodic computer-generated message (simplified):

        Hello user123,

        - (604)7080900
        - 152
        - minutes

        Regards

    Using Python, how can I extract "(604)7080900", "152" and "minutes" (i.e. any text following a leading "- " pattern) between the two empty lines (the \n\n after "Hello user123" and the \n\n before "Regards")? Even better if the resulting strings are stored in a list. Thanks!
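
    A minimal sketch of one way to do this with the standard re module (the message literal below is reconstructed from the question):

        import re

        message = ("Hello user123,\n"
                   "\n"
                   "- (604)7080900\n"
                   "- 152\n"
                   "- minutes\n"
                   "\n"
                   "Regards")

        # Take the block between the two blank lines, then pull out
        # every value that follows a leading "- ".
        block = re.search(r"\n\n(.*?)\n\n", message, re.DOTALL)
        values = re.findall(r"^- (.+)$", block.group(1), re.MULTILINE) if block else []
        print(values)  # ['(604)7080900', '152', 'minutes']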

  • Funny characters in my db

    - by hdx
    My web app breaks when I try to edit a certain content type, and I'm pretty sure it is because of some weird characters in my database. So when I do:

        SELECT body FROM message WHERE id = 666

    it returns:

        <p>⢠<span></span></p><p><br /></p><p><em><strong>NOTE:</strong> Please remember to use your to participate in the discussion.</em></p>

    However, when I try to count how many documents have those characters, Postgres complains:

        foo_450_prod=# SELECT COUNT(*) FROM message WHERE body LIKE '%â¢%';
        ERROR:  invalid byte sequence for encoding "UTF8": 0xe2a225
        HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

    Does anybody know what the issue is and how I can query for those funny characters? Thanks in advance!
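
    If the stray character is a mis-encoded bullet (U+2022) - an assumption, since only the bytes 0xe2a225 are known - one way to search for it without fighting the terminal's encoding is to bind the character as a query parameter. A minimal sketch with psycopg2 (the DSN is a placeholder):

        import psycopg2

        conn = psycopg2.connect("dbname=foo_450_prod")  # placeholder DSN
        conn.set_client_encoding("UTF8")
        cur = conn.cursor()

        # The driver sends the pattern as valid UTF-8, so the server never
        # sees the broken byte sequence the interactive session produced.
        cur.execute("SELECT count(*) FROM message WHERE body LIKE %s",
                    ("%\u2022%",))
        print(cur.fetchone()[0])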

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than one that does not, because reorganizing the data in a clustered index involves changing the physical layout of the data on the disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table (Junk) and there are two queries that are done on the table; the first query searches by Name and the second query searches by Name and Something. As I'm working on the database I discovered that the table has been created with two indexes, one to support each query, like so:

        --drop table Junk1
        CREATE TABLE Junk1
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name
            ON Junk1 ( Name )

        CREATE NONCLUSTERED INDEX IX_Name_Something
            ON Junk1 ( Name, Something )

    Now when I looked at the two indexes, it seems that IX_Name is redundant, since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead:

        --drop table Junk2
        CREATE TABLE Junk2
        (
            Name char(5),
            Something char(5),
            WhoCares int
        )

        CREATE CLUSTERED INDEX IX_Name_Something
            ON Junk2 ( Name, Something )

    Someone suggested that the first indexing scheme should be kept, since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates for Name and Something). Would that make sense? I think the second indexing method would be better, since it means one less index needs to be maintained. I would appreciate any insight into this specific example, or directions to more info on maintenance of clustered indexes.

  • Clone DB table row through MVC in SQL Server

    - by sslepian
    Is there a simple solution for duplicating table rows in SQL Server, as well as all table rows with foreign keys pointing to the cloned table row? I've got a "master" table and a bunch of "child" tables which have a foreign key into the ID of the master table. I need to not only create a perfect copy of the master table row, but also clone each and every child table row referencing it. Is there a simpler way to do this than creating a new row in the master table, copying in the information from the row to be cloned, then going through each child table and doing the same with each row pointing to the cloned row in the master table? I'm using a SQL Server 2005 database accessed through C# ASP.NET MVC 1.0.
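
    For what it's worth, a hedged sketch of the "copy, capture the new key, re-point the children" approach driven from Python with pyodbc - the table and column names (Master, Child, MasterID, ...) are placeholders, not the real schema:

        import pyodbc

        conn = pyodbc.connect("DSN=appdb")   # placeholder connection string
        cur = conn.cursor()

        def clone_master(old_id):
            # Copy the master row (every column except the identity key);
            # SET NOCOUNT ON lets SCOPE_IDENTITY() come back as the first
            # result set of the batch.
            cur.execute("""
                SET NOCOUNT ON;
                INSERT INTO Master (Name, CreatedOn)
                    SELECT Name, CreatedOn FROM Master WHERE ID = ?;
                SELECT CAST(SCOPE_IDENTITY() AS int);
            """, old_id)
            new_id = cur.fetchone()[0]

            # Repeat per child table: copy every row that pointed at the
            # old master, re-pointing it at the clone.
            cur.execute("""
                INSERT INTO Child (MasterID, Payload)
                    SELECT ?, Payload FROM Child WHERE MasterID = ?;
            """, new_id, old_id)
            conn.commit()
            return new_id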

  • CSS Multiple multi-column divs

    - by accelerate
    I have a bunch of items (text, images, mixed content, etc.) that I want to display. The user can define which row and which column each item appears in. For example, in row 1 there could be two items/columns, both images. In row 2 there could be three items/columns: one with an image, two others pure text. Oh, and the user may specify the width of any particular column/image/item. I have a solution that uses multiple tables and works - in essence, each row is a new table. This works for the most part. I'm wondering if I can use just divs? Now, my CSS foo is lacking; I tried to copy examples from the web and haven't been able to get it working. Right now I have something like this (a wrapper for each row, nested blocks for each column, content inside):

        [for each row]
            [for each column]
                [content]

    But everything is overlapping everything else. I've also tried using "position: relative", but things look even more borked. So can divs actually be used for multiple rows with different numbers of columns?

  • Java multi-generic collection parameters compile error

    - by Geln Yang
    Hi, so strange! Please have a look at the code first:

        public class A { }

        public class B extends A { }

        public class C extends A { }

        public class TestMain {

            public <T extends A> void test(T a, T b) { }

            public <T extends A> void test(List<T> a, List<T> b) { }

            public static void main(String[] args) {
                new TestMain().test(new B(), new C());
                new TestMain().test(new ArrayList<B>(), new ArrayList<C>());
            }
        }

    The statement new TestMain().test(new ArrayList<B>(), new ArrayList<C>()) gets a "Bound mismatch" compile error, while new TestMain().test(new B(), new C()) compiles fine:

        Bound mismatch: The generic method test(T, T) of type TestMain is not
        applicable for the arguments (ArrayList<B>, ArrayList<C>). The inferred
        type ArrayList<? extends A> is not a valid substitute for the bounded
        parameter <T extends A>

    It seems the type of the second generic List parameter is limited by the type of the first. Is it a feature or a bug of the compiler? ps, JDK: 1.6, IDE: Eclipse 3.5.1

  • VB working with SQL DB - end of row count, keeps looping

    - by Tramd
    I'm adding to a combo box an ID and a name that I'm pulling from a database. My problem is that for some reason my loop doesn't end once it reaches the end of the records in the database table. Here's my code:

        For intcount = 0 To dtOrders.Rows.Count - 1
            cmbSearch.Items.Add(dtOrders.Rows(intcount)("EmployeeID").ToString & " " & dtOrders.Rows(intcount)("EmployeeLastName").ToString & ", " & dtOrders.Rows(intcount)("EmployeeFirstName").ToString)
        Next

    Shouldn't the .Rows.Count - 1 stop it once it reaches the last record? It loops through 4 times.

  • Looking for a fast, compact, streamable, multi-language, strongly typed serialization format

    - by sanity
    I'm currently using JSON (compressed via gzip) in my Java project, in which I need to store a large number of objects (hundreds of millions) on disk. I have one JSON object per line, and disallow line breaks within a JSON object. This way I can stream the data off disk line by line without having to read the entire file at once. It turns out that parsing the JSON (using http://www.json.org/java/) is a bigger overhead than either pulling the raw data off disk or decompressing it (which I do on the fly). Ideally what I'd like is a strongly typed serialization format where I can specify "this object field is a list of strings" (for example) and, because the system knows what to expect, it can deserialize quickly. I can also specify the format to someone else just by giving them its "type". It would also need to be cross-platform: I use Java, but work with people using PHP, Python, and other languages. So, to recap, it should be:

        - Strongly typed
        - Streamable (i.e. read a file bit by bit without having to load it all into RAM at once)
        - Cross-platform (including Java and PHP)
        - Fast
        - Free (as in speech)

    Any pointers?
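
    Most of the formats in this space (Protocol Buffers, Thrift, Avro) get the "streamable" property from length-prefixed records rather than newlines. A minimal sketch of that framing in Python (one of the languages in play above), with the record encoding itself left abstract:

        import struct

        def write_records(path, records):
            with open(path, "wb") as f:
                for payload in records:          # payload: already-serialized bytes
                    f.write(struct.pack(">I", len(payload)))  # 4-byte big-endian length
                    f.write(payload)

        def read_records(path):
            with open(path, "rb") as f:
                while True:
                    header = f.read(4)
                    if len(header) < 4:
                        break                    # clean end of file
                    (length,) = struct.unpack(">I", header)
                    yield f.read(length)         # one record, without loading the rest

        # Usage: records stream off disk one at a time, never all in RAM.
        write_records("data.bin", [b"rec1", b"rec2"])
        for rec in read_records("data.bin"):
            print(rec)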

  • multi-core processing in R on Windows XP - via doMC and foreach

    - by Jan
    Hi guys, I'm posting this question to ask for advice on how to optimize the use of multiple processors from R on a Windows XP machine. At the moment I'm creating 4 scripts (each script with e.g. for (i in 1:100), for (i in 101:200), etc.) which I run in 4 different R sessions at the same time. This seems to use all the available CPU, but I would like to do it a bit more efficiently. One solution could be to use the "doMC" and "foreach" packages, but this is not possible in R on a Windows machine:

        library("foreach")
        library("strucchange")
        library("doMC")    # would this be possible on a windows machine?
        registerDoMC(2)    # for a computer with two cores (processors)

        ## Nile data with one breakpoint: the annual flows drop in 1898
        ## because the first Ashwan dam was built
        data("Nile")
        plot(Nile)

        ## F statistics indicate one breakpoint
        fs.nile <- Fstats(Nile ~ 1)
        plot(fs.nile)
        breakpoints(fs.nile)   # , hpc = "foreach" --> It would be great to test this.
        lines(breakpoints(fs.nile))

    Any solutions or advice? Thanks, Jan

  • List of all index & index columns in SQL Server DB

    - by Anton Gogolev
    How do I get a list of all indexes and index columns in SQL Server 2005+? The closest I could get is:

        select s.name, t.name, i.name, c.name
        from sys.tables t
            inner join sys.schemas s on t.schema_id = s.schema_id
            inner join sys.indexes i on i.object_id = t.object_id
            inner join sys.index_columns ic on ic.object_id = t.object_id
            inner join sys.columns c on c.object_id = t.object_id and ic.column_id = c.column_id
        where i.index_id > 0
            and i.type in (1, 2) -- clustered & nonclustered only
            and i.is_primary_key = 0 -- do not include PK indexes
            and i.is_unique_constraint = 0 -- do not include UQ
            and i.is_disabled = 0
            and i.is_hypothetical = 0
            and ic.key_ordinal > 0
        order by ic.key_ordinal

    which is not exactly what I want. What I want is to list all user-defined indexes (which means no indexes supporting unique constraints and primary keys) with all columns (ordered by how they appear in the index definition), plus as much metadata as possible.

  • Multi-module Maven build: different result from parent and from module

    - by Albaku
    I am migrating an application from an Ant build to a Maven 3 build. The app is composed of:

        - a parent project specifying all the modules to build
        - a project generating classes with JAXB and building a jar with them
        - a project building an EJB module
        - 3 projects building war modules
        - 1 project building an ear

    Here is an extract from my parent pom:

        <groupId>com.test</groupId>
        <artifactId>P</artifactId>
        <packaging>pom</packaging>
        <version>04.01.00</version>

        <modules>
            <module>../PValidationJaxb</module> <!-- jar -->
            <module>../PValidation</module>     <!-- ejb -->
            <module>../PImport</module>         <!-- war -->
            <module>../PTerminal</module>       <!-- war -->
            <module>../PWebService</module>     <!-- war -->
            <module>../PEAR</module>            <!-- ear -->
        </modules>

    I have several problems which I think have the same origin, probably a dependency-management issue that I cannot figure out:

        - The generated modules are different depending on whether I build from the parent pom or from a single module. Typically, if I build PImport alone, the generated war is similar to what I had with my Ant build; if I build from the parent pom, the war takes 20MB, because a lot of dependencies from other modules get added. Both wars run fine.
        - My project PWebService has unit tests to be executed during the build. It uses mock-ejb, which has cglib as a dependency. Getting a ClassNotFound problem with that one, I had to exclude it and add a dependency on cglib-nodep (see the last pom extract). If I then build only this module, it works well. But if I build from the parent project, it fails because other dependencies in other modules also have an implicit dependency on cglib. I had to exclude it in every module's pom and add the cglib-nodep dependency everywhere to make it run.

    Am I missing something important in my configuration?

    The PValidation pom extract - it creates a jar containing an EJB with interfaces generated by XDoclet, as well as a client jar:

        <parent>
            <groupId>com.test</groupId>
            <artifactId>P</artifactId>
            <version>04.01.00</version>
        </parent>
        <artifactId>P-validation</artifactId>
        <packaging>ejb</packaging>

        <dependencies>
            <dependency>
                <groupId>com.test</groupId>
                <artifactId>P-jaxb</artifactId>
                <version>${project.version}</version>
            </dependency>
            <dependency>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate</artifactId>
                <version>3.2.5.ga</version>
                <exclusions>
                    <exclusion>
                        <groupId>cglib</groupId>
                        <artifactId>cglib</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>cglib</groupId>
                <artifactId>cglib-nodep</artifactId>
                <version>2.2.2</version>
            </dependency>
            ... [other libs] ...
        </dependencies>

        <build>
            ...
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-ejb-plugin</artifactId>
                    <configuration>
                        <ejbVersion>2.0</ejbVersion>
                        <generateClient>true</generateClient>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>xdoclet-maven-plugin</artifactId>
                    ...

    The PImport pom extract - it depends on both the JAXB-generated jar and the EJB client jar:

        <parent>
            <groupId>com.test</groupId>
            <artifactId>P</artifactId>
            <version>04.01.00</version>
        </parent>
        <artifactId>P-import</artifactId>
        <packaging>war</packaging>

        <dependencies>
            <dependency>
                <groupId>com.test</groupId>
                <artifactId>P-jaxb</artifactId>
                <version>${project.version}</version>
            </dependency>
            <dependency>
                <groupId>com.test</groupId>
                <artifactId>P-validation</artifactId>
                <version>${project.version}</version>
                <type>ejb-client</type>
            </dependency>
            ... [other libs] ...
        </dependencies>

    The PWebService pom extract:

        <parent>
            <groupId>com.test</groupId>
            <artifactId>P</artifactId>
            <version>04.01.00</version>
        </parent>
        <artifactId>P-webservice</artifactId>
        <packaging>war</packaging>

        <properties>
            <jersey.version>1.14</jersey.version>
        </properties>

        <dependencies>
            <dependency>
                <groupId>com.sun.jersey</groupId>
                <artifactId>jersey-servlet</artifactId>
                <version>${jersey.version}</version>
            </dependency>
            <dependency>
                <groupId>com.rte.etso</groupId>
                <artifactId>etso-validation</artifactId>
                <version>${project.version}</version>
                <type>ejb-client</type>
            </dependency>
            ... [other libs] ...
            <dependency>
                <groupId>org.mockejb</groupId>
                <artifactId>mockejb</artifactId>
                <version>0.6-beta2</version>
                <scope>test</scope>
                <exclusions>
                    <exclusion>
                        <groupId>cglib</groupId>
                        <artifactId>cglib-full</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>cglib</groupId>
                <artifactId>cglib-nodep</artifactId>
                <version>2.2.2</version>
                <scope>test</scope>
            </dependency>
        </dependencies>

    Many thanks

  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory-leak detection tool for C++ which can be run either from the command line or as an Eclipse plug-in, on Windows and Linux? I would like it to be easy to use - preferably one that doesn't overwrite new(), delete(), malloc() or free(). Something like GDB if it's going to be command line, but I don't remember GDB being used for detecting memory leaks. If there is a unit-testing framework which does this automatically, that would be great. This question is similar to other questions (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c ); however, I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid. I am looking for something a bit more specific here. Suggestions don't have to fulfill all requirements, but as many as possible would be nice. Thanks. EDIT: Since this has come up: by "overwrite" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code. If it does this at run time, so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed, but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use the tool on there, although I would definitely prefer sticking to my desktop, as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around on the desktop. NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and it is satisfactory. EDIT2: I'm not worried about the cross-platform compatibility of my code; it's a command-line application using just the C++ libraries.

  • A quick overview of Facebook's db?

    - by Matt
    Hey guys, I find it hard to believe that Facebook uses simple SQL - surely it would use some other method - but let's assume for now that it does. How would the code assembling the 'wall' work? Let's say there are three tables (just for the example):

        friends:  id (entry key) - uid (your id) - fid (your mate's id)
        wall:     id (entry key) - username - comment - time - commentcount
        comments: id (entry key) - wid (wall id (original comment)) - reply - time

    Let's forget about the like part, reporting etc., as well as mod things (IP, ban etc.). How would this work?

        SELECT wall.id, wall.username, wall.comment, wall.time, wall.commentcount,
               comments.wid, comments.reply, comments.time
        FROM wall
        INNER JOIN comments ON wall.id = comments.wid
        ORDER BY wall.time;

    That's your own wall, but how do they get friends' walls? A heap of unions?
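
    A join through the friends table avoids a heap of unions entirely. A minimal sketch with sqlite3, treating wall.username as the poster's id (an assumption - the schema above doesn't pin that down):

        import sqlite3

        conn = sqlite3.connect("social.db")   # placeholder database file
        my_id = 42                            # placeholder user id

        # One query: every wall post whose author is one of my friends.
        rows = conn.execute("""
            SELECT w.id, w.username, w.comment, w.time
            FROM friends f
            JOIN wall w ON w.username = f.fid
            WHERE f.uid = ?
            ORDER BY w.time DESC
        """, (my_id,)).fetchall()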

  • Avoid loading unnecessary data from db into objects (web pages)

    - by GmGr
    Really newbie question coming up. Is there a standard (or good) way to deal with not needing all of the information that a database table contains loaded into every associated object? I'm thinking in the context of web pages, where you're only going to use the objects to build a single page rather than an application with longer-lived objects. For example, let's say you have an Article table containing id, title, author, date, summary and fullContents fields. You don't need fullContents loaded into the associated objects if you're just showing a page containing a list of articles with their summaries. On the other hand, if you're displaying a specific article you might want every field loaded for that one article, and maybe just the titles for the other articles (e.g. for display in a recent-articles sidebar). Some techniques I can think of:

        1. Don't worry about it, just load everything from the database every time.
        2. Have several different, possibly inherited, classes for each table and create the appropriate one for the situation (e.g. SummaryArticle, FullArticle).
        3. Use one class, but set unused properties to null at creation if that field is not needed, and be careful.
        4. Give the objects access to the database so they can load some fields on demand (see the sketch below).
        5. Something else?

    All of the above seem to have fairly major disadvantages. I'm fairly new to programming, very new to OOP and totally new to databases, so I might be completely missing the obvious answer here. :)
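
    A minimal sketch of option 4, in Python for brevity (names mirror the Article example; `db` is assumed to behave like a sqlite3 connection):

        class Article:
            def __init__(self, db, id, title, author, date, summary):
                self._db = db
                self.id, self.title, self.author = id, title, author
                self.date, self.summary = date, summary
                self._full_contents = None   # not loaded yet

            @property
            def full_contents(self):
                # Fetched on first access, then cached on the object, so a
                # summary listing never pays for the big column.
                if self._full_contents is None:
                    row = self._db.execute(
                        "SELECT fullContents FROM Article WHERE id = ?",
                        (self.id,)).fetchone()
                    self._full_contents = row[0]
                return self._full_contents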

  • Trouble in ActiveX multi-thread invoke javascript callback routine

    - by code0tt
    Hi everyone. I'm getting into some trouble with ActiveX programming with ATL. I'm trying to make an ActiveX control which can async-download files from an HTTP server to a local folder, and after the download it will invoke a JavaScript callback routine. My solution: run a thread M to monitor download thread D; when D finishes the job, M terminates itself and invokes the IDispatch interface to call the JavaScript function. Here is my code:

        /* javascript code */
        function download() {
            var xfm = new ActiveXObject("XFileMngr.FileManager.1");
            xfm.download('http://somedomain/somefile',
                         'localdev:\\folder\localfile',
                         function(msg){ alert(msg); });
        }

        /* C++ code */

        // main routine
        STDMETHODIMP CFileManager::download(BSTR url, BSTR local, VARIANT scriptCallback)
        {
            CString csURL(url);
            CString csLocal(local);
            CAsyncDownload download;
            download.Download(this, csURL, csLocal, scriptCallback);
            return S_OK;
        }

        // parts of CAsyncDownload.h
        typedef struct tagThreadData
        {
            CAsyncDownload* pThis;
        } THREAD_DATA, *LPTHREAD_DATA;

        class CAsyncDownload : public IBindStatusCallback
        {
        private:
            LPUNKNOWN pcaller;
            CString csRemoteFile;
            CString csLocalFile;
            CComPtr<IDispatch> spCallback;
        public:
            void onDone(HRESULT hr);
            HRESULT Download(LPUNKNOWN caller, CString& csRemote,
                             CString& csLocal, VARIANT callback);
            static DWORD __stdcall ThreadProc(void* param);
        };

        // parts of CAsyncDownload.cpp
        void CAsyncDownload::onDone(HRESULT hr)
        {
            if (spCallback) {
                TRACE(TEXT("invoke callback function\n"));
                CComVariant vParams[1];
                vParams[0] = "callback is working!";
                DISPPARAMS params = { vParams, NULL, 1, 0 };
                HRESULT hr = spCallback->Invoke(0, IID_NULL, LOCALE_USER_DEFAULT,
                                                DISPATCH_METHOD, &params,
                                                NULL, NULL, NULL);
                if (FAILED(hr)) {
                    CString csBuffer;
                    csBuffer.Format(TEXT("invoke failed, result value: %d \n"), hr);
                    TRACE(csBuffer);
                } else {
                    TRACE(TEXT("invoke was successful\n"));
                }
            }
        }

        HRESULT CAsyncDownload::Download(LPUNKNOWN caller, CString& csRemote,
                                         CString& csLocal, VARIANT callback)
        {
            CoInitializeEx(NULL, COINIT_MULTITHREADED);
            csRemoteFile = csRemote;
            csLocalFile = csLocal;
            pcaller = caller;
            switch (callback.vt) {
            case VT_DISPATCH:
            case VT_VARIANT:
                spCallback = callback.pdispVal;
                break;
            default:
                spCallback = NULL;
            }
            LPTHREAD_DATA pData = new THREAD_DATA;
            pData->pThis = this;
            // create monitor thread M
            HANDLE hThread = CreateThread(NULL, 0, ThreadProc, (void*)(pData), 0, NULL);
            if (!hThread) {
                delete pData;
                return HRESULT_FROM_WIN32(GetLastError());
            }
            WaitForSingleObject(hThread, INFINITE);
            CloseHandle(hThread);
            CoUninitialize();
            return S_OK;
        }

        DWORD __stdcall CAsyncDownload::ThreadProc(void* param)
        {
            LPTHREAD_DATA pData = (LPTHREAD_DATA)param;
            // here we will create http download thread D;
            // when the download job is finished, call the onDone method
            pData->pThis->onDone(S_OK);
            delete pData;
            return 0;
        }

    OK, above are parts of my source code. If I call the onDone method in the sub-thread, I get an OLE error (-2147418113 (8000FFFF), Catastrophic failure). Did I miss something? Please help me figure it out.

  • Twisted - how to create a multi-protocol process and send data between the protocols

    - by SpankMe
    Hey, I'm trying to write a program that listens for data (simple text messages) on some port (say TCP 6666) and then passes it on to one or more different protocols - IRC, XMPP and so on. I've tried many approaches and dug through the Internet, but I can't find an easy, working solution for such a task. The code I am currently fighting with is here: http://pastebin.com/ri7caXih

    I would like to know how, from an object like:

        ircf = ircFactory('asdfasdf', '#asdf666')

    I can get access to self protocol methods, because this:

        self.protocol.dupa1(msg)

    returns an error about self not being passed to the active protocol object. Or maybe there is another, better, easier and more kosher way to create a single reactor with multiple protocols, have actions triggered when a message arrives on any of them, and then pass that message to the other protocols for handling/processing/sending? Any help will be highly appreciated!
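
    On the last question: a common Twisted pattern is a single reactor with one factory per listening port, all sharing a plain object that relays messages between connected protocols. A minimal sketch (Hub and relay are illustrative names, not Twisted APIs):

        from twisted.internet import protocol, reactor

        class Hub(object):
            def __init__(self):
                self.listeners = []          # currently connected protocol instances
            def relay(self, msg, sender):
                for p in self.listeners:
                    if p is not sender:      # echo to everyone but the origin
                        p.transport.write(msg)

        class RelayProtocol(protocol.Protocol):
            def connectionMade(self):
                self.factory.hub.listeners.append(self)
            def connectionLost(self, reason):
                self.factory.hub.listeners.remove(self)
            def dataReceived(self, data):
                self.factory.hub.relay(data, self)

        def make_factory(hub):
            f = protocol.ServerFactory()
            f.protocol = RelayProtocol
            f.hub = hub                      # shared state reachable from each protocol
            return f

        hub = Hub()
        reactor.listenTCP(6666, make_factory(hub))   # text-message listener
        reactor.listenTCP(6667, make_factory(hub))   # second protocol, same hub
        reactor.run()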

  • Local Data Cache - How do I refresh the local db when I add fields to remote db?

    - by Chu
    I'm using a Local Data Cache in an ASP.NET 3.5 environment. I made a change in my main database by adding a new field. I double-click on my .SYNC file in my project to start the Local Data Cache wizard again. The wizard starts and I click OK in the hope that it'll re-query my database and add the new field to the local database file. Instead, I get an error saying "Synchronizing the database failed with the message: Unable to enumerate changes at the DbServerSyncProvider..." The only way I know to get things working again is to delete the .SYNC file along with the local database and start from scratch. There's got to be an easier way... anyone know it?

  • Django Photologue - use photo with original compression

    - by 123
    Hi, I'm uploading photos with Django Photologue. Is it possible to leave the JPEGs as they are? Even if I tell the photosize to use Highest Quality compression, the files end up having half as many kilobytes as the originals. I must admit that the visible loss of quality is small, but as I am a photographer I would like the images to appear exactly as I edited them (in Photoshop). I don't need any of photosize's cropping and effects tools - can it be turned off completely? Thanks for your answers.
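
    One hedged workaround, from memory of Photologue's models (so treat the names as assumptions): the original upload lives in the Photo model's image field, and only the PhotoSize accessors return re-encoded copies, so referencing the field directly should serve the file byte-for-byte:

        from photologue.models import Photo

        def original_photo_url(pk):
            # `image` holds the file as uploaded; PhotoSize accessors
            # (get_display_url and friends) return re-encoded copies.
            photo = Photo.objects.get(pk=pk)
            return photo.image.url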

  • c# - SQL - speed up code to DB

    - by user228058
    I have a page with 26 sections - one for each letter of the alphabet. I'm retrieving a list of manufacturers from the database and, for each one, creating a link using a different field in the database. Currently I leave the connection open and then do a new SELECT for each letter, WHERE the name LIKE that letter. It's very slow, though. What's a better way to do this? TIA
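
    One common alternative: a single SELECT for all manufacturers, bucketed by first letter in code - a sketch in Python for brevity (table and column names are invented):

        import sqlite3
        from collections import defaultdict

        conn = sqlite3.connect("catalog.db")          # placeholder db
        rows = conn.execute(
            "SELECT Name, LinkField FROM Manufacturers ORDER BY Name").fetchall()

        sections = defaultdict(list)                  # one bucket per letter
        for name, link in rows:
            sections[name[:1].upper()].append((name, link))

        # Render the 26 sections from the in-memory buckets: one round
        # trip to the database instead of 26.
        for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
            for name, link in sections.get(letter, []):
                print(letter, name, link)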

  • how to have a defined connection within a function for PDO communication with the DB

    - by Scarface
    Hey guys, I just started trying to convert my query structure to PDO and I have come across a weird problem. When I call a PDO query inside a function, and the connection is included outside the function, the connection becomes undefined. Anyone know what I am doing wrong here? I was just playing with it; my example is below.

        include("includes/connection.php");

        function query(){
            $user = 'user';
            $id = '100';
            $sql = 'SELECT * FROM users';
            $stmt = $conn->prepare($sql);
            $result = $stmt->execute(array($user, $id));

            // now iterate over the result as if we obtained
            // the $stmt in a call to PDO::query()
            while ($r = $stmt->fetch(PDO::FETCH_ASSOC)) {
                echo "$r[username] $r[id] \n";
            }
        }

        query();

  • How to read a file with variable multi-row data in Python

    - by dr.bunsen
    I have a file that is about 100MB and looks like this:

        #meta data 1
        skadjflaskdjfasljdfalskdjfl
        sdkfjhasdlkgjhsdlkjghlaskdj
        asdhfk
        #meta data 2
        jflaksdjflaksjdflkjasdlfjas
        ldaksjflkdsajlkdfj
        #meta data 3
        alsdkjflasdjkfglalaskdjf

    This file contains rows of metadata, each of which corresponds to a variable number of data rows containing only alphanumeric characters. What is the best way to read this data into a simple list like this:

        data = [['#meta data 1', 'skadjflaskdjfasljdfalskdjflsdkfjhasdlkgjhsdlkjghlaskdjasdhfk'],
                ['#meta data 2', 'jflaksdjflaksjdflkjasdlfjasldaksjflkdsajlkdfj'],
                ['#meta data 3', 'alsdkjflasdjkfglalaskdjf']]

    My initial idea was to use the read() method to read the whole file into memory and then use regular expressions to parse the data into the desired format. Is there a better, more Pythonic way? All metadata lines start with an octothorpe, and all data lines are purely alphanumeric. Thanks!
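
    A minimal line-by-line sketch - no regex, and the file streams through without being read() whole:

        data = []
        with open("input.txt") as f:          # placeholder filename
            for line in f:
                line = line.strip()
                if not line:
                    continue
                if line.startswith("#"):
                    data.append([line, ""])   # a new record starts at each metadata line
                elif data:
                    data[-1][1] += line       # concatenate this record's data lines

        print(data)
        # [['#meta data 1', 'skadjflaskdjfasljdfalskdjflsdkfjh...'], ...]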

  • Multi-account sync with Dropbox API

    - by Dan
    I'm trying to create a web app that lets users share files with each other through Dropbox. At the moment, Dropbox handles all the sharing: there's one central Dropbox account running on the web server that shares the folder with the people who want it. I'm trying to change it so people don't have to accept a new folder invitation each time. I'd like to have them authorize my app to access an app folder in their Dropbox account, and all their shared folders would go inside there. Any changes they make would get noticed by the app on the server and synced to everyone else's folders. There are a couple of things I'm having trouble figuring out to make this work:

        1. Do I need to make repeated calls to /delta for every account? I can't think how else I'd do this, but that sounds like it would quickly turn into thousands of requests a minute just polling for updates.
        2. When someone adds a file, do I have to upload it once for each account? That seems like a huge waste of bandwidth. I've looked into using /copy_ref, which I think would add a file to another user's account without my app ever touching it, but my app's web interface also allows users to upload files directly to my server, which then need to be synced with everyone else's folders. That file isn't on Dropbox's servers yet, so /copy_ref obviously wouldn't work.

    For a little extra context, my app is written in node.js, and I've been playing with this library to interface with Dropbox, which uses their REST API.
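
    On the first question: /delta is indeed cursor-based polling per account, though the cursor means each poll only returns what changed since last time. A sketch of the loop for one account (the client object and its delta method are stand-ins for whatever SDK wrapper is in use - the real app here is node.js):

        import time

        def poll_account(client, cursor=None):
            """Yield (path, metadata) changes for one linked account, forever."""
            while True:
                result = client.delta(cursor)      # mirrors REST POST /delta
                for path, metadata in result["entries"]:
                    yield path, metadata           # metadata is None for deletions
                cursor = result["cursor"]          # resume point for the next call
                if not result["has_more"]:
                    time.sleep(60)                 # back off until the next poll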
