Search Results

Search found 11991 results on 480 pages for 'cedric copy'.


  • Force download works, but the downloaded file shows as invalid when opened locally.

    - by Cody Robertson
    Hi, I wrote this function and everything works well until I try to open the downloaded copy, which then shows that the file is invalid. Here is my function:

        function download_file() {
            // Check for download request:
            if (isset($_GET['file'])) {
                // Make sure there is a file before doing anything
                if (is_file($this->path . basename($_GET['file']))) {
                    // Below required for IE:
                    if (ini_get('zlib.output_compression')) {
                        ini_set('zlib.output_compression', 'Off');
                    }
                    // Set Headers:
                    header('Pragma: public');
                    header('Expires: 0');
                    header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
                    header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $this->path . basename($_GET['file'])) . ' GMT');
                    header('Content-Type: application/force-download');
                    header('Content-Disposition: inline; filename="' . basename($_GET['file']) . '"');
                    header('Content-Transfer-Encoding: binary');
                    header('Content-Length: ' . filesize($this->path . basename($_GET['file'])));
                    header('Connection: close');
                    readfile($this->path . basename($_GET['file']));
                    exit();
                }
            }
        }
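
    Two things commonly produce an "invalid file" download and are worth checking here (a suggestion, not part of the original post): stray output already in PHP's buffer gets prepended to the file, and gmdate() above is given a file path where a Unix timestamp is expected. A minimal sketch of both fixes, reusing the poster's $this->path layout:

        // Sketch only - assumes the same $this->path layout as in the question.
        $file = $this->path . basename($_GET['file']);

        // Last-Modified wants a timestamp, so use filemtime(), not the path:
        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($file)) . ' GMT');

        // Discard anything already buffered so only the file bytes reach the client:
        while (ob_get_level() > 0) {
            ob_end_clean();
        }
        readfile($file);
        exit();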

    Read the article

  • ASP.Net MVC - how can I easily serialize query results to a database?

    - by Mortanis
    I've been working on a little property search engine while I learn ASP.Net MVC. I've gotten the results from various property database tables and sorted them into a master generic property response. The search form is passed via Model Binding and works great. Now, I'd like to add pagination. I'm returning the chunk of properties for the current page with .Skip() and .Take(), and that's working great. I have a SearchResults model that has the paged result set and various other data like nextPage and prevPage. Except, I no longer have the original form of course to pass to /Results/2. Previously I'd have just hidden a copy of the form and done a POST each time, but it seems inelegant. I'd like to serialize the results to my MS SQL database and return a unique key for that results set - this also helps with a "Send this query to a friend!" link. Killing two birds with one stone. Is there an easy way to take an IQueryable result set that I have, serialize it, stick it into the DB, return a unique key and then reverse the process with said key? I'm using Linq to SQL currently on a MS SQL Express install, though in production it'll be on MS SQL 2008.
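
    One approach worth sketching (an addition, not from the original question; the names SavedSearch, SearchDataContext and PropertySearchCriteria are placeholders): persist the bound search criteria rather than the IQueryable itself, keyed by a GUID - the query can always be rebuilt from the criteria, and the key doubles as the "send this query to a friend" link.

        // using System; using System.Linq; using System.Web.Script.Serialization;
        public Guid SaveSearch(PropertySearchCriteria criteria, SearchDataContext db)
        {
            var saved = new SavedSearch
            {
                Id = Guid.NewGuid(),
                CriteriaJson = new JavaScriptSerializer().Serialize(criteria)
            };
            db.SavedSearches.InsertOnSubmit(saved);
            db.SubmitChanges();
            return saved.Id;                     // becomes /Results/{id}?page=2
        }

        public PropertySearchCriteria LoadSearch(Guid id, SearchDataContext db)
        {
            var saved = db.SavedSearches.Single(s => s.Id == id);
            return new JavaScriptSerializer()
                .Deserialize<PropertySearchCriteria>(saved.CriteriaJson);
        }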

    Read the article

  • Extremely CPU Intensive Alarm Clock

    - by SoulBeaver
    For some reason my program, a console alarm clock I made for laughs and practice, is extremely CPU intensive. It consumes about 2 MB of RAM, which is already quite a bit for such a small program, but it devastates my CPU with over 50% of its resources at times. Most of the time my program is doing nothing except counting down the seconds, so I guess this part of my program is the one that's causing so much strain on my CPU, though I don't know why. If so, could you please recommend a way of making it lighter, or perhaps a library to use instead if the problem can't be easily solved?

        /* The wait function waits exactly one second before returning to the
         * calling function. */
        void wait( const int &seconds )
        {
            clock_t endwait; // Type needed to compare with clock()
            endwait = clock() + ( seconds * CLOCKS_PER_SEC );
            while( clock() < endwait ) {} // Nothing need be done here.
        }

    In case anybody browses CPlusPlus.com, this is a genuine copy/paste of the example they have written for clock(), which is why the comment // Nothing need be done here is so lackluster. I'm not entirely sure what exactly clock() does yet. The rest of the program calls two other functions that only activate every sixty seconds, otherwise returning to the caller and counting down another second, so I don't think that's too CPU intensive - though I wouldn't know, this is my first attempt at optimizing code. The first function is a console clear using system("cls") which, I know, is really, really slow and not a good idea. I will be changing that post-haste, but since it only activates every 60 seconds and there is a noticeable lag spike, I know this isn't the problem most of the time. The second function re-writes the content of the screen with the updated remaining time, also only every sixty seconds. I will edit in the function that calls wait, clearScreen and display if it turns out that function is not the problem. I already tried to reference most variables so they are not copied, and to avoid endl as I heard that it's a little slow compared to \n.
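
    The while( clock() < endwait ) {} loop is a busy-wait: the thread never yields, so one core stays pegged for the whole second. A minimal sketch of a sleeping alternative (assuming a C++11 compiler; on older toolchains Sleep() on Windows or usleep() on POSIX play the same role):

        #include <chrono>
        #include <thread>

        // Sleeps instead of spinning, so the OS can schedule other work
        // while the alarm clock counts down.
        void wait(const int &seconds)
        {
            std::this_thread::sleep_for(std::chrono::seconds(seconds));
        }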

    Read the article

  • Would an immutable keyword in Java be a good idea?

    - by berry120
    Generally speaking, the more I use immutable objects in Java the more I think they're a great idea. They have lots of advantages, from automatically being thread-safe to not needing to worry about cloning or copy constructors. This has got me thinking: would an "immutable" keyword go amiss? Obviously there are the disadvantages of adding another reserved word to the language, and I doubt it will actually happen, primarily for the above reason - but ignoring that, I can't really see many disadvantages. At present great care has to be taken to make sure objects are immutable, and even then a dodgy javadoc comment claiming a component object is immutable when it's in fact not can wreck the whole thing. There's also the argument that even basic objects like String aren't truly immutable, because they're easily vulnerable to reflection attacks. If we had an immutable keyword, the compiler could surely recursively check and give an ironclad guarantee that all instances of a class were immutable, something that can't presently be done. Especially with concurrency becoming more and more used, I personally think it'd be good to add a keyword to this effect. But are there any disadvantages or implementation details I'm missing that make this a bad idea?
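
    For context on how much of that "great care" is convention today, a minimal hand-rolled immutable class has to combine several manual rules - a sketch for illustration (not from the original post):

        import java.util.Date;

        // Every guarantee here is by convention; nothing in the language
        // enforces that all three rules are applied consistently.
        public final class Appointment {              // final: no mutable subclass
            private final String title;               // final fields, set once
            private final Date when;                  // Date is mutable, so copy it

            public Appointment(String title, Date when) {
                this.title = title;
                this.when = new Date(when.getTime()); // defensive copy in
            }

            public String getTitle() { return title; }

            public Date getWhen() {
                return new Date(when.getTime());      // defensive copy out
            }
        }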

    Read the article

  • Omit return type in C++0x

    - by Clinton
    I've recently found myself using the following macro with gcc 4.5 in C++0x mode:

        #define RETURN(x) -> decltype(x) { return x; }

    And writing functions like this:

        template <class T>
        auto f(T&& x) RETURN (( g(h(std::forward<T>(x))) ))

    I've been doing this to avoid the inconvenience of having to effectively write the function body twice, and having to keep changes in the body and the return type in sync (which in my opinion is a disaster waiting to happen). The problem is that this technique only works on one-line functions. So when I have something like this (convoluted example):

        template <class T>
        auto f(T&& x) -> ...
        {
            auto y1 = f(x);
            auto y2 = h(y1, g1(x));
            auto y3 = h(y1, g2(x));
            if (y1) { ++y3; }
            return h2(y2, y3);
        }

    Then I have to put something horrible in the return type. Furthermore, whenever I update the function, I'll need to change the return type, and if I don't change it correctly, I'll get a compile error if I'm lucky, or a runtime bug in the worst case. Having to copy and paste changes to two locations and keep them in sync is, I feel, not good practice. And I can't think of a situation where I'd want an implicit cast on return instead of an explicit cast. Surely there is a way to ask the compiler to deduce this information. What is the point of the compiler keeping it a secret? I thought C++0x was designed so such duplication would not be required.
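
    As a side note (not part of the original question): what is asked for here arrived in a later standard - C++14 return type deduction infers the type from the return statements, so the multi-statement case needs no trailing type at all. A sketch, assuming a C++14 compiler and the question's g1/g2/h/h2 helpers:

        // C++14: the return type is deduced from "return h2(y2, y3);",
        // so the body and the declared type can no longer drift apart.
        template <class T>
        auto f(T&& x)
        {
            auto y1 = f2(x);            // renamed from the question's recursive f(x):
            auto y2 = h(y1, g1(x));     // a recursive call isn't allowed before the
            auto y3 = h(y1, g2(x));     // return type has been deduced
            if (y1) { ++y3; }
            return h2(y2, y3);
        }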

    Read the article

  • My dialog box does not show up when I compile it using SDK 7.1

    - by zirek
    Hello and welcome everyone. I'd like to build a Win32 application using SDK 7.1. I created the dialog box using the Visual C++ 2012 resource editor, copied resource.rc and resource.h to my folder, and wrote this simple main.cpp file:

        #include <windowsx.h>
        #include <Windows.h>
        #include <tchar.h>
        #include "resource.h"

        #define my_PROCESS_MESSAGE(hWnd, message, fn)                        \
            case(message):                                                   \
                return(                                                      \
                    SetDlgMsgResult(hWnd, uMsg,                              \
                        HANDLE_##message((hWnd), (wParam), (lParam), (fn)) ))

        LRESULT CALLBACK DlgProc(HWND, UINT, WPARAM, LPARAM);
        BOOL Cls_OnInitDialog(HWND hwnd, HWND hwndFocus, LPARAM lParam);
        void Cls_OnCommand(HWND hwnd, int id, HWND hwndCtl, UINT codeNotify);

        int WINAPI _tWinMain( HINSTANCE hInstance, HINSTANCE, LPTSTR, int iCmdLine )
        {
            DialogBoxParam( hInstance, MAKEINTRESOURCE(IDD_INJECTOR), NULL, (DLGPROC) DlgProc, NULL );
            return FALSE;
        }

        LRESULT CALLBACK DlgProc( HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam )
        {
            switch (uMsg)
            {
                my_PROCESS_MESSAGE(hwnd, WM_INITDIALOG, Cls_OnInitDialog);
                my_PROCESS_MESSAGE(hwnd, WM_COMMAND, Cls_OnCommand);
                default:
                    break;
            }
            return DefWindowProc(hwnd, uMsg, wParam, lParam);
        }

        BOOL Cls_OnInitDialog(HWND hwnd, HWND hwndFocus, LPARAM lParam)
        {
            return TRUE;
        }

        void Cls_OnCommand(HWND hwnd, int id, HWND hwndCtl, UINT codeNotify)
        {
            switch(id)
            {
                case IDCANCEL:
                    EndDialog(hwnd, id);
                    break;
                default:
                    break;
            }
        }

    Then I use the following command line to compile my code, which I found on this forum:

        cl main.cpp /link /SUBSYSTEM:WINDOWS user32.lib

    My problem is that my dialog box does not show up. When I use procexp to see what happens, I can see that my application is created and then closed almost immediately, and what makes me wonder is that it works fine under Visual C++ 2012. My SDK 7.1 is installed correctly; I tested it against a basic window without any resource file. Any ideas? I'll be really thankful. Best, Zirek
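
    One likely culprit (an observation, not from the original post): the command line above compiles and links only main.cpp, so the dialog template from resource.rc never makes it into the executable, and DialogBoxParam then returns -1 without ever showing a window (Visual C++ builds the .rc automatically, which would explain the difference). A minimal sketch of building the resources in by hand:

        rc resource.rc
        cl main.cpp /link /SUBSYSTEM:WINDOWS user32.lib resource.res

    Checking the return value of DialogBoxParam against -1 and calling GetLastError() would confirm whether the IDD_INJECTOR template is actually being found.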

    Read the article

  • Klazuka/Kal incorporation issue

    - by user1292943
    I'm having a little issue. I am trying to incorporate the Klazuka/Kal project into my project. I've done the following:

    - I added the Kal.xcodeproj and all files to my project
    - Under Build Phases, I've added Kal to "Target Dependencies"
    - Under Build Phases, I've added libKal.a under "Link Binary With Libraries"
    - Under Build Phases, I've added Kal.bundle to "Copy Bundle Resources"
    - Under Build Settings, I've added "$(BUILT_PRODUCTS_DIR)" (or "$(BUILT_PRODUCTS_DIR)/static_library_name") for "Header Search Paths" and "User Header Search Paths"
    - Under Build Settings, I've added the path to Kal under "Library Search Paths"
    - Under Build Settings, I've added -ObjC, -all_load, and -force_load under "Other Linker Flags"
    - I've edited my Build Scheme and list the Kal Target prior to my main application target with Analyze, Test, Run, Profile, and Archive all checked.

    I've attempted to follow the steps from here on Stack Overflow: "iphone: Kal calendar not running in xcode 4.2", and here: "Trying to integrate a Calendar library that was built for versions of iOS before iOS5 into my new project in XCode 4 using iOS5 - How to port?", and here: "I added my project and Kal Calendar's project in a workspace, still won't work in Xcode 4", and also on this site: http://blog.carbonfive.com/2011/04/04/using-open-source-static-libraries-in-xcode-4/#configuring_the_projects_scheme

    I try to import the "Kal.h" file but am getting a File Not Found error when I try to build. I'm obviously missing something, just not sure what. Can anyone please help? Thanks for any assistance!!

    Read the article

  • Java Collection performance question

    - by Shervin
    I have created a method that takes two Collection<String> objects as input and copies one to the other. However, I am not sure if I should check whether the collections contain the same elements before I start copying, or if I should just copy regardless. This is the method:

        /**
         * Copies from one collection to the other. Does not allow empty strings.
         * Removes duplicates. Clears the dest Collection first.
         * @param target
         * @param dest
         */
        public static void copyStringCollectionAndRemoveDuplicates(Collection<String> target, Collection<String> dest) {
            if (target == null || dest == null)
                return;

            // Is this faster to do? Or should I just comment this block out
            if (target.containsAll(dest))
                return;

            dest.clear();
            Set<String> uniqueSet = new LinkedHashSet<String>(target.size());
            for (String f : target)
                if (!"".equals(f))
                    uniqueSet.add(f);
            dest.addAll(uniqueSet);
        }

    Maybe it is faster to just remove the if (target.containsAll(dest)) return; check, because this method will iterate over the entire collection anyway.

    Read the article

  • Fixing merge conflicts?

    - by user291701
    I have two remote branches, "grape" and "master". I'm currently on "grape". Now I switch to "master":

        git checkout master

    Now I want to pull all changes from "grape" into "master" - is this the way to do it?

        git merge origin grape

    It's my understanding that git will then pull the current state of the remote branch "grape" into my local copy of "master". It will try to auto-merge for me. If there are conflicts, the files in conflict will have some conflict text actually injected into the file. I then have to go into those files and delete the chunks I don't want (essentially telling git how to merge these files). For each file in conflict, do I add and commit the changes again?

        git add problemfile1.txt
        git commit -m "Fixed merge conflict."
        git add problemfile2.txt
        git commit -m "Fixed another merge conflict."
        ...

    After I've fixed all the merge conflicts like above, do I just push to "master" again to finish up the process?

        git push origin master

    Or is there something else we need to do when we get into this conflict state? Thank you
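
    For reference, the usual flow resolves all conflicts first and then records a single commit that concludes the merge, rather than one commit per file - a sketch, assuming "grape" is tracked as origin/grape:

        git checkout master
        git merge origin/grape        # "git merge origin grape" would try to merge two things at once
        # edit each conflicted file and remove the <<<<<<< / ======= / >>>>>>> markers
        git add problemfile1.txt problemfile2.txt
        git commit                    # one commit finishes the merge
        git push origin master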

    Read the article

  • Modify existing struct alignment in Visual C++

    - by Crend King
    Is there a way to modify the member alignment of an existing struct in Visual C++? Here is the background: I use a 3rd-party library, which uses several structs. To fill up the structs, I pass the address of a struct instance to some functions. Unfortunately, the functions only return an unaligned buffer, so the data of some members are always wrong. /Zp is not an option, since it breaks the other parts of the program. I know #pragma pack modifies the alignment of the following struct, but I do not want to copy the structs into my code, as the definitions in the library might change in the future. Sample code:

    test.h:

        struct am_aligned
        {
            BYTE data1[10];
            ULONG data2;
        };

    test.cpp:

        #include "test.h"

        // typedef alignment(1) struct am_aligned am_unaligned

        int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
        {
            char buffer[20] = {};
            for (int i = 0; i < sizeof(unaligned_struct); i++)
            {
                buffer[i] = i;
            }
            am_aligned instance = *(am_aligned*) buffer;
            return 0;
        }

    instance.data2 is 0x0f0e0d0c, while 0x0d0c0b0a is desired. The commented line does not work, of course. Thanks for help!
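
    One workaround that keeps the library header untouched (a sketch, with the usual caveat that every translation unit must then agree on which packing it sees for these types) is to wrap the #include itself in a packing pragma, so a 1-byte-packed variant of the declarations is produced without copying them:

        // Packed view of the third-party structs, assuming the header is test.h:
        #pragma pack(push, 1)
        #include "test.h"
        #pragma pack(pop)

        // am_aligned declared this way matches the unaligned buffer layout, so the
        // raw bytes can be read out directly and copied member by member into a
        // normally aligned struct if the rest of the program needs one.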

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    (for example, see this Python project howto). My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path. I know I could modify PYTHONPATH and use other search-path related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but it's not realistic to expect your users to do that if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
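
    One common answer (a sketch, assuming the layout above and that both antigravity/ and test/ contain an __init__.py so they are importable as packages) is to run the tests from the project root, where unittest's discovery can find them:

        cd new_project
        python -m unittest discover                 # picks up test/test_antigravity.py
        python -m unittest test.test_antigravity    # or run a single test module

    which keeps "To run the unit tests do X" down to a single command for users.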

    Read the article

  • VS2010 - Using <Import /> to share properties between setup projects?

    - by arex1337
    Why doesn't it work to <Import /> this file, when it works if I replace the statement with just copy-pasting the three properties?

    ../../Setup.Version.proj:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <InstallerMajorVersion>7</InstallerMajorVersion>
            <InstallerMinorVersion>7</InstallerMinorVersion>
            <InstallerBuildNumber>7</InstallerBuildNumber>
          </PropertyGroup>
        </Project>

    Works:

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <InstallerMajorVersion>7</InstallerMajorVersion>
            <InstallerMinorVersion>7</InstallerMinorVersion>
            <InstallerBuildNumber>7</InstallerBuildNumber>
            <OutputName>asdf-$(InstallerMajorVersion).$(InstallerMinorVersion).$(InstallerBuildNumber)</OutputName>
            <OutputType>Package</OutputType>

    Doesn't work:

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Import Project="../../Setup.Version.proj" />
          <PropertyGroup>
            <OutputName>asdf-$(InstallerMajorVersion).$(InstallerMinorVersion).$(InstallerBuildNumber)</OutputName>
            <OutputType>Package</OutputType>

    Here the variables just evaluate to empty strings... :( I'm certain the path to the imported project is correct.
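
    A quick way to see whether the <Import> is resolving at all (a diagnostic sketch, not a known fix) is to echo the properties during the build; since relative imports are resolved against the importing project file, $(MSBuildThisFileDirectory) also makes the path independent of where the build is invoked from:

        <Import Project="$(MSBuildThisFileDirectory)..\..\Setup.Version.proj" />

        <Target Name="ShowVersionProps" BeforeTargets="Build">
          <Message Importance="high"
                   Text="Major=$(InstallerMajorVersion) Minor=$(InstallerMinorVersion) Build=$(InstallerBuildNumber)" />
        </Target>

    If the message already shows empty values, the import (or its evaluation order) is the problem rather than the setup project's use of the properties.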

    Read the article

  • OpenCV. cvFnName() works, but cv::FunName() doesn't work

    - by Innuendo
    I'm using OpenCV to write a plugin for a simulator. I made a standalone OpenCV project (not a plugin) and it works fine. When I added the OpenCV libs to the plugin project, I added all the libs required. Visual Studio 2010 doesn't highlight any code line with red; everything looks fine and compiles fine. But at run time, the program halts with a Runtime Error on any cv:: function - for example cv::imread or cv::imwrite. If I replace them with cvLoadImage() and cvSaveImage(), it works fine. Why does this happen? I don't want to rewrite the whole thing in old-API style (cvFnName); it would mean changing all Mat objects to IplImages, and so on. UPDATE:

        // preparing template
        ifstream ifile(tmplfilename);
        if ( !FILE_LOADED && ifile )
        {
            // loading template file
            Mat tmpl = cv::imread(tmplfilename, 1); // << here occurs error
            FILE_LOADED = true;
        }
        Mat src;
        Bmp2Mat(hDC, hBitmap, src);
        TargetDetector detector(src, tmpl);
        detector.detectTarget();

    If I change it to:

        if ( !FILE_LOADED && ifile )
        {
            IplImage* tmpl = 0;
            tmpl = cvLoadImage(tmplfilename, 1); // no error occurs
        }

    then no error occurs. Earlier it displayed some runtime error; now, when I try to copy the exact message, it just crashes the application (the simulator I am writing the plugin for). It displays an error window asking whether to kill the process or not. (I can't show the exact message because I'm using a Russian version of Windows.)

    Read the article

  • Help me understand WebDAV and Autoversioning

    - by Malfist
    I just read the WebDAV appendix in the O'Reilly Subversion book. I don't quite understand it. It talked about users being able to "mount" WebDAV directories (trees) and manipulate the files like they normally would, and on each save the server would automagically create a new revision. The way it explained it, it sounded like it would work for any program, but then at the end of the appendix it listed a series of programs that work with WebDAV servers, which leads me to think that maybe it doesn't work like it originally described. My question is this: how exactly do you interact with a WebDAV repository? Can I do this, for example: copy a file locally via FTP, edit it with Notepad++, and then upload it via FTP to the server, and have the server do a commit and create a new revision with the file I just edited and uploaded? Also, if that is possible, what happens if two people edit the file locally (on their machines) and upload two revisions to the server? With WebDAV, will I be able to replace Dreamweaver's "Oops, someone edited this before you" with simple FTP uploads and Subversion conflict resolution?

    Read the article

  • How to filter node list based on the contents of another node list

    - by ~otakuj462
    Hi, I'd like to use XSLT to filter a node list based on the contents of another node list. Specifically, I'd like to filter a node list such that elements with identical id attributes are eliminated from the resulting node list. Priority should be given to one of the two node lists. The way I originally imagined implementing this was to do something like this:

        <xsl:variable name="filteredList1" select="$list1[not($list2[@id_from_list1 = @id_from_list2])]"/>

    The problem is that the context node changes in the predicate for $list2, so I don't have access to attribute @id_from_list1. Due to these scoping constraints, it's not clear to me how I would be able to refer to an attribute from the outer node list using nested predicates in this fashion. To get around the issue of the context node, I've tried to create a solution involving a for-each loop, like the following:

        <xsl:variable name="filteredList1">
          <xsl:for-each select="$list1">
            <xsl:variable name="id_from_list1" select="@id_from_list1"/>
            <xsl:if test="not($list2[@id_from_list2 = $id_from_list1])">
              <xsl:copy-of select="."/>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>

    But this doesn't work correctly. It's also not clear to me how it fails... Using the above technique, filteredList1 has a length of 1, but appears to be empty. It's strange behaviour, and anyhow, I feel there must be a more elegant approach. I'd appreciate any guidance anyone can offer. Thanks.
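
    Two observations that may help (offered as a sketch, assuming the attribute names really are @id_from_list1 and @id_from_list2 as written): in XSLT 1.0 the = operator between two node-sets is an existential comparison, so the nested predicate isn't needed at all; and a variable defined with body content (as in the for-each version) becomes a result tree fragment rather than a node-set, which is why it behaves like a single opaque node:

        <!-- keep the $list1 nodes whose id matches no id in $list2 -->
        <xsl:variable name="filteredList1"
                      select="$list1[not(@id_from_list1 = $list2/@id_from_list2)]"/>

    Since this stays a plain select, $filteredList1 remains a real node-set and can be filtered or counted further downstream.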

    Read the article

  • How does Cocoa compare to Microsoft (WPF) and Qt?

    - by Paperflyer
    I have done a few months of development with Qt (built the GUI programmatically only) and am now starting to work with Cocoa. I have to say, I love Cocoa. A lot of the things that seemed hard in Qt are easy with Cocoa. Obj-C seems to be far less complex than C++. This is probably just me, so:

    - How do you feel about this?
    - How does Cocoa compare to WPF (is that the right framework?) and to Qt?
    - How does Obj-C compare to C# and to C++?
    - How does Xcode/Interface Builder compare to Visual Studio and to Qt Creator?
    - How do the documentation sets compare?

    For example, I find Cocoa's outlets/actions far more useful than Qt's signals and slots, because they actually seem to cover most GUI interactions, while I had to work around signals/slots half the time. (Did I just use them wrong?) Also, the standard templates of Xcode give me copy/paste, undo/redo, save/open and a lot of other stuff practically for free, while these were rather complex tasks in Qt. Please only answer if you have actual knowledge of at least two of these development environments/frameworks/languages.

    Read the article

  • C# to unmanaged dll data structures interop

    - by Shane Powell
    I have an unmanaged DLL that exposes a function that takes a pointer to a data structure. I have C# code that creates the data structure and calls the DLL function without any problem. At the point of the call into the DLL the pointer is correct. My problem is that the DLL keeps the pointer to the structure and uses it at a later point in time. When the DLL comes to use the pointer, it has become invalid (I assume the .NET runtime has moved the memory somewhere else). What are the possible solutions to this problem? The ones I can think of are:

    1. Fix the memory location of the data structure somehow? I don't know how you would do this in C#, or even if you can.
    2. Allocate memory manually so that I have control over it, e.g. using Marshal.AllocHGlobal.
    3. Change the DLL function contract to copy the structure data (this is what I'm currently doing as a short-term change, but I don't want to change the DLL at all if I can help it, as it's not my code to begin with).

    Are there any other better solutions?
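
    For options 1 and 2, a hedged sketch of what "fixing" the memory can look like in practice (names like MyNativeStruct and NativeMethods.RegisterData are placeholders, not from the question) - either copy the struct into unmanaged memory the GC never touches, or pin the managed instance in place:

        // using System; using System.Runtime.InteropServices;

        // Option 2: copy into unmanaged memory that the GC will not move.
        // Assumes MyNativeStruct is blittable ([StructLayout(LayoutKind.Sequential)]).
        IntPtr buffer = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(MyNativeStruct)));
        Marshal.StructureToPtr(data, buffer, false);
        NativeMethods.RegisterData(buffer);      // the DLL may now hold this pointer
        // ...only after the DLL is guaranteed to be done with it:
        Marshal.FreeHGlobal(buffer);

        // Option 1: pin the managed instance instead (holds the GC back, so keep
        // the pinned window no longer than the DLL's usage requires).
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        NativeMethods.RegisterData(handle.AddrOfPinnedObject());
        // ...later:
        handle.Free();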

    Read the article

  • PHP code on FTP to back up a DB using a URL

    - by Giom
    I found this code and put it in a backup.php on my FTP space. It works great, but it copies the file into the root folder (homez/).

    1. I would like to put these backup files in homez/backupDB - any idea where to put this path?
    2. The DB backup files (DB.sql.bz2) always have the same name - is it possible to name each one with its creation date? (I launch this PHP script with the URL link.)

        <?
        echo "Votre base est en cours de sauvegarde....... ";   // "Your database is being backed up..."
        $db = "nom_de_ma_base";
        $status = system("mysqldump --host=mysql5-1.perso --user=$_POST[login] --password=$_POST[password] $db > ../$db.sql");
        echo $status;
        echo "Compression du fichier..... ";                    // "Compressing the file..."
        system("bzip2 -f ../$db.sql");
        echo "C'est fini. Vous pouvez récupérer la base par FTP \n ";  // "Done. You can retrieve the database by FTP"
        ?>

    Thanks!
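
    Both points can be handled where the dump path is built (a sketch, assuming homez/backupDB already exists and is writable; adjust the relative path to match the real layout):

        // Dated dump written into the backupDB folder instead of the root:
        $backupDir = '../backupDB';                 // assumed location of homez/backupDB
        $stamp     = date('Y-m-d_H-i-s');           // e.g. 2010-06-14_09-30-12
        $outfile   = "$backupDir/{$db}_{$stamp}.sql";

        system("mysqldump --host=mysql5-1.perso --user=" . escapeshellarg($_POST['login']) .
               " --password=" . escapeshellarg($_POST['password']) .
               " " . escapeshellarg($db) . " > " . escapeshellarg($outfile));
        system("bzip2 -f " . escapeshellarg($outfile));          // produces ..._{date}.sql.bz2

    escapeshellarg() is worth adding in any case, since the login and password come straight from $_POST.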

    Read the article

  • Output problem with a MySQL query in an MFC program

    - by D.Gaughan
    I'm currently working on a small MFC program that outputs data from a MySQL database. I can get output when I'm using an SQL statement that does not contain any variable, e.g. select album from Artists; but when I try to use a variable, the program compiles but I get no output, e.g.

        mysql_perform_query(conn, select album from Artists where artists = '"+m_search_edit"'")

    Here is the function for mysql_perform_query:

        MYSQL_RES* mysql_perform_query(MYSQL *conn, const char* query)
        {
            // send the query to the database
            if (mysql_query(conn, query))
            {
                // printf("MySQL query error : %s\n", mysql_error(conn));
                // exit(1);
            }
            return mysql_use_result(conn);
        }

    And here is the code block for outputting the data:

        struct connection_details mysqlD;
        mysqlD.server = "www.freesqldatabase.com"; // where the mysql database is
        mysqlD.user = "**********";                // the root user of mysql
        mysqlD.password = "***********";           // the password of the root user in mysql
        mysqlD.database = "***************";       // the database to pick

        // connect to the mysql database
        conn = mysql_connection_setup(mysqlD);

        CStringA query;
        query.Format("select album from Artists where artist = '%s'", CT2CA(m_search_edit));
        res = mysql_perform_query(conn, query);
        //res = mysql_perform_query (conn, "select distinct artist from Artists");

        while ((row = mysql_fetch_row(res)) != NULL)
        {
            CString str;
            UpdateData();
            str = ("%s\n", row[0]);
            UpdateData(FALSE);
            m_list_control.AddString(str);
        }

    The m_search_edit variable is the variable for an edit box. I am using Visual Studio 2008 with one copy of this program in Unicode and one non-Unicode; I also have a version built with VC++ 6. Any tips on how I can get output from the database using the m_search_edit variable?
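
    One thing worth ruling out (a suggestion, not a confirmed diagnosis): in MFC the edit-box member only reflects what was typed after data exchange has run, so if UpdateData(TRUE) is never called before the query string is built, CT2CA(m_search_edit) formats an empty string and the WHERE clause matches nothing. A minimal sketch:

        UpdateData(TRUE);   // control -> member variable, so m_search_edit holds the typed text

        CStringA query;
        query.Format("select album from Artists where artist = '%s'", CT2CA(m_search_edit));
        res = mysql_perform_query(conn, query);

    Also note that str = ("%s\n", row[0]); uses the comma operator rather than formatting; str.Format(_T("%s"), CA2CT(row[0])); would be the usual spelling in a Unicode build.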

    Read the article

  • Creating a HTTP handler for IIS that transparently forwards request to different port?

    - by Lasse V. Karlsen
    I have a public web server with the following software installed:

    - IIS7 on port 80
    - Subversion over Apache on port 81
    - TeamCity over Apache on port 82

    Unfortunately, both Subversion and TeamCity come with their own web server installations, and they work flawlessly, so I don't really want to try to move them all to run under IIS, if that is even possible. However, I was looking at IIS and I noticed the HTTP redirect part, and I was wondering... Would it be possible for me to create an HTTP handler and install it on a sub-domain under IIS7, so that all requests to, say, http://svn.vkarlsen.no/anything/here are passed to my HTTP handler, which then subsequently creates a request to http://localhost:81/anything/here, retrieves the data, and passes it on to the original requester? In other words, I would like IIS to handle transparent forwards to ports 81 and 82, without using the redirection features. For instance, Subversion doesn't like HTTP redirects and just says that the repository has been moved and I need to relocate my working copy. That's not what I want. If anyone thinks this can be done, does anyone have any links to topics I need to read up on? I think I can manage the actual request parts, even with authentication, but I have no idea how to create an HTTP handler. Also bear in mind that I need to handle sub-paths and documents beneath the top-level domain, so http://svn.vkarlsen.no/whatever/here needs to be handled by a single handler; I cannot create copies of the handler for all sub-directories, since paths are created from time to time.
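
    For the "how to create an HTTP handler" part, the skeleton is small - a pass-through sketch (GET only, no authentication or WebDAV verbs, so not enough for Subversion on its own; localhost:81 is taken from the question):

        using System.IO;
        using System.Net;
        using System.Web;

        // Registered for path="*" in web.config of the svn sub-domain's site.
        public class ProxyHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Rebuild the incoming URL against the backend on port 81.
                string target = "http://localhost:81" + context.Request.Url.PathAndQuery;

                HttpWebRequest backend = (HttpWebRequest)WebRequest.Create(target);
                backend.Method = context.Request.HttpMethod;

                using (HttpWebResponse response = (HttpWebResponse)backend.GetResponse())
                using (Stream body = response.GetResponseStream())
                {
                    context.Response.StatusCode = (int)response.StatusCode;
                    context.Response.ContentType = response.ContentType;

                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        context.Response.OutputStream.Write(buffer, 0, read);
                    }
                }
            }
        }

    In practice Subversion also sends PROPFIND, REPORT and request bodies, so headers and the request body would need to be streamed both ways as well; the IIS Application Request Routing / URL Rewrite reverse-proxy modules do the same job without custom code.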

    Read the article

  • More efficient R / Sweave / TeXShop work-flow?

    - by user594795
    I've now got everything to work properly on my Mac OS X 10.6 machine, so that I can create decent looking LaTeX documents with Sweave that include snippets of R code, output, and LaTeX formatting together. Unfortunately, I feel like my work-flow is a bit clunky and inefficient:

    1. Using TextWrangler, I write LaTeX code and R code (surrounded by <<= above and @ below each R code chunk) together in one .Rnw file.
    2. After saving changes, I call the .Rnw file from R using the Sweave command:

        Sweave(file="/Users/mymachine/Documents/Assign4.Rnw", syntax="SweaveSyntaxNoweb")

    3. In response, R outputs the following message:

        You can now run LaTeX on 'Assign4.tex'

    4. So then I find the .tex file (Assign4.tex) in the R directory and copy it over to the folder in my documents (~/Documents/) where the .Rnw file is sitting (to keep everything in one place).
    5. Then I open the .tex file (e.g. Assign4.tex) in TeXShop and compile it there into PDF format.

    It is only at this point that I get to see any changes I have made to the document and see if it 'looks nice'. Is there a way that I can compile everything with one button click? Specifically, it would be nice to either call Sweave / R directly from TextWrangler or TeXShop. I suspect it might be possible to code a script in Terminal to do it, but I have no experience with Terminal. Please let me know if there are any other things I can do to streamline or improve my work flow.
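
    The copy step and the two applications can be collapsed into a single Terminal command run from the folder that holds the .Rnw file (a sketch, assuming R and a TeX distribution such as MacTeX are on the PATH; Sweave then writes Assign4.tex next to the .Rnw, so nothing needs to be moved):

        cd ~/Documents
        R CMD Sweave Assign4.Rnw     # runs the R chunks and produces Assign4.tex
        pdflatex Assign4.tex         # produces Assign4.pdf for preview

    TeXShop can also be pointed at a small shell script like this as a custom engine, which gives the one-button-click behaviour from inside the editor.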

    Read the article

  • Parse a large XML file with a script, or use the BioPython API?

    - by jeremy04
    Hey guys, this is my first question on here. I'm trying to make a local copy of the UniProtKB in SQL. The UniProtKB is 2.1 GB, and it comes in XML and a special text format used by SwissProt. Here are my options:

    1. Use a SAX parser (XML) - I chose Ruby and Nokogiri. I started writing the parser, but my initial reaction: how would I map the XML schema to the SAX parser?
    2. BioPython - I already have BioSQL/Biopython installed, which literally created my SQL schema for me, and I was able to successfully insert one SwissProt/UniProt txt file into the database. I'm running it right now (crosses fingers) on the entire 2.1 GB. Here is the code I'm running:

        from Bio import SeqIO
        from BioSQL import BioSeqDatabase
        from Bio import SwissProt

        server = BioSeqDatabase.open_database(driver = "MySQLdb", user = "root",
                                              passwd = "", host = "localhost", db = "bioseqdb")
        db = server["uniprot"]

        iterator = SeqIO.parse(open("/path/to/uniprot_sprot.dat", "r"), "swiss")
        db.load(iterator)
        server.commit()

    Edit: it's now crashing because the transactions are getting locked (since the tables are InnoDB):

        Error Number: 1205 Lock wait timeout exceeded; try restarting transaction.

    I'm using MySQL version 5.1.43. Should I switch my database to PostgreSQL?

    Read the article

  • C++ Storing variables and inheritance

    - by Kaa
    Hello everyone, here is my situation: I have an event-driven system, where all my handlers are derived from an IHandler class and implement an onEvent(const Event &event) method. Event is a base class for all events and contains only the enumerated event type. All actual events are derived from it, including the EventKey event, which has 2 fields: (uchar) keyCode and (bool) isDown. Here's the interesting part: I generate an EventKey event using the following syntax:

        Event evt = EventKey(15, true);

    and I ship it to the handlers:

        EventDispatch::sendEvent(evt); // void EventDispatch::sendEvent(const Event &event);

    (EventDispatch contains a linked list of IHandlers and calls their onEvent(const Event &event) method with the parameter containing the sent event.) Now the actual question: say I want my handlers to poll the events from a queue of type Event - how do I do that?

    x Dynamic pointers with reference counting sound like too big of a solution.
    x Making copies is more difficult than it sounds, since I'm only receiving a reference to a base type; therefore each time I would need to check the type of the event, downcast to EventKey and then make a copy to store in a queue. Sounds like the only solution - but it is unpleasant since I would need to know every single type of event and would have to check that for every event received - sounds like a bad plan.
    x I could allocate the events dynamically and then send around pointers to those events, enqueuing them in the queue if wanted - but other than having reference counting, how would I be able to keep track of that memory? Do you know any way to implement a very light reference counter that wouldn't interfere with the user?

    What do you think would be a good solution to this design? I thank everyone in advance for your time. Sincerely, Kaa
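
    One hedged sketch of the pointer route (assuming C++11; std::tr1::shared_ptr or boost::shared_ptr work the same way on older compilers): shared_ptr already is the "very light reference counter", and keeping pointers rather than Event values in the queue also avoids the slicing that happens in Event evt = EventKey(15, true);, where the EventKey part is cut off. The stubs below stand in for the classes described in the post:

        #include <memory>
        #include <queue>

        struct Event { virtual ~Event() {} };                  // stand-in for the real base class
        struct EventKey : Event {
            unsigned char keyCode;
            bool isDown;
            EventKey(unsigned char k, bool d) : keyCode(k), isDown(d) {}
        };

        typedef std::shared_ptr<Event> EventPtr;

        std::queue<EventPtr> pending;                          // handlers poll from this

        void sendEvent(const EventPtr &event) { pending.push(event); }

        // usage: the concrete type survives, and the memory frees itself
        // once the last queue/handler drops its reference.
        // sendEvent(std::make_shared<EventKey>(15, true));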

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't require loading ALL these objects at once and keeping them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, what the best way is to reload the previously observed data: re-run the parser / populate the collection / populate the GUI? Or find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset will contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so this might offer an alternative that allows me to keep the collection in memory. This is just general talking. The question of what collection to use is a separate and complex thing. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done with similar scenarios. I can provide more specifics if needed.

    Read the article

  • NoClassDefFoundError when trying to reference external jar files

    - by opike
    I have some 3rd party jar files that I want to reference in my tomcat web application. I added this line to catalina.properties:

        shared.loader=/home/ollie/dev/java/googleapi_samples/gdata/java/lib/*.jar

    but I'm still getting this error:

        org.apache.jasper.JasperException: javax.servlet.ServletException: java.lang.NoClassDefFoundError: com/google/gdata/util/ServiceException
            org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:491)
            org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:401)
            org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
            org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

    I verified that the com.google.gdata.util.ServiceException is in the gdata-core-1.0.jar file, which is in the directory /home/ollie/dev/java/googleapi_samples/gdata/java/lib. I did bounce tomcat after I modified catalina.properties.

    Update 1: I tried copying the gdata-core-1.0.jar file into /var/lib/tomcat6/webapp/examples/WEB-INF/lib as a test but that didn't fix the problem either.

    Update 2: It actually does work when I copy the jar file directly into the WEB-INF/lib directory. There was a permissions issue that I had to resolve. But it's still not working when I use the shared.loader setting. I reconfirmed that the path is correct.

    Read the article
