Search Results

Search found 4458 results on 179 pages for 'individual improvement'.

Page 113/179

  • How do I specify (1) an order and (2) a meaningful string representation for users in my Django application?

    - by David Faux
    I have a Django application with users. I have a model called "Course" with a foreign key called "teacher" to the default User model that Django provides: class Course(models.Model): ... teacher = models.ForeignKey(User, related_name='courses_taught') When I create a model form to edit information for individual courses, the possible users for the teacher field appear in a long select menu of user names. These users are ordered by ID, which is of meager use to me. How can I (1) order these users by their last names and (2) change the string representation of the User class to be "Firstname Lastname (username)" instead of "username"?
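
    One way to address both parts (not from the original thread, just a sketch of a common approach): override the teacher field on the ModelForm with a ModelChoiceField subclass that orders its queryset and formats its labels. The module path and field list below are placeholder assumptions.

        from django import forms
        from django.contrib.auth.models import User
        from myapp.models import Course  # placeholder import path for the Course model above

        class TeacherChoiceField(forms.ModelChoiceField):
            # Render each user as "Firstname Lastname (username)" in the select menu
            def label_from_instance(self, user):
                return "%s %s (%s)" % (user.first_name, user.last_name, user.username)

        class CourseForm(forms.ModelForm):
            # Order the choices by last name instead of the default ordering by id
            teacher = TeacherChoiceField(
                queryset=User.objects.order_by('last_name', 'first_name'))

            class Meta:
                model = Course
                fields = ['teacher']  # plus whatever other fields the form edits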

    Read the article

  • Would this union work if char had stricter alignment requirements than int?

    - by paxdiablo
    Recently I came across the following snippet, which is an attempt to ensure all bytes of i (and no more) are accessible as individual elements of c: union { int i; char c[sizeof(int)]; }; Now this seems like a good idea, but I wonder if the standard allows for the case where the alignment requirements for char are more restrictive than those for int. In other words, is it possible to have a four-byte int which is required to be aligned on a four-byte boundary with a one-byte char (it is one byte, by definition, see below) required to be aligned on a sixteen-byte boundary? And would this stuff up the use of the union above? Two things to note. I'm talking specifically about what the standard allows here, not what a sane implementor/architecture would provide. I'm using the term "byte" in the ISO C sense, where it's the width of a char, not necessarily 8 bits.

    Read the article

  • rsync useful w/ encrypted files?

    - by barrycarter
    Is rsync efficient for transferring encrypted files? More specifically: I encrypt 'x' with my public key and call the result 'y'. I rsync 'y' to my backup server. 'x' changes slightly. I encrypt the modified 'x' and rsync the modified 'y' to my backup server. Is this efficient? I know a small change in 'x' yields a large change in 'y', but is the change localized? Or has 'y' changed so thoroughly that rsync is not much better than scp? I currently back up my "critical" files by tarring/bzipping them nightly, then encrypting the .tar.bz file and rsync'ing it to my backup server. Many of the individual files don't change, but, of course, the tar file changes if even one of the files changes. Is this efficient? Should I be encrypting and backing up each file individually? That way, unchanged files will take no time to rsync.
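
    For the "encrypt and back up each file individually" idea, a rough Python sketch of what that could look like, so rsync only ever transfers the per-file blobs that actually changed. It assumes GnuPG is installed and that RECIPIENT, SRC, and DST are placeholders to fill in; this is a sketch, not a vetted backup tool.

        import os
        import subprocess

        RECIPIENT = "backup@example.com"   # placeholder: gpg key to encrypt to
        SRC = "/home/me/critical"          # placeholder: plaintext source tree
        DST = "/home/me/critical.gpg"      # mirror tree of per-file encrypted blobs

        for root, dirs, files in os.walk(SRC):
            for name in files:
                src = os.path.join(root, name)
                out = os.path.join(DST, os.path.relpath(src, SRC)) + ".gpg"
                os.makedirs(os.path.dirname(out), exist_ok=True)
                # only re-encrypt files that changed since the last run
                if not os.path.exists(out) or os.path.getmtime(src) > os.path.getmtime(out):
                    subprocess.run(["gpg", "--yes", "--output", out,
                                    "--encrypt", "--recipient", RECIPIENT, src],
                                   check=True)
        # afterwards: rsync -a /home/me/critical.gpg/ backupserver:critical.gpg/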

    Read the article

  • std::ifstream buffer caching

    - by ledokol
    Hello everybody, In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works pretty slowly on big files, and since I don't see any other choice (the iteration has to be done) I'm trying to optimize file loading. I can use some amount of RAM for buffering. Instead of reading 4 bytes from both files every time, I could read something like 100Mb once and work with that buffer, and when there is no element left in the buffer, refill it. But I guess ifstream is already doing that; will doing it myself give me more performance, and is there any reason to? If fstream does buffer, maybe I can change the size of that buffer? added My current code looks like this (pseudocode): // this is done in a loop int i1 = input1.read_integer(); int i2 = input2.read_integer(); if (!input1.eof() && !input2.eof()) { if (i1 < i2) { output.write(i1); input2.seek_back(sizeof(int)); } else { input1.seek_back(sizeof(int)); output.write(i2); } } else { if (input1.eof()) output.write(i2); else if (input2.eof()) output.write(i1); } What I don't like here: (1) seek_back: I have to seek back to the previous position, as there is no way to peek 4 bytes; (2) too much reading from the file; (3) if one of the streams is at EOF, it still continues to check that stream instead of putting the contents of the other stream directly into the output, but this is not a big issue because the chunk sizes are almost always equal. Can you suggest an improvement for that? Thanks.
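
    For what it's worth, the seek_back can be avoided entirely by holding the last value read from each stream in a variable and only refilling the stream whose value was just written; the stream's own buffering then takes care of the bulk reads. A rough illustration of that merge loop, sketched in Python purely to show the control flow (the little-endian 4-byte integer layout is an assumption):

        import struct

        def merge_sorted_int_files(path_a, path_b, path_out):
            # Keep one pending value per input; only read again from the stream whose
            # value was written, so no seeking back is ever needed.
            with open(path_a, "rb") as a, open(path_b, "rb") as b, open(path_out, "wb") as out:
                def next_int(f):
                    raw = f.read(4)
                    return struct.unpack("<i", raw)[0] if len(raw) == 4 else None

                va, vb = next_int(a), next_int(b)
                while va is not None and vb is not None:
                    if va <= vb:
                        out.write(struct.pack("<i", va))
                        va = next_int(a)
                    else:
                        out.write(struct.pack("<i", vb))
                        vb = next_int(b)
                # drain whichever input still has data left
                while va is not None:
                    out.write(struct.pack("<i", va))
                    va = next_int(a)
                while vb is not None:
                    out.write(struct.pack("<i", vb))
                    vb = next_int(b)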

    Read the article

  • efficient video format/codec for sparse & binary blob tracking

    - by user391339
    I am working on a blob tracking project and have many high-definition videos that I would like to reduce in size for storage and downstream tracking/shape-analysis. I want to use a lossless method that takes advantage of the black and white nature of the video as well as the fact that not much is moving between individual frames. The videos are quite sparse, with 5 to 10 b&w blobs per frame occupying <30% of the space in total, with each blob moving <5-10% of the field of view between frames and not changing shape too much between 2-3 frames. I will work in Python, Matlab, or LabView for this project, and could use a batch utility if available. It may be worthwhile to export the files as compressed image stacks if a proper video format can't be found. What are the pros and cons of this? A video codec uses correlations between neighboring frames, so it should be more efficient, but not if the wrong one is chosen or if it is improperly configured.
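
    For the compressed-image-stack alternative mentioned above, a rough Python sketch (OpenCV is an assumption here; any frame reader would do) that dumps each frame to a lossless PNG. PNG compresses the mostly-black binary frames very well, though unlike a video codec it cannot exploit the redundancy between frames:

        import os
        import cv2  # OpenCV is an assumption; any frame reader would work

        def video_to_png_stack(video_path, out_dir):
            # Dump every frame as a lossless PNG for storage and later tracking/shape analysis
            os.makedirs(out_dir, exist_ok=True)
            cap = cv2.VideoCapture(video_path)
            i = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # blobs are b&w, keep one channel
                cv2.imwrite(os.path.join(out_dir, "frame_%06d.png" % i), gray)
                i += 1
            cap.release()

        video_to_png_stack("blobs_hd.avi", "blobs_hd_frames")  # placeholder file names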

    Read the article

  • Do I have to duplicate this function? - jQuery

    - by Josh
    I'm using this function to create a transparent overlay of information over the current div for a web-based mobile app. Background: I'm using jQTouch, so I have separate divs, not individual pages being loaded. $(document).ready(function() { $('.infoBtn').click(function() { $('#overlay').toggleFade(400); return false; }); }); Understanding that JS runs sequentially: when I click the button on the first div, the function works fine. When I go to the next div and click the same button, nothing "happens" while this div is displayed, but if I go back to the first div, it has actually been triggered on that page. So I logically duplicated the function and changed the CSS selector names, and it works for both. But do I have to do this for each use? Is there a way to use the same selectors, but load the different content in each variation?

    Read the article

  • Implementing a 'many-to-many' database

    - by Raven Dreamer
    Greetings, stack*overflow* In my database, I already have one table, 'contacts' that contains records of individual people. I also have several other tables in my database which represent "skill sets" that contain records denoting a particular skill. 1) Am I correct in plotting this as a "many-to-many" relationship? (each contact can have multiple skill sets, and each skill set can belong to multiple contacts) 2) I'm new to databases -- do I want to link the tables? 3) Is there a way to implement this in my program (C# + windows forms) such that for any given record in the 'contacts' table, either the names of all associated 'skill set' tables or all the 'skill' records associated with the 'contact' record could be retrieved? (Database is located on SQL Server Express 2008)

    Read the article

  • How to avoid web server traffic peak resulting from iOS Newsstand app receiving a remote notification?

    - by thomers
    I'm developing an iOS Newsstand app. If the app is suspended or not running and the device is connected to a WLAN, Newsstand apps can be triggered by a remote push notification to download the latest issue (in our case around 100MB) in the background. I'm using Urban Airship for the delivery of the push broadcast. I'm now worried about many, many iOS devices hitting the web server for one big download more or less at the same time, because I expect the majority of the devices will receive the notification in a very short timeframe. Instead of broadcasting to all devices, should I rather send individual notifications to small batches of devices, spreading them out over a longer period of time? And/or would a CDN like Amazon CloudFront solve that issue more easily anyway?

    Read the article

  • Searching in Rails 3

    - by cbarton
    Here is the situation: I need to search for specifics or generals, much like expedia.com or your local library catalog search, in that there are fields that may or may not be filled out based on the individual need. It acts much like an "advanced search page". I have all of that code done for the models/controller and such, and have it set up with a Searches controller along with a matching model and views. The question I have: is there a way to cut down the number of parameters sent in the GET request so that the URL isn't search?search[:a]=""+search[:b]=""+search[:c]="x"... plus the unicode checkmark? I only want the filled-out fields to be shown in the resulting link, and especially without the checkmark. cbarton

    Read the article

  • Equal height divs with footer always at bottom of tallest div

    - by nukegara
    I was wondering if this were possible, and if it is the best way to go about this: example image (since I'm not allowed to post pics yet ^^) So, not only does each column have to be of equal height, but each column also has its own individual footer. I saw this post httx://stackoverflow.com/questions/1347366/div-at-bottom-of-window-and-adaptable-height-div (I also can't post more than one link....) - how could I rework this technique to apply to the bottom of the divs and not the bottom of the window? *edit - each column will have content that will constantly change and be of variable height. I'm thinking I could just figure out the equal height columns first, then just absolute position a footer div within those columns. Does its parent div then have to be position: relative?

    Read the article

  • How to implement copy protection of content in an open source application?

    - by Lococo
    I have an idea for an open source app -- the app would be free, but I would charge a small fee for data that a customer would order. For instance, let's say I'm writing a map application. I'd give the app away, make it open-source, but I would like to sell various maps to individual users. Is there a way to protect the data in such a way that makes it very difficult for someone to simply take the map they bought and distribute it to others? Is this feasible for an open source app?

    Read the article

  • Indexing table with duplicates MySQL/MSSQL with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns: ID Store_ID Feature_ID Order_ID Viewed_Date Deal_ID IsTrial The ID is auto-generated. Store_ID goes from 1 - 8. Feature_ID from 1 - let's say 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs to view the number of views in a certain period (or overall) where trial is 0 for a particular store id and for a particular feature. The query takes the form of: select count(viewed_date) from theTable where viewed_date between '2009-12-01' and '2010-12-31' and store_id = '2' and feature_id = '12' and Istrial = 0 In MSSQL you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need more improvement than this. Right now I have more than 4 million rows. To answer a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS. I forgot to add the viewed_date filter in the query. Now I have done this.

    Read the article

  • A gridview for debugging collections?

    - by Llewellyn
    Hi Guys, I'm working with fairly large generic collections and I need a quick way to view all the items and their properties in these collections while debugging. When I say view all the items, I mean I would like to view the collection as if it were bound to, say, a gridview. That way I could see all the item properties listed. Currently VS2010 displays the collection object during debugging, but it takes several clicks before I can view any item's properties within the collection. As I'm using collections with 50 to 100 items in them, I'm having a hard time getting a feel for the collection data because of having to click through to view each individual item's properties during debugging. Do you have any ideas, or know of a Visual Studio plugin, that can help display collections in a table or gridview format? Thanks for your time

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • Adding links to this PHP statement

    - by poindexter
    Here is a PHP statement that basically sets off a JavaScript image slider. The only problem is I can't figure out how to get each individual image to link to a different page. Any tips or suggestions? I appreciate it! <?php jsbrotate('height=316&width=924&imgdisp=4&imgfade=2&images=/wp-content/uploads/image_1_linked.png|/wp-content/uploads/image_2.png|/wp-content/uploads/image_3.png|/wp-content/uploads/image_4.png|/wp-content/uploads/image_5.png'); ?>

    Read the article

  • Reverse engineering a custom data file

    - by kerchingo
    At my place of work we have a legacy document management system that for various reasons is now unsupported by the developers. I have been asked to look into extracting the documents contained in this system to eventually be imported into a new 3rd party system. From tracing and process monitoring I have determined that the document images (mainly tiff files) are stored in a number of 1.5GB files. These files seem to be read from a specific offset and then written to a tmp file that is then served via a web app to the client, and then deleted. I guess I am looking for suggestions as to how I can inspect these large files that contain the tiff images, and eventually extract and write them to individual files.
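
    One way to start inspecting those 1.5GB container files (a hedged sketch, not specific to this document management system): scan for the TIFF byte-order signatures and carve out the byte ranges between consecutive hits. This assumes the images are stored back-to-back and uncompressed in the container, which may not hold here, and false positives on the 4-byte signature are possible:

        import mmap

        TIFF_MAGICS = (b"II*\x00", b"MM\x00*")  # little- and big-endian TIFF header signatures

        def carve_tiffs(container_path, out_prefix):
            with open(container_path, "rb") as f, \
                 mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as data:
                # collect every offset at which a TIFF header signature starts
                offsets = []
                for magic in TIFF_MAGICS:
                    pos = data.find(magic)
                    while pos != -1:
                        offsets.append(pos)
                        pos = data.find(magic, pos + 1)
                offsets.sort()
                offsets.append(len(data))
                # write the bytes between consecutive signatures to numbered files;
                # trailing padding or interleaved metadata would still need trimming
                for i, (start, end) in enumerate(zip(offsets, offsets[1:])):
                    with open("%s_%05d.tif" % (out_prefix, i), "wb") as out:
                        out.write(data[start:end])

        carve_tiffs("docstore_0001.dat", "extracted_img")  # placeholder file names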

    Read the article

  • How do I efficiently parse a CSV file in Perl?

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is, what is the most time-efficient means of parsing a large CSV file using only built-in tools? note: Each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although it might work effectively. edit It can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN). another edit Let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory. yet another edit I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.

    Read the article

  • Where and how to store functions in asp.net.vb

    - by si
    Hi. I am using Visual Studio 2008 with ASP.NET/VB. I have various functions: one that calculates totals, one that calculates averages, one that calculates percentages, etc. The data they act upon is different depending on the particular web page, but all my pages use these functions. So I want to know how I can store these functions in a single place so that all of my web pages can access the routines. At the moment all I know how to do is write separate functions in the code-behind for each individual web page. Obviously that is just long-winded and inefficient, but try as I might I can't figure out how to do it better. Please can anyone help? Many thanks.

    Read the article

  • Covariant return types in Java enums

    - by Kelvin Chung
    As mentioned in another question on this site, something like this is not legal: public enum MyEnum { FOO { public Integer doSomething() { return (Integer) super.doSomething(); } }, BAR { public String doSomething() { return (String) super.doSomething(); } }; public Object doSomething(); } This is due to covariant return types apparently not working on enum constants (again breaking the illusion that enum constants are singleton subclasses of the enum type...) So, how about we add a bit of generics: is this legal? public enum MyEnum2 { FOO { public Class<Integer> doSomething() { return Integer.class; } }, BAR { public Class<String> doSomething() { return String.class; } }; public Class<?> doSomething(); } Here, all three return Class objects, yet the individual constants are "more specific" than the enum type as a whole...

    Read the article

  • Find similar or "like" text and replace it with another in Excel

    - by andreas
    Does anyone know how I can find similar descriptions in Excel and replace them with one other description? Is there a wildcard? I am trying to make a pivot chart with a list of transactions and their descriptions, and I want to group all my ATM withdrawals, but I can't. On the pivot chart they appear as ATM Withdrwal-REF-1234, and each of these withdrawals has a different reference, so as a result they show up as individual items on the chart... how can I group, say, all my ATM withdrawals as one ATM Withdrawal item so that a single ATM withdrawal item shows on my pivot chart?

    Read the article

  • How to create a reusable form using Cocoa bindings.

    - by Juliano Sott
    hi. I want to make a user interface in which the user can edit two objects at the same time. The main window would have a vertical split view and a form on each side of the view. The problem is that the two forms are identical and I don't want to duplicate the view components in Interface Builder. I want to create the form once and add a reference to it on each side of the split view, each one using a different object source. I could use an NSForm, but the form is not a simple grid of output text fields and input text fields. It has a master table and diverse kinds of input types, like combos, in the detail. How do I create the reusable form using Interface Builder? Or how can I do it programmatically? Do I have to create a subclass of NSView and add the individual components in code? Thanks, Juliano

    Read the article

  • Mirroring a portion of the screen to an external display (in OSX)

    - by Adam
    I would like to write a program that can mirror a portion of the main display into a new window. Ideally this new window could then be displayed on an external monitor. I have seen a utility for a flight sim that does this on a PC (a multifunction display extractor): MFDex http://trac2.assembla.com/lightnings..._x86_Setup.msi I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window management stage? Any ideas or pointers would be much appreciated. I just need somewhere to start from. Regards,

    Read the article

  • Uploading a Website

    - by 01010011
    Hi, This is my first time building a website, and I'm using CodeIgniter for a school project. I was wondering whether you have any tips on uploading CI to a free web host, uploading my database, choosing a free web host, and basic security. Can I just upload the entire CI folder? Or do I have to upload individual files (God no!)? What are my options? What about my MySQL database - do I just upload my mysqldump to the webhost? Also, can you recommend a good free webhost? I was thinking about 000webhost. Any basic tips on security would also be appreciated (I've implemented many of the form_validation rules like xss_clean for starters). Any other suggestions will be more than welcome. Thanks!

    Read the article

  • Scraping HTML WITHOUT unique identifiers using Python

    - by Nicholas Law
    I would like to design an algorithm using Python that scrapes thousands of pages like this one and this one, gathers all the data and inserts it into a MySQL database. The script will be run on a weekly or bi-weekly basis to update the database with any new information added to each individual page. Ideally I would like a scraper that is easy to work with for table-structured data, but also for data that does not have unique identifiers (i.e. id and class attributes). Which scraper add-on should I use? BeautifulSoup, Scrapy or Mechanize? Are there any particular tutorials/books I should be looking at for this desired result? In the long run I will be implementing a mobile app that works with all this data by querying the database.
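
    As an illustration of one of the options listed above, a hedged BeautifulSoup sketch that selects a table by position rather than by id/class and stages its rows for a MySQL insert. The URL, table index, connection details, and schema are all placeholder assumptions:

        import requests
        from bs4 import BeautifulSoup
        import mysql.connector  # mysql-connector-python; any MySQL driver would do

        URL = "http://example.com/stats/player/123"  # placeholder page

        html = requests.get(URL, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")

        # No unique ids/classes: pick the table by position and the rows by structure instead
        table = soup.find_all("table")[2]             # placeholder: third table on the page
        rows = []
        for tr in table.find_all("tr")[1:]:           # skip the header row
            cells = [td.get_text(strip=True) for td in tr.find_all(["td", "th"])]
            if cells:
                rows.append(cells)

        conn = mysql.connector.connect(host="localhost", user="scraper",
                                       password="secret", database="stats")  # placeholders
        cur = conn.cursor()
        cur.executemany("INSERT INTO page_rows (col_a, col_b, col_c) VALUES (%s, %s, %s)",
                        [tuple(r[:3]) for r in rows])  # placeholder schema
        conn.commit()
        conn.close()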

    Read the article

  • Manage build target variables in an iPhone project

    - by DougW
    We're transitioning to an automated build process for our iPhone projects. These projects can be checked out by individual devs, in which case all the API URLs need to point to a certain path. There are also a variety of build environments, each with its own API root path. I could probably add multiple different build targets and have each of them include a different URL-definitions file, but this seems like a lot of upkeep and a bit of overkill. Any best practices out there for swapping a few environment variables for different build environments without much fuss?

    Read the article
