Search Results

Search found 8976 results on 360 pages for 'optimal solutions'.

  • jquery data selector

    - by Tauren
    I need to select elements based on values stored in an element's .data() object. At a minimum, I'd like to select top-level data properties using selectors, perhaps like this: $('a').data("category","music"); $('a:data(category=music)'); Or perhaps the selector would be in regular attribute selector format: $('a[category=music]'); Or in attribute format, but with a specifier to indicate it is in .data(): $('a[:category=music]'); I've found James Padolsey's implementation to look simple, yet good. The selector formats above mirror methods shown on that page. There is also this Sizzle patch. For some reason, I recall reading a while back that jQuery 1.4 would include support for selectors on values in the jquery .data() object. However, now that I'm looking for it, I can't find it. Maybe it was just a feature request that I saw. Is there support for this and I'm just not seeing it? Ideally, I'd like to support sub-properties in data() using dot notation. Like this: $('a').data("user",{name: {first:"Tom",last:"Smith"},username: "tomsmith"}); $('a[:user.name.first=Tom]'); I also would like to support multiple data selectors, where only elements with ALL specified data selectors are found. The regular jquery multiple selector does an OR operation. For instance, $('a.big, a.small') selects a tags with either class big or small. I'm looking for an AND, perhaps like this: $('a').data("artist",{id: 3281, name: "Madonna"}); $('a').data("category","music"); $('a[:category=music && :artist.name=Madonna]'); Lastly, it would be great if comparison operators and regex features were available on data selectors. So $(a[:artist.id>5000]) would be possible. I realize I could probably do much of this using filter(), but it would be nice to have a simple selector format. What solutions are available to do this? Is James Padolsey's the best solution at this time? My concern is primarily with regard to performance, but also with the extra features like sub-property dot-notation and multiple data selectors. Are there other implementations that support these things or are better in some way?
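
    A minimal sketch of the kind of custom pseudo-selector being asked about, written against the Sizzle API of the jQuery 1.3/1.4 era; the :data(key=value) syntax and the dot-notation walk are assumptions for illustration, not James Padolsey's actual implementation:

        // Hypothetical :data pseudo-selector, e.g. $('a:data(category=music)')
        // or $('a:data(user.name.first=Tom)').
        jQuery.expr[':'].data = function (el, i, match) {
            var arg = match[3];                      // text inside the parentheses
            if (!arg) { return false; }
            var parts = arg.split('='),
                path = parts[0].split('.'),
                expected = parts[1],
                value = jQuery(el).data(path.shift());
            while (value != null && path.length) {
                value = value[path.shift()];         // walk sub-properties
            }
            return expected === undefined
                ? value !== undefined                // bare :data(key) just tests presence
                : String(value) === expected;        // plain equality; operators/regex need more parsing
        };

    Chaining two pseudos, e.g. $('a:data(category=music):data(artist.name=Madonna)'), gives the AND behaviour described above, since each pseudo filters the same candidate set.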

  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data sources

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. Requirements are: each person publishes his own flashcards in a file on a web server, e.g. http://:www.mywebserver.com/flashcards.txt other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of flashcard files they want to subscribe to, URLs and imported flashcards being saved in IsolatedStorage the flashcards.txt file has the following simple format: title, then blocks of question/answers: Jim Smith's flashcards from English class 53-222, winter semester 2009 ==fla Das kann nicht sein. That can't be. ==fla Es sei denn, er kommt nicht. Unless he doesn't come. The user then makes public the URL to his flashcard file and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and distribute the URL. The flashcard readers will then recognize it is a google document and perform the necessary screen scraping to get at the raw text. I have two technical questions about this approach: What is a best way to plan now for scalability issues: e.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published google docs? Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the important flashcards themselves. What is a good pattern to allow these readers to share and synchronize this meta data, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it. The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard, the best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks. The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of database, lack of unique ids, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips you can share on the above two points?

  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have Debian server with 32 gb memory. And there is apache2, memcached and nginx on this server. Memory load always on maximum. Only 500m free. Most memory leak do MySql. Apache only 70 clients configured, other services small memory usage. When mysql use all memory it stops. And nothing works, need mysql reboot. Mysql configured use maximum 24 gb memory. I have hight weight InnoDB bases. (400000 rows, 30 gb). And on server multithread daemon, that makes many inserts in this tables, thats why InnoDB. There is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! 
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'
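
    Purely as a starting point, the adjustments both tuners converge on could be sketched in my.cnf like this; every number below is an assumption that has to be checked against the 32 GB of RAM and the 112 GB MyISAM / 39 GB InnoDB split, because the per-thread buffers multiply by max_connections:

        # Hypothetical adjustments only; apply one at a time and watch memory.
        [mysqld]
        innodb_buffer_pool_size = 16G   # was ~6G with the pool 0% free against 39G of InnoDB data
        max_connections         = 300   # the historic maximum (201) already hit the 200 limit
        wait_timeout            = 120   # both tuners suggest dropping idle connections sooner than 600s
        interactive_timeout     = 120
        sort_buffer_size        = 4M    # 64M per thread is what drives the 81M-per-thread figure
        join_buffer_size        = 1M    # and fix the ~106k index-less joins in the slow log instead
        tmp_table_size          = 256M  # 3000M in-memory temp tables are a plausible OOM trigger
        max_heap_table_size     = 256M

    If mysqld really dies once a day, it is also worth confirming how it dies: an oom-killer line in the kernel log (dmesg) versus a stack trace in the MySQL error log point at very different fixes.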

  • Mouse wheel not scrolling in JDialog

    - by Iulian Serbanoiu
    Hello, I'm facing a frustrating issue. I have an application where the scroll wheel doesn't work in a JDialog class. Here's the code: import javax.swing.*; import java.awt.event.*; public class Failtest extends JFrame { public static void main(String[] args) { new Failtest(); } public Failtest() { super(); setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); setTitle("FRAME"); JScrollPane sp1 = new JScrollPane(getNewList()); add(sp1); setSize(150, 150); setVisible(true); JDialog d = new JDialog(this, false);// NOT WORKING //JDialog d = new JDialog((JFrame)null, false); // NOT WORKING //JDialog d = new JDialog((JDialog)null, false);// WORKING - WHY? d.setTitle("DIALOG"); d.setDefaultCloseOperation(JDialog.DISPOSE_ON_CLOSE); JScrollPane sp = new JScrollPane(getNewList()); d.add(sp); d.setSize(150, 150); d.setVisible(true); } public JList getNewList() { String objs[] = new String[30]; for(int i=0; i<objs.length; i++) { objs[i] = "Item "+i; } JList l = new JList(objs); return l; } } I found a solution which is present as a comment in the java code - the constructor receiving a (JDialog)null parameter. Can someone enlighten me? My opinion is that this is a java bug. Tested on Windows XP-SP3 with 1 JDK and 2 JREs: D:\Program Files\Java\jdk1.6.0_17\bin>javac -version javac 1.6.0_17 D:\Program Files\Java\jdk1.6.0_17\bin>java -version java version "1.6.0_17" Java(TM) SE Runtime Environment (build 1.6.0_17-b04) Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing) D:\Program Files\Java\jdk1.6.0_17\bin>cd .. D:\Program Files\Java\jdk1.6.0_17>java -version java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing) Thank you in advance, Iulian Serbanoiu PS: The problem is not new - the code is taken from a forum (here) where this problem was also mentioned - but no solutions to it (yet)
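
    A quick way to narrow down whether this is an event-delivery problem or a JScrollPane problem is to log wheel events globally while reproducing it; a small diagnostic sketch (needs an extra import java.awt.*; on top of the example above):

        // Print every mouse-wheel event the toolkit dispatches, and which
        // component it is targeted at, for both the frame and the dialog.
        Toolkit.getDefaultToolkit().addAWTEventListener(new AWTEventListener() {
            public void eventDispatched(AWTEvent event) {
                MouseWheelEvent e = (MouseWheelEvent) event;
                System.out.println("wheel " + e.getWheelRotation() + " -> "
                        + e.getComponent().getClass().getSimpleName());
            }
        }, AWTEvent.MOUSE_WHEEL_EVENT_MASK);

    If events show up for the frame's list but never for the dialog's, the problem is in dispatch to the owned dialog rather than in the scroll pane, which would support the bug theory.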

  • Would Ruby on Rails be appropriate for this Foreign Language project?

    - by Lynne Overesch-Maister
    I'm a Spanish professor & computer groupie. About 15 years ago, I authored in HyperCard a series of verb conjugation programs that are now completely out of date with respect to System OS X. I would like to redo these programs myself because I had a lot of fun doing them last time (mostly I coded while my son played in Leaps and Bounds, you know, one of those places where parents take their kids & let them run wild through the tubes...). Colleagues have mentioned using Flash, Director, and various other solutions, but I saw a presentation on RoR at our SIDLIT conference today, and was inspired. I will be parsing and comparing strings (and there are other features on top of that, but that is the main one), "adding" strings relationally indexed in some kind of database(s). It will also have to handle various foreign characters (accents, upside down question marks, etc.). On top of the main process of the program, it will have to provide a practice vs. test mode, keep track of specific answers as well as totals right/wrong, and print a report. Would this be either easier and/or more efficiently done in RoR than in other languages. I am pretty sure that it will work on a Microsoft server, right? Because I think that is where most of our stuff is. I would be programming either on a Mac or a PC, whichever you think is easier. So, in summary, is RoR the way for me to go with this project? If I have some (little) experience programming in Hypercard and C, should I be able to pick RoR up fairly quickly? What things will I need to start (I already saw something called Redhills foreign key migration plugin, which I assume would be beneficial)? I still have my old scripts from hypercard, however what I would really like to do is to combine all six of my former tense-specific programs into one larger program. I figure that it wouldn't be too hard to reference the individual tenses in some way--could that be a class? Many thanks for any help you can give me on this forum.

  • UITableViewCell with selectable/copyable text that also detects URLs on the iPhone

    - by Jasarien
    Hi guys, I have a problem. Part of my app requires text to be shown in a table. The text needs to be selectable/copyable (but not editable) and any URLs within the text need to be highlighted and and when tapped allow me to take that URL and open my embedded browser. I have seen a couple of solutions that solve one of either of these problems, but not both. Solution 1: Icon Factory's IFTweetLabel The first solution I tried was to use the IFTweetLabel class made possible by Icon Factory and used in Twitterrific. While this solution allows for links (or anything you can find with a regex) to be detected to be handled on a case by case basis, it doesn't allow for selecting and copying. There is also an issue where if a URL is long enough to be wrapped, the button that the class overlays above the URL to make it interactive cannot wrap and draws off screen, looking very odd. Solution 2: Use IFTweetLabel and handle copy manually The second thing I tried was to keep IFTweetLabel in place to handle the links, but to implement the copying using a long-tap gesture, like how the SMS app handles it. This was just about working, but it doesn't allow for arbitrary selection of text, the whole text is copied, or none is copied at all... Pretty black and white. Solution 3: UITextView My third attempt was to add a UITextView as a subview of the table cell. The only thing that this doesn't solve is the fact that detected URLs cannot be handled by me. The text view uses UIApplication's openURL: method which quits my app and launched Safari. Also, as the table view can get quite large, the number of UITextViews added as subviews cause a noticeable performance drag on scrolling throughout the table, especially on iPhone 3G era devices (because of the creation, layout, compositing whenever a cell is scrolled on screen, etc). So my question to all you knowledgeable folk out there is: What can I do? Would a UIWebView be the best option? Aside from a performance drag, I think a webview would solve all the above issues, and if I remember correctly, back in the 2.0 days, the Apple documentation actually recommended web views where text formatting / hyperlinks were required. Can anyone think of a way to achieve this without a performance drag? Many thanks in advance to everyone who can help.
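
    For what it's worth, if the UIWebView route is taken, keeping link taps inside the app is a delegate-level decision rather than something that has to go through UIApplication; a sketch, where openEmbeddedBrowserWithURL: stands in for whatever hook the embedded browser exposes:

        // Web view delegate on the cell/controller: intercept link taps, allow the
        // initial content load through, and never bounce out to Safari.
        - (BOOL)webView:(UIWebView *)webView
                shouldStartLoadWithRequest:(NSURLRequest *)request
                navigationType:(UIWebViewNavigationType)navigationType
        {
            if (navigationType == UIWebViewNavigationTypeLinkClicked) {
                [self openEmbeddedBrowserWithURL:[request URL]];   // hypothetical helper
                return NO;
            }
            return YES;
        }

    Selection and copy then come from the web view itself on OS 3.0 and later, so the remaining question is mostly the scrolling cost of one web view per cell.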

  • C# Begin/EndReceive - how do I read large data?

    - by ryeguy
    When reading data in chunks of say, 1024, how do I continue to read from a socket that receives a message bigger than 1024 bytes until there is no data left? Should I just use BeginReceive to read a packet's length prefix only, and then once that is retrieved, use Receive() (in the async thread) to read the rest of the packet? Or is there another way? edit: I thought Jon Skeet's link had the solution, but there is a bit of a speedbump with that code. The code I used is: public class StateObject { public Socket workSocket = null; public const int BUFFER_SIZE = 1024; public byte[] buffer = new byte[BUFFER_SIZE]; public StringBuilder sb = new StringBuilder(); } public static void Read_Callback(IAsyncResult ar) { StateObject so = (StateObject) ar.AsyncState; Socket s = so.workSocket; int read = s.EndReceive(ar); if (read > 0) { so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read)); if (read == StateObject.BUFFER_SIZE) { s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0, new AsyncCallback(Async_Send_Receive.Read_Callback), so); return; } } if (so.sb.Length > 0) { //All of the data has been read, so displays it to the console string strContent; strContent = so.sb.ToString(); Console.WriteLine(String.Format("Read {0} byte from socket" + "data = {1} ", strContent.Length, strContent)); } s.Close(); } Now this corrected code works fine most of the time, but it fails when the packet's size is a multiple of the buffer. The reason for this is that if the buffer gets filled on a read, it is assumed there is more data; but then the same problem happens as before. A 2 byte buffer, for example, gets filled twice on a 4 byte packet, and assumes there is more data. It then blocks because there is nothing left to read. The problem is that the receive function doesn't know where the end of the packet is. This got me thinking of two possible solutions: I could either have an end-of-packet delimiter or I could read the packet header to find the length and then receive exactly that amount (as I originally suggested). There are problems with each of these, though. I don't like the idea of using a delimiter, as a user could somehow work that into a packet in an input string from the app and screw it up. It also just seems kinda sloppy to me. The length header sounds ok, but I'm planning on using protocol buffers - I don't know the format of the data. Is there a length header? How many bytes is it? Would this be something I implement myself? Etc. What should I do?
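
    On the protocol-buffers point: serialized messages do not carry their own length on the wire, so the usual answer is to prepend a fixed-size length header yourself and read exactly that many bytes. A synchronous sketch of that framing follows; the same bytes-still-owed bookkeeping drops straight into the BeginReceive state object:

        // Sketch of length-prefix framing: the sender writes a 4-byte length,
        // then the payload; the reader loops until it has exactly that many bytes.
        using System;
        using System.IO;
        using System.Net.Sockets;

        static class Framing
        {
            public static byte[] ReadMessage(Socket s)
            {
                int length = BitConverter.ToInt32(ReadExactly(s, 4), 0);  // sender must use the same endianness
                return ReadExactly(s, length);
            }

            static byte[] ReadExactly(Socket s, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = s.Receive(buffer, offset, count - offset, SocketFlags.None);
                    if (read == 0) throw new EndOfStreamException("socket closed mid-message");
                    offset += read;
                }
                return buffer;
            }
        }

    With this approach the header is something you add yourself, so "how many bytes is it" is your call (four is common), and the delimiter concern disappears because the payload bytes are never scanned for a terminator.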

  • Which StackOverflow-style MarkDown (WMD) javascript editor should I use?

    - by Edan Maor
    Background I'm working on an application which requires user-entered content, and I've decided to use a StackOverflow-style MarkDown editor. After researching this topic for the last few days, I realize there are numerous forks of the base WMD editor, some with a few basic enhancements and some with serious differences from the StackOverflow one. Since this will be the heart of the application, I'd like to start with the best code base I can. I'd be happy if anyone can recommend which one of the many solutions out there best fits my needs. Below is requirements, plus what I've managed to find already. I'm hoping this question will help me decide which version to go with, and maybe help me discover a port out there that's an even better fit for my needs. The requirements for my project Live Preview Multiple editors on the same page (not know how many in advance, since the user can dynamically add another editing box). Ability to extend with extra buttons (I'd like a button to upload a picture, instead of just adding an img url). Ability to dynamically show/hide the edit box (and only see the preview box). Not an absolute must, but I'd prefer to stick as close to StackOverflow's look and feel, since it's well known. Don't know if this matters, but the backend is written in Django. Editors I've looked at Here are a few of the code bases I've looked at, with thoughts. Obviously, I might be missing another solution out there. The derobins version. From what I can tell, this is the official StackOverflow version. Seems like it doesn't support multiple editors on one page. JQuery.MarkEdit. Looks very good, but is pretty different from the StackOverflow version. MooWMD. Looks like the winner right now, but I'm a little concerned since it looks less active/hackable than MarkEdit. The wmd-new version. Not sure, looks like an old codebase without much use. The SocialSite branch. Seems like it's not for public use.

  • NSStringWithFormat Swizzled to allow missing format numbered args

    - by coneybeare
    Based on this SO question asked a few hours ago, I have decided to implement a swizzled method that will allow me to take a formatted NSString as the format arg into stringWithFormat, and have it not break when omitting one of the numbered arg references (%1$@, %2$@) I have it working, but this is the first copy, and seeing as this method is going to be potentially called hundreds of thousands of times per app run, I need to bounce this off of some experts to see if this method has any red flags, major performance hits, or optimizations #define NUMARGS(...) (sizeof((int[]){__VA_ARGS__})/sizeof(int)) @implementation NSString (UAFormatOmissions) + (id)uaStringWithFormat:(NSString *)format, ... { if (format != nil) { va_list args; va_start(args, format); // $@ is an ordered variable (%1$@, %2$@...) if ([format rangeOfString:@"$@"].location == NSNotFound) { //call apples method NSString *s = [[[NSString alloc] initWithFormat:format arguments:args] autorelease]; va_end(args); return s; } NSMutableArray *newArgs = (NSMutableArray *)[NSMutableArray arrayWithCapacity:NUMARGS(args)]; id arg = nil; int i = 1; while (arg = va_arg(args, id)) { NSString *f = (NSString *)[NSString stringWithFormat:@"%%%d\$\@", i]; i++; if ([format rangeOfString:f].location == NSNotFound) continue; else [newArgs addObject:arg]; } va_end(args); char *newArgList = (char *)malloc(sizeof(id) * [newArgs count]); [newArgs getObjects:(id *)newArgList]; NSString* result = [[[NSString alloc] initWithFormat:format arguments:newArgList] autorelease]; free(newArgList); return result; } return nil; } The basic algorithm is: search the format string for the %1$@, %2$@ variables by searching for %@ if not found, call the normal stringWithFormat and return else, loop over the args if the format has a position variable (%i$@) for position i, add the arg to the new arg array else, don't add the arg take the new arg array, convert it back into a va_list, and call initWithFormat:arguments: to get the correct string. The idea is that I would run all [NSString stringWithFormat:] calls through this method instead. This might seem unnecessary to many, but click on to the referenced SO question (first line) to see examples of why I need to do this. Ideas? Thoughts? Better implementations? Better Solutions?

  • Convert a PDF to a Transparent PNG with GhostScript

    - by Jonathon Wolfe
    Hi all. I am attempting, unsuccessfully, to use Ghostscript to rasterize PDF files with a transparent background to PNG files with a transparent background. I've searched high and low for questions from others attempting the same thing and none of the posted solutions, which as far as I can tell come down to specifying -sDEVICE=pngalpha, have worked with my test files. At this point I would really appreciate any advice or tips a more experienced hand could provide. My test PDF is located here: http://www.kolossus.com/files/test.pdf It could be that the issue is with this file, but I doubt it. As far as I can tell, it has no specified background, and when I open the file with a transparency-aware app like Photoshop or Illustrator, sure enough it displays with a transparent background. However, when opened with an application like Adobe Reader the file is rendered with a white background. I believe that this has more to do with the application rendering the PDF than with the PDF itself -- apps like Adobe Reader assume you want to see what a printed document will look like and therefore always show a white canvas behind the artwork -- but I can't be sure. The gs command I'm using is: gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r72 -sOutputFile=test.png test.pdf This produces a PNG that has transparent pixels outside of the bounding box of the artwork in the file, but all pixels that are inside the artwork's bounding box are rasterized against a white background. This is a problem for me, as my artwork has drop shadows and antialiased edges that need to be preserved in the final output, and can't just be postprocessed out with ImageMagick. A sample of my PNG output is at the same location as the pdf above, with .png at the end (stackoverflow won't let me include more than one url in my post). Interestingly, I see no effects from using the -dBackgroundColor flag, even if I set it to something non-white like -dBackgroundColor=16#ff0000. Perhaps my understanding of the syntax of this flag is wrong. Also interestingly, I see no effects from using the -dTextAlphaBits=4 -dGraphicsAlphaBits=4 flags to try to enable subpixel antialiasing. I would also appreciate any advice on how to enable subpixel antialiasing, especially on text. Finally, I'm using GPL Ghostscript 8.64 on Mac OS 10.5.7, and the rendering workflow I'm trying to get set up is to generate transparent PNG images from PDFs output by PrinceXML. I'm calling Ghostscript directly for the rasterization instead of using ImageMagick because ImageMagick delegates to Ghostscript for PDF rasterization and I should be able to control the rasterization better by calling GS directly. Thanks for your help. -Jon Wolfe

  • Render a view as a string

    - by Dan Atkinson
    Hi all! I'm wanting to output two different views (one as a string that will be sent as an email), and the other the page displayed to a user. Is this possible in ASP.NET MVC beta? I've tried multiple examples: RenderPartial to String in ASP.NET MVC Beta If I use this example, I receive the "Cannot redirect after HTTP headers have been sent.". MVC Framework: Capturing the output of a view If I use this, I seem to be unable to do a redirectToAction, as it tries to render a view that may not exist. If I do return the view, it is completely messed up and doesn't look right at all. Does anyone have any ideas/solutions to these issues i have, or have any suggestions for better ones? Many thanks! Below is an example. What I'm trying to do is create the GetViewForEmail method: public ActionResult OrderResult(string ref) { //Get the order Order order = OrderService.GetOrder(ref); //The email helper would do the meat and veg by getting the view as a string //Pass the control name (OrderResultEmail) and the model (order) string emailView = GetViewForEmail("OrderResultEmail", order); //Email the order out EmailHelper(order, emailView); return View("OrderResult", order); } Accepted answer from Tim Scott (changed and formatted a little by me): public virtual string RenderViewToString( ControllerContext controllerContext, string viewPath, string masterPath, ViewDataDictionary viewData, TempDataDictionary tempData) { Stream filter = null; ViewPage viewPage = new ViewPage(); //Right, create our view viewPage.ViewContext = new ViewContext(controllerContext, new WebFormView(viewPath, masterPath), viewData, tempData); //Get the response context, flush it and get the response filter. var response = viewPage.ViewContext.HttpContext.Response; response.Flush(); var oldFilter = response.Filter; try { //Put a new filter into the response filter = new MemoryStream(); response.Filter = filter; //Now render the view into the memorystream and flush the response viewPage.ViewContext.View.Render(viewPage.ViewContext, viewPage.ViewContext.HttpContext.Response.Output); response.Flush(); //Now read the rendered view. filter.Position = 0; var reader = new StreamReader(filter, response.ContentEncoding); return reader.ReadToEnd(); } finally { //Clean up. if (filter != null) { filter.Dispose(); } //Now replace the response filter response.Filter = oldFilter; } } Example usage Assuming a call from the controller to get the order confirmation email, passing the Site.Master location. string myString = RenderViewToString(this.ControllerContext, "~/Views/Order/OrderResultEmail.aspx", "~/Views/Shared/Site.Master", this.ViewData, this.TempData);

  • Which third party website thumbnailing services do you use?

    - by Ben Delarre
    I've got a requirement for showing thumbnails of arbitrary websites. I need to be able to show small thumbnails (120px by 90px), and larger thumbnails of around 480px wide. I'll need to specify the queue and invalid placeholder images and preferably have a pingback when the queued images are processed so I can respond appropriately. I'd also need a simple API I can use either directly embedded in my HTML, or from a simple web request to queue the images. I've been looking at various services ranging from low-fi services, to large scale ones - here's some examples: www.bitpixels.com Uses Google AppEngine, seems like a prototype or a toy. Free! www.websnapr.com Tried using this, made a free account and requested a thumbnail. Waited a few minutes and refreshed a couple of times, and ended up having the account banned. Free is tricky yes, but if I can't try it out successfully I'm disinclined to pay. www.shrinktheweb.com Free account seems to be very quick. Lots of documentation on the site, and even covers local caching of the images to your own server (documentation mostly in PHP). Quality of thumbnails look good, and there appear to be sufficient options for setting thumbnail placeholder images and parameters for altering how the thumbnailing is done. Also supports large 'screenshots' of URLs - very useful for me. Discovered the PRO pricing is an à la carte menu, allowing me to select just the features I want and keep the monthly cost low. Excellent stuff, have decided to use this service. www.thumbalizr.com Good coverage of thumbnail sizes and control options - even allowing specification for browser width when thumbnailing. No ping-back, but I can live without that. Supports local caching of images with PHP API, would prefer .NET, but can port it if necessary. Looks like a fairly professional service but seems fairly expensive for the number of thumbnails you get to generate. apologies for lack of proper linking - spam protection! I'm not entirely convinced by any of them, and since this will be a long term service I'd like some stability and support. I'm willing to pay for the service, but I'd want something that fulfills most if not all of my requirements for that. I should also mention that we're hosted on Windows under IIS, so local solutions involving Xvfb and the like sadly can't be used for this project. So my question is: what services do you use? How have they panned out, are you happy with them?

  • python Socket.IO client for sending broadcast messages to TornadIO2 server

    - by Alp
    I am building a realtime web application. I want to be able to send broadcast messages from the server-side implementation of my python application. Here is the setup: socketio.js on the client-side TornadIO2 server as Socket.IO server python on the server-side (Django framework) I can succesfully send socket.io messages from the client to the server. The server handles these and can send a response. In the following i will describe how i did that. Current Setup and Code First, we need to define a Connection which handles socket.io events: class BaseConnection(tornadio2.SocketConnection): def on_message(self, message): pass # will be run if client uses socket.emit('connect', username) @event def connect(self, username): # send answer to client which will be handled by socket.on('log', function) self.emit('log', 'hello ' + username) Starting the server is done by a Django management custom method: class Command(BaseCommand): args = '' help = 'Starts the TornadIO2 server for handling socket.io connections' def handle(self, *args, **kwargs): autoreload.main(self.run, args, kwargs) def run(self, *args, **kwargs): port = settings.SOCKETIO_PORT router = tornadio2.TornadioRouter(BaseConnection) application = tornado.web.Application( router.urls, socket_io_port = port ) print 'Starting socket.io server on port %s' % port server = SocketServer(application) Very well, the server runs now. Let's add the client code: <script type="text/javascript"> var sio = io.connect('localhost:9000'); sio.on('connect', function(data) { console.log('connected'); sio.emit('connect', '{{ user.username }}'); }); sio.on('log', function(data) { console.log("log: " + data); }); </script> Obviously, {{ user.username }} will be replaced by the username of the currently logged in user, in this example the username is "alp". Now, every time the page gets refreshed, the console output is: connected log: hello alp Therefore, invoking messages and sending responses works. But now comes the tricky part. Problems The response "hello alp" is sent only to the invoker of the socket.io message. I want to broadcast a message to all connected clients, so that they can be informed in realtime if a new user joins the party (for example in a chat application). So, here are my questions: How can i send a broadcast message to all connected clients? How can i send a broadcast message to multiple connected clients that are subscribed on a specific channel? How can i send a broadcast message anywhere in my python code (outside of the BaseConnection class)? Would this require some sort of Socket.IO client for python or is this builtin with TornadIO2? All these broadcasts should be done in a reliable way, so i guess websockets are the best choice. But i am open to all good solutions.
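
    One common shape for this is sketched below: keep the live connections in a class-level set on the connection class and loop over it to broadcast. The participants set and the channel filtering are things added on top of SocketConnection here, not built-in TornadIO2 features:

        class BaseConnection(tornadio2.SocketConnection):
            participants = set()              # every live connection, shared across instances

            def on_open(self, info):
                self.channels = set()
                self.participants.add(self)

            def on_close(self):
                self.participants.discard(self)

            @tornadio2.event
            def subscribe(self, channel):
                self.channels.add(channel)

            @classmethod
            def broadcast(cls, event, message, channel=None):
                # Emit to everyone, or only to connections subscribed to `channel`.
                for conn in cls.participants:
                    if channel is None or channel in conn.channels:
                        conn.emit(event, message)

    Because the set lives on the class, any code running inside the same Tornado process (question 3) can simply call BaseConnection.broadcast('log', 'hello everyone'); code in a separate Django process would first have to hand the message to that process (a queue, Redis pub/sub, or a small internal HTTP hook), since TornadIO2 is only the server half and does not ship a Python Socket.IO client.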

  • Would Like Multiple Checkboxes to Update World PNG Image Using Mogrify to FloodFill Countries With Color

    - by socrtwo
    Hi. I'm seeing in another forum if the best way to do this is with Javascript or Ajax but I'm wondering if there is an even easier simpler way. I'm trying to create a web service where users can check which countries they have visited from a list of 175 or so and a World map image would then instantly update with a filled color. There are other similar services, but I'm envisioning mine to be both updating from checks in checkboxes and by clicking on the target country in the displayed image say with an imagemap. Additionally other solutions display all the visited countries in the same color. I would like different colors for different countries or at least for those countries that touch. Eventually I would like to include a feature that enables the choice of which colors to assign countries. I found a Sourceforge project called pwmfccd. It's simply an open source image of the world and the coordinates on the PNG image for all the countries. You can use mogrify from ImageMagick and floodfill to fill the countries with color. I have done this successfully, locally with batch files. My ISP has told me where mogrify is located, basically "/usr/bin/mogrify". I now have a horrendously complicated cgi script which if it worked is set to redraw the world map image with each checkbox. It's here. It also redraws the whole web page with each check. The web page starts here. Of course this is not at all efficient, and I think probably the real way to go is Ajax or Javascript, so that maybe just the image gets changed and redrawn, not the whole web page. Sorry I don't even know the difference between Javascript and Ajax and their relative merits at this point. I suppose you could make just one part of the image update with each check or click on the image instead of even just the image redrawing, but I have never even heard of a hint at being able to do that for irregularly shaped image elements like countries. So I guess an Image map and sister checkbox entries tied to mogrify events redrawing the user's personal copy of the image with an image refresh would be the only way to go. So how do you do this with something other than Javascript or Ajax or is that definitely the way to go and if so, how would you do it? Or can you after all cut up a web based image into irregular puzzle shaped piece which you can redraw individually at will. Thanks in advance for reading and considering answering this post.
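
    For reference, the per-country fill the batch files boil down to is a single ImageMagick call per click or checkbox; the coordinates and colour below are placeholders, with the real ones coming from the pwmfccd country list:

        # Flood-fill one country in this user's private copy of the map.
        /usr/bin/mogrify -fill "#4a90d9" -draw "color 512,300 floodfill" usermap_12345.png

    Given that, the lighter-weight page flow is: the checkbox (or imagemap click) sends just the country id to the CGI script via XMLHttpRequest, the script runs the one mogrify command against that user's copy, and the page swaps the img src with a cache-busting query string, so there is no full page reload and no server-side redraw of anything else.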

  • IoC/DI in the face of winforms and other generated code

    - by Kaleb Pederson
    When using dependency injection (DI) and inversion of control (IoC) objects will typically have a constructor that accepts the set of dependencies required for the object to function properly. For example, if I have a form that requires a service to populate a combo box you might see something like this: // my files public interface IDataService { IList<MyData> GetData(); } public interface IComboDataService { IList<MyComboData> GetComboData(); } public partial class PopulatedForm : BaseForm { private IDataService service; public PopulatedForm(IDataService service) { //... InitializeComponent(); } } This works fine at the top level, I just use my IoC container to resolve the dependencies: var form = ioc.Resolve<PopulatedForm>(); But in the face of generated code, this gets harder. In winforms a second file composing the rest of the partial class is generated. This file references other components, such as custom controls, and uses no-args constructors to create such controls: // generated file: PopulatedForm.Designer.cs public partial class PopulatedForm { private void InitializeComponent() { this.customComboBox = new UserCreatedComboBox(); // customComboBox has an IComboDataService dependency } } Since this is generated code, I can't pass in the dependencies and there's no easy way to have my IoC container automatically inject all the dependencies. One solution is to pass in the dependencies of each child component to PopulatedForm even though it may not need them directly, such as with the IComboDataService required by the UserCreatedComboBox. I then have the responsibility to make sure that the dependencies are provided through various properties or setter methods. Then, my PopulatedForm constructor might look as follows: public PopulatedForm(IDataService service, IComboDataService comboDataService) { this.service = service; InitializeComponent(); this.customComboBox.ComboDataService = comboDataService; } Another possible solution is to have the no-args constructor to do the necessary resolution: public class UserCreatedComboBox { private IComboDataService comboDataService; public UserCreatedComboBox() { if (!DesignMode && IoC.Instance != null) { comboDataService = Ioc.Instance.Resolve<IComboDataService>(); } } } Neither solution is particularly good. What patterns and alternatives are available to more capably handle dependency-injection in the face of generated code? I'd love to see both general solutions, such as patterns, and ones specific to C#, Winforms, and Autofac.
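
    A third option sits between the two above: keep the designer's no-args constructors, but right after InitializeComponent walk the control tree and let the container satisfy any public, settable dependencies. A sketch against Autofac (the recursive walk is hand-rolled; only InjectUnsetProperties is Autofac API, and treating it as suitable here is an assumption):

        using System.Windows.Forms;
        using Autofac;

        public partial class PopulatedForm : BaseForm
        {
            private readonly IDataService service;

            public PopulatedForm(IDataService service, ILifetimeScope scope)
            {
                this.service = service;
                InitializeComponent();
                InjectControls(scope, this);   // fill in designer-created children
            }

            // Give every designer-built child (e.g. UserCreatedComboBox) its
            // unset public dependencies, such as ComboDataService.
            private static void InjectControls(ILifetimeScope scope, Control root)
            {
                foreach (Control child in root.Controls)
                {
                    scope.InjectUnsetProperties(child);
                    InjectControls(scope, child);
                }
            }
        }

    The trade-off is that child-control dependencies become settable properties rather than constructor arguments, which is usually acceptable for UI components that must keep a parameterless constructor for the designer anyway.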

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in. These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems to not work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons: Leaving a big fat open proxy like that just stinks A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied Requirements: Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS) I have or can write a simple port forwarder client that runs under Windows (or Cygwin) Edit DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's better solutions available.

  • Upload large files via a webpage

    - by Hultner
    What is the best way to let users upload large files from their web browser to a server? I'm talking 200MB+, possibly up to a few gigabytes. I have been thinking of a few possible solutions to the problem (not tried them yet), and this is basically what I came up with. Server download speed will not be a problem, but the user's connection possibly could be. Option 1: Have some sort of applet on the client side, written in Java or Flash, which sends the file in parts (is this possible with an applet?) to a PHP/other script on the server, together with a checksum and some other info about the file. On the server, all the parts and the info file are saved in a temporary directory which has a unique name based on the checksum of the file and the IP of the user. When the last chunk is sent, the applet sends a signal to the server saying it's finished, and the server puts the file together in the right location. If a chunk doesn't match the checksum for that part, the server sends a response to the applet telling it to re-upload that chunk. I don't know how important the checksum checking is since it's all TCP packets; someone with more insight might be able to answer that. Option 2 (probably the worst way): change the settings on your server to allow huge file uploads via an input field, and do it like a normal transfer. Option 3: Use an upload manager which does pretty much the same thing as the applet I mentioned above. Pros of the first: it would most likely be rather secure, you could show progress, and you could possibly resume an upload if the IP hasn't changed and do a threaded upload of the chunks. Cons of the first: the user will need Flash/Java for it to work. Pros of the second: it will pretty much work for everyone. The cons are big, though: there's no way of resuming an interrupted upload, and if something goes wrong the whole file has to be re-uploaded. For the third, the pros are pretty much the same as for the first, but the cons are that the user would have to download an application to their computer and run it, and the application has to be compatible with their computer and OS. Another way may be a combination of two: say, an applet for bigger or more files, and a simple input field restricted to maybe 10-20MB max for smaller files and compatibility. There are probably other much smarter ways to tackle this, and that's why I'm asking for advice here on SO.

  • How to commit into TortoiseSVN using cruise control config file

    - by pratap
    hi all, can any one tell how to commit into tortoisesvn using cruise control config file. I am getting an error "C:***\Documentation\trunk\dotnet\svn" is not executable or it may not exist. here's the config part... <workingDirectory>C:\*****\Documentation\trunk\dotnet\</workingDirectory> <category>Individual Solutions</category> <modificationDelaySeconds>10</modificationDelaySeconds> <sourcecontrol type="svn"> <trunkUrl>******* svn url *********</trunkUrl> <username> unname </username> <password> pwd </password> <autoGetSource>true</autoGetSource> </sourcecontrol> <tasks> <exec> <executable>C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\MSBuild.exe</executable> <buildTimeoutSeconds>1200</buildTimeoutSeconds> <successExitCodes>0</successExitCodes> </exec> <exec> <executable>iisreset</executable> <buildArgs>/stop</buildArgs> </exec> <exec> <executable>c:\Program Files\TortoiseSVN\bin\TortoiseProc.exe /command:commit /path:"C:\*****\Documentation\trunk\dotnet\"</executable> <buildTimeoutSeconds>1200</buildTimeoutSeconds> <successExitCodes>0</successExitCodes> <description>checkin shared content...</description> </exec> <exec> <executable>iisreset</executable> <buildArgs>/start</buildArgs> </exec> </tasks> </project> Thank you all,
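
    Two things stand out in the config. The "...\dotnet\svn is not executable" message comes from the <sourcecontrol type="svn"> block: with no <executable> element, CCNet resolves plain "svn" against the working directory. And in the TortoiseProc <exec> block the arguments were put inside <executable>, so the whole string is treated as a file name. A sketch of both fixes (paths and the log message are placeholders):

        <sourcecontrol type="svn">
          <executable>C:\Program Files\Subversion\bin\svn.exe</executable>
          <trunkUrl>******* svn url *********</trunkUrl>
          <!-- username, password, autoGetSource unchanged -->
        </sourcecontrol>

        <exec>
          <executable>C:\Program Files\TortoiseSVN\bin\TortoiseProc.exe</executable>
          <buildArgs>/command:commit /path:"C:\*****\Documentation\trunk\dotnet\" /logmsg:"CCNet automated commit" /closeonend:1</buildArgs>
          <buildTimeoutSeconds>1200</buildTimeoutSeconds>
        </exec>

    For a fully unattended commit, plain svn.exe commit -m "..." is usually the better tool than TortoiseProc, which is built around its interactive dialogs.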

  • How to modularize a JSF/Facelets/Spring application with OSGi?

    - by lexicore
    I'm working with very large JSF/Facelets applications which use Spring for DI/bean management. My applications have modular structure and I'm currently looking for approaches to standardize the modularization. My goal is to compose a web application from a number of modules (possibly depending on each other). Each module may contain the following: Classes; Static resources (images, CSS, scripts); Facelet templates; Managed beans - Spring application contexts, with request, session and application-scoped beans (alternative is JSF managed beans); Servlet API stuff - servlets, filters, listeners (this is optional). What I'd like to avoid (almost at all costs) is the need to copy or extract module resources (like Facelets templates) to the WAR or to extend the web.xml for module's servlets, filters, etc. It must be enough to add the module (JAR, bundle, artifact, ...) to the web application (WEB-INF/lib, bundles, plugins, ...) to extend the web application with this module. Currently I solve this task with a custom modularization solution which is heavily based on using classpath resources: Special resources servlet serves static resources from classpath resources (JARs). Special Facelets resource resolver allows loading Facelet templates from classpath resources. Spring loads application contexts via the pattern classpath*:com/acme/foo/module/applicationContext.xml - this loads application contexts defined in module JARs. Finally, a pair of delegating servlets and filters delegate request processing to the servlets and filters configured in Spring application contexts from modules. Last days I read a lot about OSGi and I was considering, how (and if) I could use OSGi as a standardized modularization approach. I was thinking about how individual tasks could be solved with OSGi: Static resources - OSGi bundles which want to export static resources register a ResourceLoader instances with the bundle context. A central ResourceServlet uses these resource loaders to load resources from bundles. Facelet templates - similar to above, a central ResourceResolver uses services registered by bundles. Managed beans - I have no idea how to use an expression like #{myBean.property} if myBean is defined in one of the bundles. Servlet API stuff - use something like WebExtender/Pax Web to register servlets, filters and so on. My questions are: Am I inventing a bicycle here? Are there standard solutions for that? I've found a mentioning of Spring Slices but could not find much documentation about it. Do you think OSGi is the right technology for the described task? Is my sketch of OSGI application more or less correct? How should managed beans (especially request/session scope) be handled? I'd be generally graefult for your comments.
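
    The static-resources and Facelets points both reduce to plain service registration: each module bundle publishes a loader, and the central ResourceServlet/ResourceResolver in the WAR tracks them. A rough sketch, where ResourceLoader is a hypothetical interface owned by the host application, not an OSGi or Spring type:

        // Inside a module bundle: expose this bundle's templates/resources to the host WAR.
        import java.net.URL;
        import org.osgi.framework.*;

        public class Activator implements BundleActivator {

            private ServiceRegistration registration;

            public void start(final BundleContext context) {
                ResourceLoader loader = new ResourceLoader() {      // hypothetical host-owned SPI
                    public URL getResource(String path) {
                        return context.getBundle().getResource("META-INF/web/" + path);
                    }
                };
                registration = context.registerService(ResourceLoader.class.getName(), loader, null);
            }

            public void stop(BundleContext context) {
                registration.unregister();
            }
        }

    A ServiceTracker on the WAR side then gives the servlet/resolver a live list of loaders to try in turn, and bundles can come and go without touching web.xml. The harder part remains the EL question (#{myBean.property}), which needs some bridge such as a custom ELResolver that looks beans up across the module application contexts.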

    Read the article

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will utilize a REST web service, and I have something of a dilemma as far as using asynchronous vs synchronous vs threading. Here's the scenario. Say you have three options to drill down into, each one having its own REST-based resource. I could lazily load each one with a synchronous request, but that'll block the UI and prevent the user from hitting a back navigation button while data is retrieved. This case applies almost anywhere except for when your application requires a login screen. I can't see any reason to use synchronous HTTP requests vs asynchronous because of that reason alone. The only time it makes sense is to have a worker thread make your synchronous request and notify the main thread when the request is done. This will prevent the block. The question then is benchmarking your code and seeing which has more overhead, a threaded synchronous request or an asynchronous request. The problem with asynchronous requests is that you need to set up either a smart notification or delegate system, as you can have multiple requests for multiple resources happening at any given time. The other problem with them is that if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't work:

    - (NSArray *)users {
        if (users == nil)
            users = do_async_request; // NO GOOD
        return users;
    }

    whereas the following will:

    - (NSArray *)users {
        if (users == nil)
            users = do_sync_request; // OK
        return users;
    }

    You also might have priority. What I mean by priority is that if you look at Apple's Mail application on the iPhone, you'll notice it first sucks down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your message. I suppose my question to you experts is this: when are you using asynchronous, synchronous, threads -- and when are you using async/sync in a thread? What kind of delegation system do you have set up to know what to do when an async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all too common problem. It's simple to hack something out. The problem is, I don't want to hack, and I want to have something that's simple and easy to maintain.
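
    The "worker thread makes a blocking request and notifies the caller when done" pattern is platform-neutral; here is a minimal sketch of it in Java rather than Objective-C, with an ExecutorService standing in for the worker thread and a Callback interface standing in for the delegate (all names are illustrative, not from any iPhone API):

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class UserRepository {
            // Plays the role of the delegate that is notified when the request completes.
            public interface Callback { void onUsersLoaded(List<String> users); }

            private final ExecutorService worker = Executors.newSingleThreadExecutor();
            private volatile List<String> cached; // result of the last successful request

            public void loadUsers(Callback callback) {
                if (cached != null) {                 // already loaded: answer immediately
                    callback.onUsersLoaded(cached);
                    return;
                }
                worker.submit(() -> {
                    List<String> users = fetchUsersBlocking(); // blocking call, off the UI thread
                    cached = users;
                    callback.onUsersLoaded(users);    // a real app would marshal this back to the UI thread
                });
            }

            // Placeholder for the blocking REST call.
            private List<String> fetchUsersBlocking() {
                return List.of("alice", "bob");
            }
        }

    The getter-shaped API from the post becomes a method that takes a callback instead of returning a value, which is exactly the trade-off between the synchronous and asynchronous versions above.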

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws: I manually annotated the paws (RF=right front, RH=right hind, LF=left front, LH=left hind). As you can see, there's clearly a repeating pattern and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated. My initial thought was to use heuristics to do the sorting, like: there's a ~60-40% ratio in weight bearing between the front and hind paws; the hind paws are generally smaller in surface; the paws are (often) spatially divided into left and right. However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like. Based on the answers I received to my question about peak detection within the paw, I'm hoping there are more advanced solutions for sorting the paws. Especially because the pressure distribution and the progression thereof are different for each separate paw, almost like a fingerprint, I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence. So I'm looking for a better way to sort the results with their corresponding paw. For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), whose keys represent the impacts that were extracted. Also note that 'false' impacts, such as where the paw is only partially measured (in space or time), can be ignored. They are only useful because they can help in recognizing a pattern, but they won't be analyzed. And for anyone interested, I'm keeping a blog with all the updates regarding the project!
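
    As an illustration of how naive the three heuristics above really are, here is a deliberately simple sketch in Java (the original data are sliced pressure arrays in Python, so treat this purely as a sketch of the scoring idea; the Impact record, the assumption of exactly four impacts per stride, and the use of the mean x coordinate for left/right are all my own simplifications, not from the post):

        import java.util.Arrays;
        import java.util.Comparator;

        public class PawSorter {

            // Hypothetical per-impact summary; the real data are 2-D pressure arrays over time.
            public record Impact(double totalPressure, double surface, double meanX) {}

            public enum Label { LF, RF, LH, RH }

            /**
             * Naive labelling of four impacts from one stride cycle:
             * - the two heaviest impacts (larger surface, ~60% of the load) are assumed to be front paws,
             * - within each front/hind pair, the smaller mean x coordinate is assumed to be the left paw.
             */
            public static Label[] label(Impact[] stride) {
                Integer[] order = {0, 1, 2, 3};
                Arrays.sort(order, Comparator.comparingDouble((Integer i) -> stride[i].totalPressure()).reversed());

                Label[] labels = new Label[4];
                assignPair(stride, labels, order[0], order[1], Label.LF, Label.RF); // front pair
                assignPair(stride, labels, order[2], order[3], Label.LH, Label.RH); // hind pair
                return labels;
            }

            private static void assignPair(Impact[] stride, Label[] labels, int a, int b, Label left, Label right) {
                if (stride[a].meanX() <= stride[b].meanX()) {
                    labels[a] = left;
                    labels[b] = right;
                } else {
                    labels[a] = right;
                    labels[b] = left;
                }
            }
        }

    A lame dog, a partial impact, or a different walking direction immediately breaks these assumptions, which is exactly why clustering on the pressure "fingerprint" of each paw looks more promising.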

    Read the article

  • C++ print out a binary search tree

    - by starcorn
    Hello, I've got nothing better to do this Christmas holiday, so I decided to try making a binary search tree. I'm stuck on the print function. How should the logic behind it work? The tree already inserts values in a somewhat sorted order, and I want to print the tree from the smallest value to the biggest. So I need to travel to the furthest left branch of the tree to print the first value. Right, so after that how do I remember the way back up? Do I need to save the previous node? A search on Wikipedia gave me a solution that used a stack, and for the other solutions I couldn't quite understand how they were made, so I'm asking here instead, hoping someone can enlighten me. I also wonder whether my insert function is OK. I've seen others' solutions that were smaller.

    void treenode::insert(int i) {
        if (root == 0) {
            cout << "root" << endl;
            root = new node(i, root);
        } else {
            node* travel = root;
            node* prev;
            while (travel) {
                if (travel->value > i) {
                    cout << "travel left" << endl;
                    prev = travel;
                    travel = travel->left;
                } else {
                    cout << "travel right" << endl;
                    prev = travel;
                    travel = travel->right;
                }
            }
            // insert
            if (prev->value > i) {
                cout << "left" << endl;
                prev->left = new node(i);
            } else {
                cout << "right" << endl;
                prev->right = new node(i);
            }
        }
    }

    void treenode::print() {
        node* travel = root;
        while (travel) {
            cout << travel->value << endl;
            travel = travel->left;
        }
    }
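
    The usual answer to "how do I remember the way back up" is to let recursion keep that path on the call stack: print the left subtree, then the node, then the right subtree. Here is a minimal sketch of that in-order idea (shown in Java; the recursive structure carries over directly to the C++ node class from the post):

        public class TreeNode {
            int value;
            TreeNode left, right;

            TreeNode(int value) { this.value = value; }

            /** Prints the subtree rooted at this node in ascending order. */
            void printInOrder() {
                if (left != null) left.printInOrder();   // everything smaller first
                System.out.println(value);               // then this node
                if (right != null) right.printInOrder(); // then everything larger
            }
        }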

    Read the article

  • Can't use attached property on combobox inside hierarchical datatemplate WPF

    - by jesse_t_r
    I'm hoping to use an attached property to assign a command to the selection-changed event of a ComboBox that is embedded inside a TreeView. I'm attempting to set the attached property inside the hierarchical data template for the tree, but the command is not set and does not fire when the item in the ComboBox is changed. I've found that setting the attached property directly on a ComboBox outside of a DataTemplate works fine; here is how I'm trying to set the property in the template:

    <HierarchicalDataTemplate x:Key="template1" ItemsSource="{Binding Path=ChildColumns}">
        <Border Background="{StaticResource TreeItem_Background}" BorderBrush="Blue" BorderThickness="2"
                CornerRadius="5" Margin="2,5,5,2" HorizontalAlignment="Left">
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto"/>
                    <ColumnDefinition Width="*"/>
                    <ColumnDefinition Width="*"/>
                </Grid.ColumnDefinitions>
                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions>
                <TextBlock MinWidth="80" HorizontalAlignment="Left" Grid.Column="0" Margin="5,2,2,2"
                           Grid.Row="0" Text="{Binding Path=ColName}"/>
                <ComboBox Name="cboColType" Grid.Column="1" HorizontalAlignment="Right"
                          ItemsSource="{Binding Source={StaticResource dataFromEnum}}"
                          SelectedItem="{Binding Path=ColumnType}" Margin="2,2,2,2"
                          local:ItemSelectedBehavior.ItemSelected="{Binding Path=LoadConfigCommand}" />
            </Grid>
        </Border>
    </HierarchicalDataTemplate>

    I also tried creating a style

    <Style x:Key="childItemStyle" TargetType="{x:Type FrameworkElement}">
        <Setter Property="local:ItemSelectedBehavior.ItemSelected" Value="{Binding Path=LoadConfigCommand}" />
    </Style>

    and setting the ItemContainerStyle in the hierarchical data template to that style... still no luck:

    <HierarchicalDataTemplate>
        ...
        <ComboBox Name="cboColType" Grid.Column="1" HorizontalAlignment="Right"
                  ItemsSource="{Binding Source={StaticResource dataFromEnum}}"
                  SelectedItem="{Binding Path=ColumnType}" Margin="2,2,2,2"
                  ItemContainerStyle="{StaticResource childItemStyle}" />
        ...
    </HierarchicalDataTemplate>

    I'm still learning a lot about WPF, so I'm assuming there is something particular about the hierarchical data template that is not allowing the attached property to be set. I have found similar posts in the forums and tried to implement their solutions as above, but after a day of searching and experimenting with no luck, I'm hoping someone has an idea about this...

    Read the article

  • How effective are technical tests, and are they necessary?

    - by The Elite Gentleman
    Hi everyone, I recently took a Java technical test (for a company looking for senior Java developers) and, funnily enough, I only realised the technical terms for what I've been doing all along after I'd written the test. I'm not big on IT jargon when it comes to development, but I can pretty much code and create solutions without being aware that I'm using a particular design pattern (or the specifics of that design pattern) or technology. Some things, such as JMS, frameworks, etc., I learned while programming at home and googling for answers to problems I had. Others, e.g. IoC, surrogates in databases, etc., I have used extensively without knowing there was a name for them. Do you think these technical tests are effective, and why? What interesting questions did you find that boggled your brain while the clock kept ticking? Seeing that IT is evolving at a rapid rate, do we have to constantly keep up with the new terms that come out? Some questions I was asked: What object-oriented principle is violated by this architectural mechanism for dot notation? Is indexing tables effective for range query or point query search? What is ThreadLocal and what is it used for? Method overloading vs method overriding - what is the difference between the two? What is dynamic binding? Now, imagine my poor head trying to understand this jargon (considering I use it almost every day). PS: These were not programming questions, where you have a problem and write code to solve it - rather, thinking-type questions where you write answers (against the clock). Update: I didn't come across as clearly as I should have. There are those who are technically "book smart" but have very little hands-on experience, and vice versa. So the question (in connection with what I've asked) is: are these technical tests seeking "book smart" people, or people with lots of hands-on experience (some of whom are not that well clued up on book-smart jargon)? How effective is it, then, for companies to look for developers this way if most of the questions are so terminology-centric (if that's the correct term :))?
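
    A couple of the listed questions (overloading vs overriding, dynamic binding) are easiest to see in code, so here is a small, self-contained Java illustration - not taken from the test itself, just the standard distinction:

        public class Dispatch {
            static class Animal {
                String speak() { return "generic sound"; }           // will be overridden
                String greet(Object o) { return "hello, object"; }   // overload #1
                String greet(String s) { return "hello, " + s; }     // overload #2
            }

            static class Dog extends Animal {
                @Override
                String speak() { return "woof"; }                    // overriding: same signature in a subclass
            }

            public static void main(String[] args) {
                Animal a = new Dog();

                // Overriding + dynamic binding: the runtime type (Dog) decides which speak() runs.
                System.out.println(a.speak());        // woof

                // Overloading: the compile-time type of the argument decides which greet() is chosen.
                Object name = "Alice";
                System.out.println(a.greet(name));    // hello, object
                System.out.println(a.greet("Alice")); // hello, Alice
            }
        }

    ThreadLocal, for what it's worth, is simply a variable holder that gives each thread its own independently initialized copy of the value, which is why it shows up in connection with per-request or per-thread state.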

    Read the article

< Previous Page | 331 332 333 334 335 336 337 338 339 340 341 342  | Next Page >