Search Results

Search found 4109 results on 165 pages for 'plan'.


  • Why isn't my query using any indices when I use a subquery?

    - by sfussenegger
    I have the following tables (removed columns that aren't used for my examples): CREATE TABLE `person` ( `id` int(11) NOT NULL, `name` varchar(1024) NOT NULL, `sortname` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `sortname` (`sortname`(255)), KEY `name` (`name`(255)) ); CREATE TABLE `personalias` ( `id` int(11) NOT NULL, `person` int(11) NOT NULL, `name` varchar(1024) NOT NULL, PRIMARY KEY (`id`), KEY `person` (`person`), KEY `name` (`name`(255)) ) Currently, I'm using this query which works just fine: select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; mysql> explain select p.* from person p where name = 'John Mayer' or sortname = 'John Mayer'; +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ | 1 | SIMPLE | p | index_merge | name,sortname | name,sortname | 767,767 | NULL | 3 | Using sort_union(name,sortname); Using where | +----+-------------+-------+-------------+---------------+---------------+---------+------+------+----------------------------------------------+ 1 row in set (0.00 sec) Now I'd like to extend this query to also consider aliases. First, I've tried using a join: select p.* from person p join personalias a where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; mysql> explain select p.* from person p join personalias a on p.id = a.person where p.name = 'John Mayer' or p.sortname = 'John Mayer' or a.name = 'John Mayer'; +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ | 1 | SIMPLE | a | ALL | ref,name | NULL | NULL | NULL | 87401 | Using temporary | | 1 | SIMPLE | p | eq_ref | PRIMARY,name,sortname | PRIMARY | 4 | musicbrainz.a.ref | 1 | Using where | +----+-------------+-------+--------+-----------------------+---------+---------+-------------------+-------+-----------------+ 2 rows in set (0.00 sec) This looks bad: no index, 87401 rows, using temporary. Using temporary only appears when I use distinct, but as an alias might be the same as the name, I can't really get rid of it. 
Next, I've tried to replace the join with a subquery: select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select person from personalias a where a.name = 'John Mayer'); mysql> explain select p.* from person p where p.name = 'John Mayer' or p.sortname = 'John Mayer' or p.id in (select id from personalias a where a.name = 'John Mayer'); +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ | 1 | PRIMARY | p | ALL | name,sortname | NULL | NULL | NULL | 540309 | Using where | | 2 | DEPENDENT SUBQUERY | a | index_subquery | person,name | person | 4 | func | 1 | Using where | +----+--------------------+-------+----------------+------------------+--------+---------+------+--------+-------------+ 2 rows in set (0.00 sec) Again, this looks pretty bad: no index, 540309 rows. Interestingly, both queries (select p.* from person ... or p.id in (4711,12345) and select id from personalias a where a.name = 'John Mayer') work extremely well. Why doesn't MySQL use any indices for both of my queries? What else could I do? Currently, it looks best to fetch person.ids for aliases and add them statically as an in(...) to the second query. There certainly has to be another way to do this with a single query. I'm currently out of ideas though. Could I somehow force MySQL into using another (better) query plan?
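
    One rewrite that often lets MySQL use an index for every branch is to split the OR across a UNION instead of relying on the join or subquery; a sketch against the tables above (untested here, and UNION also removes duplicates the way distinct would):

      -- Each branch can use its own index; the alias branch uses the person key on personalias.
      SELECT p.* FROM person p WHERE p.name = 'John Mayer'
      UNION
      SELECT p.* FROM person p WHERE p.sortname = 'John Mayer'
      UNION
      SELECT p.*
        FROM person p
        JOIN personalias a ON a.person = p.id
       WHERE a.name = 'John Mayer';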


  • debugging JBoss 100% CPU usage

    - by NateS
    Originally posted on Server Fault, where it was suggested this question might better asked here. We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it. At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time, however pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine, only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs. I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users. I have not gotten my test environment to replicate the problem. Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred to minimize down time. Next time it happens they will get a developer to take a look. The question is, next time it happens, what can be done to determine the cause? We could setup a separate JBoss instance on the same box and install the web app separately from the web service. This way when the problem next occurs we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much though. Should I enable JMX remote? This way the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant down side to enabling JMX remote in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!
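
    One low-risk way to see which threads are eating the CPU without enabling remote JMX is to match a thread dump against per-thread CPU usage; a sketch of that recipe on Linux with a Sun/Oracle JDK (the PIDs, thread IDs and paths are placeholders):

      # 1. Find the native IDs of the threads pinning the CPUs
      top -H -p <jboss_pid>
      # 2. Convert the hottest thread ID to hex (thread dumps list it as nid=0x...)
      printf '%x\n' <thread_id>
      # 3. Take a thread dump without stopping the JVM
      jstack <jboss_pid> > /tmp/threads.txt      # or: kill -3 <jboss_pid> (dump goes to the console log)
      # 4. Locate that thread's stack trace in the dump
      grep -A 20 'nid=0x<hex_thread_id>' /tmp/threads.txt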


  • Fixed point math in c#?

    - by x4000
    Hi there, I was wondering if anyone here knows of any good resources for fixed point math in c#? I've seen things like this (http://2ddev.72dpiarmy.com/viewtopic.php?id=156) and this (http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math), and a number of discussions about whether decimal is really fixed point or actually floating point (update: responders have confirmed that it's definitely floating point), but I haven't seen a solid C# library for things like calculating cosine and sine. My needs are simple -- I need the basic operators, plus cosine, sine, arctan2, PI... I think that's about it. Maybe sqrt. I'm programming a 2D RTS game, which I have largely working, but the unit movement when using floating-point math (doubles) has very small inaccuracies over time (10-30 minutes) across multiple machines, leading to desyncs. This is presently only between a 32 bit OS and a 64 bit OS, all the 32 bit machines seem to stay in sync without issue, which is what makes me think this is a floating point issue. I was aware from this as a possible issue from the outset, and so have limited my use of non-integer position math as much as possible, but for smooth diagonal movement at varying speeds I'm calculating the angle between points in radians, then getting the x and y components of movement with sin and cos. That's the main issue. I'm also doing some calculations for line segment intersections, line-circle intersections, circle-rect intersections, etc, that also probably need to move from floating-point to fixed-point to avoid cross-machine issues. If there's something open source in Java or VB or another comparable language, I could probably convert the code for my uses. The main priority for me is accuracy, although I'd like as little speed loss over present performance as possible. This whole fixed point math thing is very new to me, and I'm surprised by how little practical information on it there is on google -- most stuff seems to be either theory or dense C++ header files. Anything you could do to point me in the right direction is much appreciated; if I can get this working, I plan to open-source the math functions I put together so that there will be a resource for other C# programmers out there. UPDATE: I could definitely make a cosine/sine lookup table work for my purposes, but I don't think that would work for arctan2, since I'd need to generate a table with about 64,000x64,000 entries (yikes). If you know any programmatic explanations of efficient ways to calculate things like arctan2, that would be awesome. My math background is all right, but the advanced formulas and traditional math notation are very difficult for me to translate into code.
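
    For reference, the core of a fixed-point type is quite small; a minimal 16.16 sketch in C# (not a vetted library; rounding, overflow handling and the trig functions would still need to be added) looks roughly like this:

      // Minimal 16.16 fixed-point sketch. Multiplication widens to long so the
      // fractional bits are kept; no overflow checking is done here.
      public struct Fix16
      {
          public const int Shift = 16;
          public const int One = 1 << Shift;
          public int Raw;                        // stored value * 65536

          public static Fix16 FromInt(int v)    { Fix16 f; f.Raw = v << Shift; return f; }
          public double ToDouble()              { return Raw / 65536.0; }

          public static Fix16 operator +(Fix16 a, Fix16 b) { Fix16 f; f.Raw = a.Raw + b.Raw; return f; }
          public static Fix16 operator -(Fix16 a, Fix16 b) { Fix16 f; f.Raw = a.Raw - b.Raw; return f; }
          public static Fix16 operator *(Fix16 a, Fix16 b) { Fix16 f; f.Raw = (int)(((long)a.Raw * b.Raw) >> Shift); return f; }
          public static Fix16 operator /(Fix16 a, Fix16 b) { Fix16 f; f.Raw = (int)(((long)a.Raw << Shift) / b.Raw); return f; }
      }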


  • Looking for efficient scaling patterns for Silverlight application with distributed text-file data sources

    - by Edward Tanguay
    I'm designing a Silverlight software solution for students and teachers to record flashcards, e.g. words and phrases that students find while reading and errors that teachers notice while teaching. Requirements are: each person publishes his own flashcards in a file on a web server, e.g. http://:www.mywebserver.com/flashcards.txt other people subscribe to that person's flashcards by using a Silverlight flashcard reader that I have developed and entering the URLs of flashcard files they want to subscribe to, URLs and imported flashcards being saved in IsolatedStorage the flashcards.txt file has the following simple format: title, then blocks of question/answers: Jim Smith's flashcards from English class 53-222, winter semester 2009 ==fla Das kann nicht sein. That can't be. ==fla Es sei denn, er kommt nicht. Unless he doesn't come. The user then makes public the URL to his flashcard file and other readers begin reading in his flashcards. In order to lower the bar for non-technical users to contribute, it will even be possible for them to save this text in a Google Document, which they publish and distribute the URL. The flashcard readers will then recognize it is a google document and perform the necessary screen scraping to get at the raw text. I have two technical questions about this approach: What is a best way to plan now for scalability issues: e.g. if your reader is subscribed to 10 flashcard files that are each 200K, it will have to download 2MB of text just to find out if any new flashcards are available. Or can I somehow accurately and consistently get at the last update date/time of text files on servers and published google docs? Each reader will have the ability to allow the person to test himself on imported flashcards and add meta information to them, e.g. categorize them, edit them, etc. This information will be stored in IsolatedStorage along with the important flashcards themselves. What is a good pattern to allow these readers to share and synchronize this meta data, e.g. so when you are looking at a flashcard you can see that 5 other people have made corrections to it. The best solution I can think of now is that the Silverlight readers will have to republish their data to a central database, but then there is the problem of uniquely identifying each flashcard, the best approach seems to be URL + position-in-file, or even better URL + original text of both question and answer fields, but both of these have their obvious drawbacks. The main requirement is that the bar for participation is kept as low as possible, i.e. type text in a google document, publish it, distribute the URL, and you're publishing within the flashcard community. So I want to come up with the most efficient technical solutions in order to compensate for the lack of database, lack of unique ids, etc. For those who have designed or developed similar non-traditional, distributed database projects like this, what advice, experience or best-practice tips you can share on the above two points?


  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has it's own instance of the database / datastore, but the .Net app is single instance. The documents are pretty much read only (i.e. an image archive of tiffs or PDFs) I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. e.g. One tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date. So far I've used the old method which Sharepoint (used?) to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then, I've got a "mapping" table which stores the customer specific index name, and the database field that will map to. I've avoided the key-value pair model in the DB due to volume of documents. This way, we can support multiple document types in the one table, and get reasonably high performance out of it, and allow for custom document type searches (i.e. user selects a document type, then they're presented with a list of search fields). However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it in the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static. So, my questions I guess : Is a NoSQL DB a good fit Is MongoDB the best for Asp.Net (I saw Raven and Velocity, but they're still kinda beta) Can I store a key for each document, and then store the action history in a MSSQL DB with this key? I don't need to do joins, it would be if a person clicks "View History" against a document. How would performance compare between the two (NoSQL DB vs denormalized "document" table) Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
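
    For a feel of how the per-tenant index fields flatten out in a document store, here is a rough sketch using the official MongoDB .NET driver (the database, collection and field names are invented, and this is not a recommendation of a particular driver version):

      // using MongoDB.Bson; using MongoDB.Driver;
      // Each tenant's document type simply carries its own fields; no mapping table needed.
      var client = new MongoClient("mongodb://localhost");
      var docs = client.GetDatabase("tenant_acme").GetCollection<BsonDocument>("documents");

      var invoice = new BsonDocument
      {
          { "docType",       "Invoice" },
          { "CustomerId",    1042 },
          { "InvoiceNumber", "INV-000123" },
          { "InvoiceDate",   new DateTime(2010, 5, 1) },
          { "actionHistory", new BsonArray() }    // views, emails, faxes appended here later
      };
      docs.InsertOne(invoice);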


  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ): SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn FROM (SELECT /*+ FIRST_ROWS(1) */ mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn, ROWNUM AS ora_rn FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt, mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind, mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt, mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code, mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level, mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id, mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin, mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip, mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name, mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name, mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name, mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id, mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time, mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS 
mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn FROM mbr JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn WHERE mbr_identfn.mbr_idn = mbr.mbr_idn AND mbr_identfn.identfd_type = :identfd_type_1 AND mbr_identfn.identfd_number = :identfd_number_1 AND mbr_identfn.entity_active = :entity_active_1) WHERE ROWNUM <= :ROWNUM_1) WHERE ora_rn > :ora_rn_1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 9936 0.46 0.49 0 0 0 0 Execute 9936 0.60 0.59 0 0 0 0 Fetch 9936 329.87 404.00 0 136966922 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 29808 330.94 405.09 0 136966922 0 0 Misses in library cache during parse: 0 Optimizer mode: FIRST_ROWS Parsing user id: 36 (JIVA_DEV) Rows Row Source Operation ------- --------------------------------------------------- 0 VIEW (cr=102 pr=0 pw=0 time=2180 us) 0 COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us) 0 NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us) 0 INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053) 0 TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us) 0 INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT MODE: HINT: FIRST_ROWS 0 VIEW 0 COUNT (STOPKEY) 0 NESTED LOOPS 0 INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE)) 0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE) 0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE)) ******************************************************************************** Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first index of this column is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
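
    If IDX_MBR_IDENTFN really does lead with mbr_idn (the join/primary-key column), the equality filters on identfd_type and identfd_number can only be reached by skipping every distinct leading value, which is exactly what the plan shows. A hedged sketch of an index that would allow a plain range scan instead (the column order here is an assumption, not taken from the real schema):

      -- Lead with the equality-filtered columns so the lookup becomes an ordinary range scan
      -- followed by a unique probe into MBR via PK_CLAIMANT.
      CREATE INDEX idx_mbr_identfn_lookup
          ON mbr_identfn (identfd_type, identfd_number, entity_active, mbr_idn);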


  • Creating a variable list Pashua, OS X & Bash.

    - by S1syphus
    First of all, for those that don't know, Pashua is a tool for creating native Aqua dialog windows. An example of what a window config looks like is: # pashua_run() # Define what the dialog should be like # Take a look at Pashua's Readme file for more info on the syntax conf=" # Set transparency: 0 is transparent, 1 is opaque *.transparency=0.95 # Set window title *.title = Introducing Pashua # Introductory text tb.type = text tb.default = "HELLO WORLD" tb.height = 276 tb.width = 310 tb.x = 340 tb.y = 44 if [ -e "$icon" ] then # Display Pashua's icon conf="$conf img.type = image img.x = 530 img.y = 255 img.path = $icon" fi if [ -e "$bgimg" ] then # Display background image conf="$conf bg.type = image bg.x = 30 bg.y = 2 bg.path = $bgimg" fi pashua_run "$conf" echo " tb = $tb" The problem is that Pashua can't really read output from stdout, but it can take arguments. Following on from what Dennis Williamson posted here, what it should ideally do is generate the widget definitions from a text file, to be executed inside pashua_run, or have pashua_run wrapped around the window argument: count=1 while read -r i do echo "AB${count}.type = openbrowser" echo "AB${count}.label = Choose a master playlist file" echo "AB${count}.width=310" echo "AB${count}.tooltip = Blabla filesystem browser" echo "some text with a line from the file: $i" (( count++ )) done < TEST.txt >> long.txt So the output is: AB1.type = openbrowser AB1.label = Choose a master playlist file AB1.width=310 AB1.tooltip = Blabla filesystem browser some text with a line from the file: foo AB2.type = openbrowser AB2.label = Choose a master playlist file AB2.width=310 AB2.tooltip = Blabla filesystem browser some text with a line from the file: bar AB3.type = openbrowser AB3.label = Choose a master playlist file AB3.width=310 AB3.tooltip = Blabla filesystem browser some text with a line from the file: dev AB4.type = openbrowser AB4.label = Choose a master playlist file AB4.width=310 AB4.tooltip = Blabla filesystem random So if there is a clever way to take that output and place it into pashua_run on the fly (i.e. load the contents of TEST.txt, generate the widget definitions and place them into pashua_run), that would be cool. I've tried using cat and opening the file, but because it's inside pashua_run it doesn't work; is there a smart way to break out and then back in? The second way I was thinking of was to take the output, append it into the middle of a text file containing the Pashua runtime, and then execute that; maybe slightly hacky, but I would imagine it will do the job. Any ideas? ++ I know I could probably make my life a lot easier by doing this in ActionScript and Cocoa, but at present I don't have time for such a learning curve, although I do plan to get round to it.
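
    One way to do the splice in plain bash is to build the generated widget definitions in a command substitution first and then append them to $conf before pashua_run is called; a rough sketch (TEST.txt path and the AB widget names as above):

      # Build the per-line widget definitions outside the config string...
      extra=$(
        count=1
        while read -r line; do
          printf 'AB%d.type = openbrowser\n' "$count"
          printf 'AB%d.label = Choose a master playlist file\n' "$count"
          printf 'AB%d.width = 310\n' "$count"
          printf 'AB%d.tooltip = Blabla filesystem browser\n' "$count"
          count=$((count + 1))
        done < TEST.txt
      )
      # ...then splice them into the dialog definition and run it once.
      conf="$conf"$'\n'"$extra"
      pashua_run "$conf"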


  • Volunteer for a potential employer?

    - by EoRaptor013
    I've been looking for work since March, and haven't had much luck. Recently, however, I interviewed with a small company near my home for a C#, .NET, SQL development position. I hit it off very well with the hiring manager during the phone screen, and even more so during the face to face. Unfortunately, I failed the practical test: wiring up a web form, creating a couple of SQL stored procedures, saving new data with validation, and creating a minimal search screen. I knew what I was doing, but I was too slow to meet their standards as all the work needed to be done within an hour. Nevertheless, I really liked the place, the environment, the people who I would have been working with, and the boss. (I gave the company an 11 on Joel's 12 point scale.) So, the obvious next step was to scrape the rust off. I've been trying to create little projects for myself, but I don't know that I've been effective in getting any faster. What with all that goes into creating a project, I'm not heads-down coding as much as I think I need. Now, with all that introduction, here's the question. I have been thinking about calling the hiring manager at that place, and asking him to let me volunteer for three or four weeks, with no strings attached. I think it would benefit me, and wouldn't cost him anything (as long as I didn't slow the existing people down!). At the end of that period, he might, or might not, be inclined to hire me, but even if not, I would have had as much as 160 hours of in the trenches development. Maybe not all shiny, but no more rust, I would think. Does this plan make any sense at all? I certainly don't want to sound desperate (although, I'm not far from being there), and I very much need the tuneup, lube, and change the oil. What's the downside, if any, to me doing this? Do any of you see red flags going up—either from the prerspective of the hiring manager, or from the perspective of a developer?


  • How do I get a screenshot of a given website using C#

    - by Ender
    I'm writing a specialised crawler and parser for internal use and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and will save them as a bitmap image, from there I plan to use LockBits in order to create a list of the five most used colours within the image. To my knowledge it's the easiest way to get the colours used within a web page but if there is an easier way to do it please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme? EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. Anyway, the code seems to work well, but the problem I am having right now is that I am running it within a form, and naturally with Application.Run() being called I cannot run two instances of the same form at once. It recommended Form.showDialog() but that broke everything. Can anyone give me a hand with this code? public static void buildScreenshotFromURL(string url) { int width = 800; int height = 600; using (WebBrowser browser = new WebBrowser()) { browser.Width = width; browser.Height = height; browser.ScrollBarsEnabled = true; // This will be called when the page finishes loading browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted); //browser.DocumentCompleted += OnDocumentCompleted; browser.Navigate(url); // This prevents the application from exiting until // Application.Exit is called // Application.Run() does not work as it cannot be called twice, recommended form.showDialog() // but still issues Application.Run(); } } public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { // Define size of thumbnail neded int thumbSize = 50; // Now that the page is loaded, save it to a bitmap WebBrowser browser = (WebBrowser)sender; using (Graphics graphics = browser.CreateGraphics()) { using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics)) { Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height); browser.DrawToBitmap(bitmap, bounds); Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero)); thumbBitmap.Save("screenshot.png", ImageFormat.Png); handleImage(thumbBitmap); } }
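
    A common workaround (a sketch, not a drop-in replacement for the code above) is to drop Application.Run() entirely and pump messages until the page is ready, which also removes the need for the DocumentCompleted callback; it assumes the caller is on an STA thread, e.g. a [STAThread] Main:

      // using System.Drawing; using System.Drawing.Imaging; using System.Windows.Forms;
      public static void SaveScreenshot(string url, string path)
      {
          using (WebBrowser browser = new WebBrowser())
          {
              browser.Width = 800;
              browser.Height = 600;
              browser.ScrollBarsEnabled = false;
              browser.ScriptErrorsSuppressed = true;
              browser.Navigate(url);

              // Pump the message loop until the page has loaded instead of calling Application.Run().
              while (browser.ReadyState != WebBrowserReadyState.Complete)
                  Application.DoEvents();

              using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height))
              {
                  browser.DrawToBitmap(bitmap, new Rectangle(0, 0, bitmap.Width, bitmap.Height));
                  bitmap.Save(path, ImageFormat.Png);
              }
          }
      }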


  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so: CREATE TABLE AddressLookup ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Address TEXT UNIQUE ); CREATE TABLE EmailInfo ( MessageID INTEGER PRIMARY KEY AUTOINCREMENT, ToAddrRef INTEGER REFERENCES AddressLookup(ID), FromAddrRef INTEGER REFERENCES AddressLookup(ID) /* Additional columns omitted for brevity. */ ); And for convenience, a view to join these tables: CREATE VIEW EmailView AS SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr FROM EmailInfo LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID) LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID); In order to be able to use this view as if it were a regular table, I've made some triggers: CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView BEGIN DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID; END; CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView BEGIN INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr); INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr); INSERT INTO EmailInfo SELECT NEW.MessageID, A1.ID, A2.ID FROM AddressLookup A1, AddressLookup A2 WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr; END; CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView BEGIN UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID; REPLACE INTO EmailView SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr; END; The problem: after INSERT OR REPLACE INTO EmailView VALUES (1, 'alice@example.com', 'bob@example.com'); INSERT OR REPLACE INTO EmailView VALUES (2, 'carol@example.com', 'alice@example.com'); the updated rows contain: MessageID ToAddr FromAddr --------- ------ -------- 1 NULL bob@example.com 2 carol@example.com alice@example.com There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then the conflict handling policy of the outer statement is used instead. So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
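
    One workaround that sidesteps the documented behaviour is to avoid conflict clauses inside the trigger entirely, so there is nothing for the outer REPLACE policy to override; a sketch of the insert trigger rewritten with explicit existence checks (untested):

      CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
      BEGIN
          -- Guard the lookup inserts explicitly instead of using INSERT OR IGNORE,
          -- so an outer INSERT OR REPLACE cannot turn them into destructive REPLACEs.
          INSERT INTO AddressLookup(Address)
              SELECT NEW.ToAddr
              WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.ToAddr);
          INSERT INTO AddressLookup(Address)
              SELECT NEW.FromAddr
              WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.FromAddr);
          INSERT INTO EmailInfo
              SELECT NEW.MessageID, A1.ID, A2.ID
              FROM AddressLookup A1, AddressLookup A2
              WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr;
      END;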


  • Getting a screenshot of a page using C# - Need help with code

    - by Ender
    I'm writing a specialised crawler and parser for internal use and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and will save them as a bitmap image, from there I plan to use LockBits in order to create a list of the five most used colours within the image. To my knowledge it's the easiest way to get the colours used within a web page but if there is an easier way to do it please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme? EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. Anyway, the code seems to work well, but the problem I am having right now is that I am running it within a form, and naturally with Application.Run() being called I cannot run two instances of the same form at once. It recommended Form.showDialog() but that broke everything. Can anyone give me a hand with this code? public static void buildScreenshotFromURL(string url) { int width = 800; int height = 600; using (WebBrowser browser = new WebBrowser()) { browser.Width = width; browser.Height = height; browser.ScrollBarsEnabled = true; // This will be called when the page finishes loading browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted); //browser.DocumentCompleted += OnDocumentCompleted; browser.Navigate(url); // This prevents the application from exiting until // Application.Exit is called // Application.Run() does not work as it cannot be called twice, recommended form.showDialog() // but still issues Application.Run(); } } public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { // Define size of thumbnail neded int thumbSize = 50; // Now that the page is loaded, save it to a bitmap WebBrowser browser = (WebBrowser)sender; // Code edited from example below to make smaller bitmap and save as PNG using (Graphics graphics = browser.CreateGraphics()) { using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics)) { Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height); browser.DrawToBitmap(bitmap, bounds); Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero)); thumbBitmap.Save("screenshot.png", ImageFormat.Png); handleImage(thumbBitmap); } }


  • BASH, multiple arrays and a loop.

    - by S1syphus
    At work, we have 7 or 8 hard drives that we dispatch around the country; each has a unique label, and the labels are not sequential. Ideally a drive is plugged into our desktop and then gets the folders from the server that correspond to the drive name. Sometimes only one hard drive gets plugged in, sometimes several, and more may be added in the future. Each mounts to /Volumes/ under its identifier; so for example /Volumes/f00, where f00 is the identifier. What I want to happen: scan the volumes to see if any of the drives are plugged in, then check the server to see if the corresponding folder exists, and if it does, copy the folder and its subfolders recursively. Here is what I have so far; it checks if the drive exists in /Volumes: #!/bin/sh #Declare drives in the array ARRAY=( foo bar long ) #Get the drives from the array DRIVES=${#ARRAY[@]} #Define base dir to check BaseDir="/Volumes" #Define shared server fold on local mount points #I plan to use AFP eventually, but for the sake of ease #using a local mount. ServerMount="BigBlue" #Define folder name for where files are to come from Dispatch="File-Dispatch" dir="$BaseDir/${ARRAY[${i}]}" #Loop through each item in the array and check if exists on /Volumes for (( i=0;i<$DRIVES;i++)); do dir="$BaseDir/${ARRAY[${i}]}" if [ -d "$dir" ]; then echo "$dir exists, you win." else echo "$dir is not attached." fi done What I can't figure out is how to check the server's folders while looping through the hard drive mount points. So I could do something like: #!/bin/sh #Declare drives, and folder location in arrays ARRAY=( foo bar long ) ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch"")) #Get the drives from the array DRIVES=${#ARRAY[@]} SERVERFOLDER=${#ARRAY1[@]} #Define base dir to check BaseDir="/Volumes" #Define shared server fold on local mount points ServerMount="BigBlue #Define folder name for where files are to come from Dispatch="File-Dispatch" dir="$BaseDir/${ARRAY[${i}]}" #List the contents from server directory into array ARRAY1=($(ls ""$BaseDir"/"$ServerMount"/"$Dispatch"")) echo ${list[@]} for (( i=0;i<$DRIVES;i++)); (( i=0;i<$SERVERFOLDER;i++)); do dir="$BaseDir/${ARRAY[${i}]}" ser="${ARRAY1[${i}]}" if [ "$dir" =~ "$sir" ]; then cp "$sir" "$dir" else echo "$dir is not attached." fi done I know that is pretty wrong... well, very wrong, but I hope it gives you the idea of what I am trying to achieve. Any ideas or suggestions?
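
    A hedged sketch of one way to combine the two loops: key everything off the drive label rather than pairing two arrays by index, and test both the mount point and the matching server folder before copying (paths as defined above; untested):

      #!/bin/bash
      BaseDir="/Volumes"
      ServerMount="BigBlue"
      Dispatch="File-Dispatch"
      DRIVES=( foo bar long )

      for label in "${DRIVES[@]}"; do
          vol="$BaseDir/$label"
          src="$BaseDir/$ServerMount/$Dispatch/$label"
          if [ -d "$vol" ] && [ -d "$src" ]; then
              # Drive is attached and the server has a matching folder: copy it recursively.
              cp -R "$src" "$vol"
          elif [ -d "$vol" ]; then
              echo "$label is attached but has no matching folder on the server."
          else
              echo "$label is not attached."
          fi
      done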


  • Jquery Ajax Call In For Loop Only Runs Once - Possible Issue with Timing & Exit Condition?

    - by Grumps
    Background I'm building a form that uses autocomplete against the EchoNest API. First the users picks an artist, using the Artist Suggest call. Next they select a song but the Song and/Or Artist song search doesn't provide a "wild card" search. It only returns exact matches. So based on the forums they suggest building an array of the songs and using auto complete on the array. I can only get a maximum of 100 responses at a time. I do know based on the initial response the number of songs. My Plan: Wrap the ajax call in a for loop ('runonceloop'). Amend the loop exit condition after the first response with the total number of songs. Challenge I'm having: The 'runonceloop' only completes a singe loop because or at least that's what I believe: The exit condition is satisfied before the first response [1] is received. I've tried to adjust the 'exit condition' and 'counter' such that they are set and and increased at the end of the success block. This seems to lock up my browser. Can someone provide some guidance on this situation?[2] I'd really appreciate it. I also don't think turning async off is a good idea because it locks the browser. Response[1]: { "response": { "status": { "code": "0", "message": "Success", "version": "4.2" }, "start": 0, "total": 121, //Used for "songs": [ { "id": "SOXZYYG127F3E1B7A2", "title": "Karma police" }, { "id": "SOXZABD127F3E1B7A2", "title" : "Creep" } ] } } } Code[2] var songsList = []; function getSongs() { var numsongs = 2; //at least 2 runs. var startindex = 0; runonceloop: //<~~~~Referenced in question for (var j = 0;j >= numsongs;) { console.log('numsongs' + numsongs); $.ajax({ url: "http://developer.echonest.com/api/v4/artist/songs", dataType: "jsonp", data: { results: 100, api_key: "XXXXXXXXXXX", format: "jsonp", name: $("#artist").val(), start: startindex }, success: function (data) { var songs = data.response.songs; numsongs = data.response.total; //modify exit condition for (var i = 0; i < songs.length; i++) { songsList.push(songs[i].title); } j +=100;// increase by 100 to match number of responses. } }); }};
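
    A common restructuring is to drop the outer loop entirely: request the first page, read response.total inside the success callback, and only then fire the remaining requests, since a synchronous loop condition can never see a total that arrives asynchronously. A sketch along those lines (same endpoint and placeholder api_key as above):

      var songsList = [];

      function fetchSongs(start, onTotal) {
          $.ajax({
              url: "http://developer.echonest.com/api/v4/artist/songs",
              dataType: "jsonp",
              data: { results: 100, api_key: "XXXXXXXXXXX", format: "jsonp",
                      name: $("#artist").val(), start: start },
              success: function (data) {
                  $.each(data.response.songs, function (i, song) {
                      songsList.push(song.title);
                  });
                  if (onTotal) { onTotal(data.response.total); }
              }
          });
      }

      function getSongs() {
          fetchSongs(0, function (total) {
              // Now that the real total is known, request the remaining pages.
              for (var start = 100; start < total; start += 100) {
                  fetchSongs(start, null);
              }
          });
      }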


  • Using VCL for the web (intraweb) as a trick for adding web interface to a legacy non-tiered (2 tiers) application

    - by user193655
    My team is maintaining a huge Client Server win32 Delphi application. It is a client/server application (Thick client) that uses DevArt (SDAC) components to connect to SQL Server. The business logic is often "trapped" in Component's event handlers, anyway with some degree of refactoring it is doable to move the business logic in common units (a big part of this work has already been done during refactoring... Maintaing legacy applications someone else wrote is very frustrating, but this is a very common job). Now there is the request of a web interface, I have several options of course, in this question i want to focus on the VCL for the web (intraweb) option. The idea is to use the common code (the same pas files) for both the client/server application and the web application. I heard of many people that moved legacy apps from delphi to intraweb, but here I am trying to keep the Thick client too. The idea is to use common code, may be with some compiler directives to write specific code: {$IFDEF CLIENTSERVER} {here goes the thick client specific code} {$ELSE} {here goes the Intraweb specific code} {$ENDIF} Then another problem is the "migration plan", let's say I have 300 features and on the first release I will have only 50 of them available in the web application. How to keep track of it? I was thinking of (ab)using Delphi interfaces to handle this. For example for the User Authentication I could move all the related code in a procedure and declare an interface like: type IUserAuthentication= interface['{0D57624C-CDDE-458B-A36C-436AE465B477}'] procedure UserAuthentication; end; In this way as I implement the IUserAuthentication interface in both the applications (Thick Client and Intraweb) I know that That feature has been "ported" to the web. Anyway I don't know if this approach makes sense. I made a prototype to simulate the whole process. It works for a "Hello world" application, but I wonder if it makes sense on a large application or this Interface idea is only counter-productive and can backfire. My question is: does this approach make sense? (the Interface idea is just an extra idea, it is not so important as the common code part described above) Is it a viable option? I understand it depends a lot of the kind of application, anyway to be generic my one is in the CRM/Accounting domain, and the number of concurrent users on a single installation is typically less than 20 with peaks of 50. EXTRA COMMENT (UPDATE): I ask this question because since I don't have a n-tier application I see Intraweb as the unique option for having a web application that has common code with the thick client. Developing webservices from the Delphi code makes no sense in my specific case, so the alternative I have is to write the web interface using ASP.NET (duplicating the business logic), but in this case I cannot take advantage of the common code in an easy way. Yes I could use dlls maybe, but my code is not suitable for that.
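
    As a sanity check of the idea, the shared unit might look roughly like this (a sketch only; the unit and class names are invented, and the real business logic would sit behind the define):

      unit UserAuthFeature;

      interface

      type
        IUserAuthentication = interface
          ['{0D57624C-CDDE-458B-A36C-436AE465B477}']
          procedure UserAuthentication;
        end;

        TUserAuthentication = class(TInterfacedObject, IUserAuthentication)
        public
          procedure UserAuthentication;
        end;

      implementation

      procedure TUserAuthentication.UserAuthentication;
      begin
        {$IFDEF CLIENTSERVER}
        // thick-client specific authentication flow goes here
        {$ELSE}
        // IntraWeb specific authentication flow goes here
        {$ENDIF}
      end;

      end.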


  • AD - DirectoryServices: VBNET2.0 - Speaking architecture...

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provider ease of use and a fully reusable code; I plan to write a factory for each of the objects managed (group, ou, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are services accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs required by the XML input file operations against the underlying AD as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rollbacks operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. Any suggestions? Thanks for any help, code sample, ideas, architural solution, everything!
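
    For the manual-creation features, the raw System.DirectoryServices calls are compact, which is what the façade would wrap; a hedged VB.NET 2005 sketch (the LDAP path and names are placeholders, not the real domain):

      ' Imports System.DirectoryServices
      Dim root As New DirectoryEntry("LDAP://OU=Migration,DC=example,DC=local")

      ' Create an OU under the chosen container
      Dim ou As DirectoryEntry = root.Children.Add("OU=ServiceAccounts", "organizationalUnit")
      ou.CommitChanges()

      ' Create a group inside the new OU
      Dim grp As DirectoryEntry = ou.Children.Add("CN=SvcOperators", "group")
      grp.Properties("sAMAccountName").Value = "SvcOperators"
      grp.CommitChanges()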


  • Delaying emails in PHP to avoid exceeding server limit

    - by Andrew P.
    Okay, so here's my problem: I have a list of members on a website, and periodically one of the admins my site (who are not very web or tech savvy) will send a newsletter to the memberlist. My current memberlist is well over 800 individuals long. So, I wrote an email script that sends the email to the full memberlist, with the members listed in the Bcc header. However, I've discovered that my host server has a limit of 300 emails per hour, which I apparently exceed even though the members are listed in the Bcc field. (I wasn't previously aware that the behaviour of Bcc was to send separate emails for each name on the list...) After some thought, I've come to the conclusion that my only solution is to have my script send only the email to only the first 300 emails, wait an hour, and send a second email to the next three hundred, wait another hour, and so on until I've sent the email to the whole member list. Looking around on the internet, I've seen some other solutions people have come up with for delaying emails in PHP. Sleep() is obviously not an option, because I can't just leave the script open and running for 3 or four hours. I've seen some people suggest cron jobs, but I'm not sure how feasible it would be to create three new cron jobs every time I send an email, use them once, and then delete them afterward. The final (and what I think is the smartest) solution I've seen, is to have a table in my database to temporarily store the emails to be delayed and sent later, and then create a cron job that checks this sql table every hour or so, compares the timestamp of the row to the current timestamp, and then sends the email if an hour has passed. So I'm asking you all which method you would recommend. Is there an easier solution that I've completely looked over (aside from getting a different hosting plan. ha!), or is there a cleaner way to do it than the database / cron job approach? tl;dr: I have 800 emails to send in an hour on a server that limits me to 300/hr. Using PHP, find a way to get around this problem in a way that the person sending the email needs only to click "send."
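
    The queue-table approach sketched below is one way to keep the admin's workflow at a single click of "send": the web page only inserts rows, and an hourly cron job drains them in batches below the cap. Table and column names here are invented:

      <?php
      // Drain at most 250 queued messages per run to stay safely under the 300/hour cap.
      $limit = 250;
      $pdo = new PDO('mysql:host=localhost;dbname=site', 'dbuser', 'dbpass');

      $rows = $pdo->query(
          "SELECT id, recipient, subject, body
             FROM email_queue
            WHERE sent_at IS NULL
            ORDER BY id
            LIMIT $limit"
      )->fetchAll(PDO::FETCH_ASSOC);

      $mark = $pdo->prepare("UPDATE email_queue SET sent_at = NOW() WHERE id = ?");

      foreach ($rows as $row) {
          if (mail($row['recipient'], $row['subject'], $row['body'])) {
              $mark->execute(array($row['id']));
          }
      }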


  • Getting a screenshot of a page using .NET - Need help with code

    - by Ender
    I'm writing a specialized crawler and parser for internal use and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and will save them as a bitmap image, from there I plan to use LockBits in order to create a list of the five most used colours within the image. To my knowledge it's the easiest way to get the colours used within a web page but if there is an easier way to do it please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme? EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. Anyway, the code seems to work well, but the problem I am having right now is that I am running it within a form, and naturally with Application.Run() being called I cannot run two instances of the same form at once. It recommended Form.showDialog() but that broke everything. Can anyone give me a hand with this code? public static void buildScreenshotFromURL(string url) { int width = 800; int height = 600; using (WebBrowser browser = new WebBrowser()) { browser.Width = width; browser.Height = height; browser.ScrollBarsEnabled = true; // This will be called when the page finishes loading browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted); //browser.DocumentCompleted += OnDocumentCompleted; browser.Navigate(url); // This prevents the application from exiting until // Application.Exit is called // Application.Run() does not work as it cannot be called twice, recommended form.showDialog() // but still issues Application.Run(); } } public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { // Define size of thumbnail neded int thumbSize = 50; // Now that the page is loaded, save it to a bitmap WebBrowser browser = (WebBrowser)sender; // Code edited from example below to make smaller bitmap and save as PNG using (Graphics graphics = browser.CreateGraphics()) { using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics)) { Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height); browser.DrawToBitmap(bitmap, bounds); Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero)); thumbBitmap.Save("screenshot.png", ImageFormat.Png); handleImage(thumbBitmap); } }


  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (driver) for an embedded device with generic communication media layer. Not sure what is the best way to do it so I'm seeking an advice from more experienced stackoverflow users:). Basically we've got devices around the country communicating with our servers (or a pda/laptop in used in field). Usual form of communication is over TCP/IP, but could be also using usb, RF dongle, IR, etc. The plan is to have object corresponding with each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how create something generic in between the media and the handling objects. I had a play around with the TCP dispatcher using boost.asio but trying to create something generic seems like a nightmare :). Anybody tried to do something like that? What is the best way how to do it? Example: Device connects to our Linux server. New middleware instance is created (on the server) which announces itself to one of the running services (details are not important). The service is responsible for making sure that device's time is synchronized. So it asks the middleware what is the device's time, driver translates it to device language (protocol) and sends the message, device responses and driver again translates it for the service. This might seem as a bit overkill for such a simple request but imagine there are more complex requests which the driver must translate, also there are several versions of the device which use different protocol, etc. but would use the same time sync service. The goal is to abstract the devices through the drivers to be able to use the same service to communicate with them. Another example: we find out that the remote communications with the device are down. So we send somebody out with PDA, he connects to the device using USB cable. Starts up the application which has the same functionality as the timesync service. Again middleware instance is created (on the PDA) to translate communication between application and the device this time only using USB/serial media not TCP/IP as in previous example. I hope it makes more sense now :) Cheers, Tom
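
    The shape being described, protocol drivers written against a media-neutral byte transport, can be prototyped with a tiny abstract interface before committing to boost.asio; a rough C++ sketch (class names and the opcode are invented, framing and error handling omitted):

      #include <string>
      #include <vector>

      // The driver only ever sees this; TCP, serial/USB and IR each get their own subclass.
      struct Transport {
          virtual ~Transport() {}
          virtual void send(const std::vector<unsigned char>& frame) = 0;
          virtual std::vector<unsigned char> receive() = 0;   // blocking read of one frame
      };

      class DeviceDriver {
      public:
          explicit DeviceDriver(Transport& link) : link_(link) {}

          // e.g. the time-sync service calls this; the proprietary protocol stays in here.
          std::string queryDeviceTime() {
              link_.send(std::vector<unsigned char>(1, 0x10));   // 0x10: invented "get time" opcode
              std::vector<unsigned char> reply = link_.receive();
              return std::string(reply.begin(), reply.end());
          }

      private:
          Transport& link_;
      };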


  • wget not behaving via IPC::Open3 vs bash

    - by Ryley
    I'm trying to stream a file from a remote website to a local command and am running into some problems when trying to detect errors. The code looks something like this: use IPC::Open3; my @cmd = ('wget','-O','-','http://10.10.1.72/index.php');#any website will do here my ($wget_pid,$wget_in,$wget_out,$wget_err); if (!($wget_pid = open3($wget_in,$wget_out,$wget_err,@cmd))){ print STDERR "failed to run open3\n"; exit(1) } close($wget_in); my @wget_outs = <$wget_out>; my @wget_errs = <$wget_err>; print STDERR "wget stderr: ".join('',@wget_errs); #page and errors outputted on the next line, seems wrong print STDERR "wget stdout: ".join('',@wget_outs); #clean up after this, not shown is running the filtering command, closing and waitpid'ing When I run that wget command directly from the command-line and redirect stderr to a file, something sane happens - the stdout will be the downloaded page, the stderr will contain the info about opening the given page. wget -O - http://10.10.1.72/index.php 2> stderr_test_file When I run wget via open3, I'm getting both the page and the info mixed together in stdout. What I expect is the loaded page in one stream and STDERR from wget in another. I can see I've simplified the code to the point where it's not clear why I want to use open3, but the general plan is that I wanted to stream stdout to another filtering program as I received it, and then at the end I was going to read the stderr from both wget and the filtering program to determine what, if anything, went wrong. Other important things: I was trying to avoid writing the wget'd data to a file, then filtering that file to another file, then reading the output. It's key that I be able to see what went wrong, not just reading $? >> 8 (i.e. I have to tell the user, hey, that IP address is wrong, or isn't the right kind of website, or whatever). Finally, I'm choosing system/open3/exec over other perl-isms (i.e. backticks) because some of the input is provided by untrustworthy users.
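
    For what it's worth, the merged streams match a well-known open3 gotcha: if the error filehandle passed to open3 is undefined, the child's stderr is duplicated onto stdout. A sketch of the usual fix, pre-creating the handle with Symbol::gensym:

      use strict;
      use warnings;
      use Symbol qw(gensym);
      use IPC::Open3;

      my @cmd = ('wget', '-O', '-', 'http://10.10.1.72/index.php');
      my ($wget_in, $wget_out);
      my $wget_err = gensym;   # must already exist, or stderr is merged into stdout

      my $pid = open3($wget_in, $wget_out, $wget_err, @cmd);
      close $wget_in;

      my @page   = <$wget_out>;
      my @errors = <$wget_err>;
      waitpid($pid, 0);

      print STDERR "wget stderr: ", join('', @errors);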


  • Do you feel underappreciated or resent the geek/nerd stigma?

    - by dotnetdev
    At work we have a piece of A4 paper with the number of everyone in the office. The structure of this document is laid out in rectangles, by department. I work for the department that does all the technical stuff. That includes support—bear in mind that the support staff isn't educated in IT but just has experience in PC maintenance and providing support to a system we resell but don't have source code access to, project manager, team leader, a network administrator, a product manager, and me, a programmer. Anyway, on this paper, we are labelled as nerds and geeks. I did take a little offence to this, as much as it is light hearted (but annoying and old) humour. I have a vivid image that a geek is someone who doesn't go out but codes all day. I code all day at home and at work (when I have something to code...), but I keep balance by going out. I don't know why it is only people who work with computers that get such a stigma. No other profession really gets the same stigma—skilled, technical, or whatever. An account manager (and this is hardly a skilled job) says, "Perhaps [MY NAME HERE] could write some geeky code tomorrow to add this functionality to the website." It is funny how I get such an unfair stigma but I am so pivotal. In fact, if it wasn't for me, the company would have nothing to sell so the account managers would be redundant! I make systems, they get sold, and this is what pays the wages. It's funny how the account managers get a commission for how many systems they sell, or manage to make clients resubscribe to. Yet I built the thing in the first place! On top of that, my brother says all I do is type stuff on a keyboard all day. Surely if I did, I'd be typing at my normal typing speed of 100wpm+ as if I am writing a blog entry. Instead, I plan as I code along on the fly if commercial pressures and time prohibit proper planning. I never type as if I'm writing normal English. There is more to our jobs than just typing code. And my brother is a pipe fitter with no formal qualifications in his name. I could easily, and perhaps more justifiably, say he just manipulates a spanner or something. Does you feel underappreciated or that a geek/nerd stigma is undeserved or unfair?


  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HTTPContext. From everything that I have read so far, the HTTPContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file: private System.Configuration.Configuration WebConfig { get { ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // NullReferenceException occurs on this line. fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config"); System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); return config; } } Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve? UPDATE: I decided to abandon the modification of Web.config in lieu of a "request-scoped DataContext" pattern that I found here. From the looks of it, I believe it should give me the results I'm looking for. However, during the TextFixtureSetUp, I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.
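
    One way to remove the HttpContext dependency entirely (a sketch, assuming the test fixture can supply the site's physical path) is to let System.Web.Configuration map the virtual directory instead of building an ExeConfigurationFileMap by hand:

      // using System.Web.Configuration;
      private static System.Configuration.Configuration OpenWebConfig(string physicalSitePath)
      {
          // Map "/" to the site's folder on disk, then open its Web.config; no HttpContext needed.
          WebConfigurationFileMap fileMap = new WebConfigurationFileMap();
          fileMap.VirtualDirectories.Add("/", new VirtualDirectoryMapping(physicalSitePath, true));
          return WebConfigurationManager.OpenMappedWebConfiguration(fileMap, "/");
      }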

    Read the article

  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provide ease of use and fully reusable code; I plan to write a factory for each of the objects managed (group, OU, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new groups manually; User creates new users (these users are service accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment to which the objects designated by the XML input file will be migrated; User makes changes if needed, and launches the task operations; Application performs the operations required by the XML input file against the underlying AD, as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rolls back operations on error or exception. I've been working for weeks now to get acquainted with AD and the System.DirectoryServices assembly, but I can't seem to be fully satisfied with what I'm doing and am always looking for something better. I have studied Bart De Smet's LINQ to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no LINQ! But I've learned about Attributes, and seen that he works with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. I have been able to add groups to the AD; I have been able to add users to the AD; The created user is automatically disabled? I seem to get confused with the use of an LDAP path to add objects. For instance, one needs two instances of the System.DirectoryServices.DirectoryEntry class just to add a group. Why is this? Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!
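
    On the "two DirectoryEntry instances" confusion, here is a minimal sketch (VB.NET 2005, System.DirectoryServices, with an illustrative LDAP path and group name): the first DirectoryEntry binds to the parent container, the second is the new child returned by Children.Add, and nothing is written to AD until CommitChanges is called.

    Imports System.DirectoryServices

    Module AdSketch
        Public Sub CreateGroup()
            ' First DirectoryEntry: bind to the container (OU) that will hold the new group.
            ' The LDAP path below is an example only.
            Using parentOu As New DirectoryEntry("LDAP://OU=Groups,DC=example,DC=com")
                ' Second DirectoryEntry: the new group itself, returned by Children.Add.
                Using newGroup As DirectoryEntry = parentOu.Children.Add("CN=MyServiceGroup", "group")
                    newGroup.Properties("sAMAccountName").Value = "MyServiceGroup"
                    ' The object only exists in Active Directory after CommitChanges.
                    newGroup.CommitChanges()
                End Using
            End Using
        End Using
        End Sub
    End Module

    As for the user that comes out disabled: newly created user accounts are disabled by default until a password has been set and the account is explicitly enabled via userAccountControl, so that behaviour is expected rather than a bug.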

    Read the article

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slow due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN GiST Rewrite the WHERE clause Separate year_taken, month_taken, and day_taken columns to the tables What are your thoughts? Thank you!
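
    A sketch of the "rewrite the WHERE clause" option, assuming the intent of the date arithmetic is simply "all measurements taken from 1900 through 2009": compare the indexed taken column against plain date literals so the planner can use a range scan, and give each child table a btree on taken (the index name below is illustrative).

    -- Illustrative index; each child table would need its own equivalent.
    CREATE INDEX measurement_12_013_t_idx
        ON climate.measurement_12_013 USING btree (taken);

    -- Date filter rewritten as a plain, index-friendly range on the column itself,
    -- instead of rebuilding dates from extract(YEAR FROM m.taken):
    SELECT extract(YEAR FROM m.taken) AS year_taken,
           count(1)      AS measurements,
           avg(m.amount) AS amount
      FROM climate.measurement m
     WHERE m.category_id = 1
       AND m.taken >= DATE '1900-01-01'
       AND m.taken <  DATE '2010-01-01'
       -- station subquery unchanged from the original post
     GROUP BY extract(YEAR FROM m.taken);

    For a one-dimensional date range like this, a btree is usually sufficient; GIN and GiST are aimed at other data types (full-text, arrays, geometric data) rather than simple date comparisons.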

    Read the article

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me what index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns: id: a primary key integer; world_id: an integer foreign key which acts as a namespace for a subset of tiles; tileY: the Y-coordinate integer; tileX: the X-coordinate integer; value: the contents of this tile, a varchar if it matters. I have the following indexes: "ywot_tile_pkey" PRIMARY KEY, btree (id) "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX") "ywot_tile_world_id" btree (world_id) And this is the query I'm trying to optimize: ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile" WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1 ); QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on ywot_tile (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1) Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2))) -> Bitmap Index Scan on ywot_tile_world_id_key (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1) Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2))) Total runtime: 80.194 ms So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant: All the tiles for a queried region may or may not be present. The height and width of a queried rectangle are typically about 10x10 to 20x20. For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer. New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent than just reading, as in the query above. The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either of those. Is there some other kind of index that would be more appropriate?
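
    One possibility not covered in the post, sketched here under the assumption of a PostgreSQL recent enough to have CREATE EXTENSION and the btree_gist contrib module (index and extension names are illustrative): treat (tileX, tileY) as a point and index it with GiST, so the rectangle becomes a single box-containment test instead of four separate comparisons.

    -- btree_gist lets the integer world_id share a GiST index with the point expression.
    CREATE EXTENSION IF NOT EXISTS btree_gist;

    CREATE INDEX ywot_tile_world_xy_gist
        ON ywot_tile USING gist (world_id, point("tileX", "tileY"));

    -- The query must use the same expression for the planner to pick up the index:
    SELECT *
      FROM ywot_tile
     WHERE world_id = 27685
       AND point("tileX", "tileY") <@ box(point(-2, -1), point(9, 6));

    Whether this beats the existing (world_id, "tileY", "tileX") btree depends on how wide the tileY band for a given world is; comparing EXPLAIN ANALYZE output for both approaches on real data would be the way to decide.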

    Read the article

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just select parties (not sure yet). Each developer must agree to a license, and each receives a developer key of their own. Each software application will have its own key as well. End-users will then download the software, which will interact with my service's API. Each user will have a key for each application as well (probably using OAuth). Content will be cached on first download and accessible offline only via the third-party application that cached it. If a user cancels their subscription, I plan on doing the following: Deactivate the user's OAuth key for all applications. Do not allow the user's account to download new content via the API (and consequently via any software that uses the API). Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to the content anymore. Here are ideas I've thought of (some of these are half-solutions, not yet fully fleshed out): Require that applications encrypt downloaded content using the user's OAuth key, making it available only to that application. This will prevent most users from going to the cache directory and just copying and keeping files. Update the user's key once a month, forcing content to re-cache on a monthly basis. Users could then access content for a month after they cancel their subscription. Require applications to "phone home" [to the service] periodically and check whether the user's subscription has terminated. If so, require in the API developer license that applications expire their cache. If it is found that applications do not comply, their keys (and possibly keys for all developers) are permanently deactivated as a consequence. One major worry is that some applications may blatantly ignore the constraints of the license. Is it generally acceptable to rely on applications abiding by the licensing constraints? Bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.
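
    If the "phone home" route is taken, the client-side obligation could be as small as the following sketch. Everything here is invented purely to illustrate the idea: the endpoint, the status codes chosen, and the ContentCache type are hypothetical, and C# is used only as an example language since the post does not name a stack.

    // Assumes System, System.Net, System.Net.Http and System.Threading.Tasks,
    // and that 'api' has a BaseAddress configured for the service.
    // Periodically ask the service whether this key is still entitled to the content;
    // if the key has been deactivated, the developer license obliges the app to purge its cache.
    public async Task EnforceSubscriptionAsync(HttpClient api, string oauthKey, ContentCache cache)
    {
        HttpResponseMessage response =
            await api.GetAsync("/v1/subscription/status?key=" + Uri.EscapeDataString(oauthKey));

        if (response.StatusCode == HttpStatusCode.Unauthorized ||
            response.StatusCode == HttpStatusCode.Forbidden)
        {
            cache.PurgeAll(); // hypothetical cache type and method
        }
    }

    The harder half of the problem, as noted above, is that nothing technically forces a third-party application to run this check; the license and the threat of key deactivation remain the real enforcement mechanism.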

    Read the article
