Search Results

Search found 18092 results on 724 pages for 'matt long'.


  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, find_pixel_pairs.py, which loops over the raster pixels; a simple cache in lrucache.py; and some misc classes in utils.py. I have profiled the code on a moderately sized dataset. pstats returns:

        p.sort_stats('cumulative').print_stats(20)
        Thu May 6 19:16:50 2010    phes.profile

                 355483738 function calls in 11644.421 CPU seconds

           Ordered by: cumulative time
           List reduced from 86 to 20 due to restriction <20>

           ncalls     tottime   percall     cumtime   percall  filename:lineno(function)
                1       0.008     0.008   11644.421 11644.421  <string>:1(<module>)
                1   11064.926 11064.926   11644.413 11644.413  find_pixel_pairs.py:49(phes)
        340135349     544.143     0.000     572.481     0.000  utils.py:173(extent_iterator)
          8831020      18.492     0.000      18.492     0.000  {range}
           231922       3.414     0.000       8.128     0.000  utils.py:152(get_block_in_bands)
           142739       1.303     0.000       4.173     0.000  utils.py:97(search_extent_rect)
           745181       1.936     0.000       2.500     0.000  find_pixel_pairs.py:40(is_no_data)
           285478       1.801     0.000       2.271     0.000  utils.py:98(intify)
           231922       1.198     0.000       2.013     0.000  utils.py:116(block_to_pixel_extent)
           695766       1.990     0.000       1.990     0.000  lrucache.py:42(get)
          1213166       1.265     0.000       1.265     0.000  {min}
          1031737       1.034     0.000       1.034     0.000  {isinstance}
           142740       0.563     0.000       0.909     0.000  utils.py:122(find_block_extent)
           463844       0.611     0.000       0.611     0.000  utils.py:112(block_to_pixel_coord)
           745274       0.565     0.000       0.565     0.000  {method 'append' of 'list' objects}
           285478       0.346     0.000       0.346     0.000  {max}
           285480       0.346     0.000       0.346     0.000  utils.py:109(pixel_coord_to_block_coord)
              324       0.002     0.000       0.188     0.001  utils.py:27(__init__)
              324       0.016     0.000       0.186     0.001  gdal.py:848(ReadAsArray)
                1       0.000     0.000       0.160     0.160  utils.py:50(__init__)

    The top two calls contain the main loop - the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
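
    A point worth noting when reading traces like this: cProfile only attributes time at function granularity, so the ~11,000 "missing" seconds are simply the tottime of phes itself - time spent executing the loop's own bytecode rather than in any callee. Two common ways to localize it are refactoring hot statements into named functions (so pstats can see them), or using a line-level profiler such as the third-party line_profiler package. A minimal sketch of the refactoring approach - the function names and workload here are made up for illustration:

        import cProfile
        import pstats

        def inner_work(x):
            # Hot per-iteration work; because it is a named function,
            # cProfile now attributes its time separately.
            return x * x

        def main_loop(n):
            total = 0
            for i in range(n):
                total += inner_work(i)
            return total

        cProfile.run("main_loop(1000000)", "toy.profile")
        pstats.Stats("toy.profile").sort_stats("cumulative").print_stats(5)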


  • Why does autoboxing in Java allow me to have 3 possible values for a boolean?

    - by John
    Reference: http://java.sun.com/j2se/1.5.0/docs/guide/language/autoboxing.html "If your program tries to autounbox null, it will throw a NullPointerException." javac will give you a compile-time error if you try to assign null to a boolean. Makes sense. Assigning null to a Boolean is a-ok though. Also makes sense, I guess. But let's think about the fact that you'll get an NPE when trying to autounbox null. What this means is that you can't safely perform boolean operations on Booleans without null-checking or exception handling. The same goes for doing math operations on an Integer. For a long time I was a fan of autoboxing in Java 1.5+, because I thought it got Java closer to being truly object-oriented. But after running into this problem last night, I gotta say that I think this sucks. The compiler giving me an error when I'm trying to do stuff with an uninitialized primitive is a good thing. I think I may be misunderstanding the point of autoboxing, but at the same time I will never accept that a boolean should be able to have 3 values. Can anyone explain this? What am I not getting?


  • Is a MySQL index useful on column 'state' when only doing bit operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long field 'flowstate' used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        select * from entities where state >> 7 & 1 = 1

    indicating that bit 7 (corresponding to operation 7) has run. (<-- simplified) Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble since queries like the above run pretty slow. What I'd like to know:

    - Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever.
    - If it doesn't, are there any other things I could do to speed things up?
    - Are there special 'mask indices' for fields with use cases like the above?

    TIA, Geert-jan


  • What's the relationship between the Intel Atom Developer Program and the MeeGo operating system?

    - by Arne Evertsson
    I'm trying to understand the relationship between the Intel Atom Developer Program (IADP) and the new OS called MeeGo. IADP lets me create applications that run on both MeeGo and Windows devices, as long as the device is based on the Atom processor. The IADP apps are published in an app store called AppUp, which is very much like the Apple App Store. The MeeGo operating system merges Intel's Moblin and Nokia's Maemo into one OS. The purpose seems to be to make it possible to develop software that will run on Intel-powered devices, Nokia-made devices, as well as devices from other companies. Nokia has its Ovi Store that will support MeeGo apps. With its OS-independent runtime, the question is what an IADP app really is. Is an IADP app a beast of its own, or is it just a MeeGo app that has been restricted to run only on Atom-powered devices? Will it be possible to recompile my IADP app to run on all MeeGo devices? Sold in the Ovi Store? Intel and Nokia have me really confused. Where should I go as a developer?


  • How can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data insert via SQL Server and decided it was not a good solution -- SQL is not good at random on an each-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects. I need to take this random data and update many rows in the database with the new random data. I tried updating the rows one at a time, which was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data, i.e.

        UPDATE t SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place. I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase
        INSERT tmp_RandomizedData
        SELECT 1, '4392 EIGHTH AVE', '', 'JENNIFER CARTER', 'BARBARA CARTER'
        UNION ALL
        SELECT 2, '2168 MAIN ST', 'HNGR F', 'DANIEL HERNANDEZ', 'SUSAN MARTIN'
        -- etc., another 98 times...
        -- FYI, this is not real data!

    I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 mins to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?


  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result
                result = ShortReview.objects.all().order_by("-created_at")
                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows only a max of 1MB per object. This one was bigger than 1MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The problem is that django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that." What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google advice? A random number?
    - Or maybe there is a way to allow memcached to store bigger files?
    - Or perhaps once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All of those seem very low-level and I'm wondering if an obvious solution exists...
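
    One way to duck memcached's 1MB object limit is to cache something far smaller than the pickled queryset - for example just the primary keys and timestamps the sitemap actually needs. A minimal sketch, using django's standard cache API instead of the question's get_cache/set_cache wrappers (the key name, timeout, and import path are assumptions):

        from django.core.cache import cache
        from myapp.models import ShortReview   # assumed app layout

        SITEMAP_KEY = "sitemap_short_reviews_pairs"   # hypothetical cache key

        def get_sitemap_pairs():
            # (pk, updated_at) tuples are orders of magnitude smaller than
            # full ShortReview instances, so the entry stays under 1MB.
            pairs = cache.get(SITEMAP_KEY)
            if pairs is None:
                pairs = list(
                    ShortReview.objects.order_by("-created_at")
                                       .values_list("pk", "updated_at")
                )
                cache.set(SITEMAP_KEY, pairs, 60 * 60)   # cache for one hour
            return pairs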


  • Avoiding a null pointer exception in EL in JSF

    - by Buddhika Ariyaratne
    I am developing a JSF application with JPA (EclipseLink 2.0) and PrimeFaces. I want to know if there is any way to avoid a null pointer exception when EL calls a property of a null object. I have described the situation below. I have a Bill class. There may be zero or more BillItem objects associated with a Bill object. Each BillItem object has objects like Make, Country, Manufacturer, etc. I am displaying several properties of a bill within a single JSF file like this:

        #{billControlled.bill.billItem.modal.name}

    But if a bill is not selected, or when there are no bill items for a selected bill, the properties accessed in the EL are null. I can avoid this by creating new objects for every bill (for example, a new Make for a new BillItem, etc.) or by creating new properties in the controller itself for all the properties. But that is a very long way round and feels rudimentary. Is there any good practice to avoid this null pointer exception in EL in JSF?


  • Django: Overriding the save() method: how do I call the delete() method of a child class

    - by Patti
    The setup: I have this class, Transcript:

        class Transcript(models.Model):
            body = models.TextField('Body')
            doPagination = models.BooleanField('Paginate')
            numPages = models.PositiveIntegerField('Number of Pages')

    and this class, TranscriptPages:

        class TranscriptPages(models.Model):
            transcript = models.ForeignKey(Transcript)
            order = models.PositiveIntegerField('Order')
            content = models.TextField('Page Content', null=True, blank=True)

    The Admin behavior I’m trying to create is to let a user populate Transcript.body with the entire contents of a long document and, if they set Transcript.doPagination = True and save the Transcript admin, I will automatically split the body into n TranscriptPages. In the admin, TranscriptPages is a StackedInline of the Transcript admin. To do this I’m overriding Transcript’s save method:

        def save(self):
            if self.doPagination:
                # do stuff
                super(Transcript, self).save()
            else:
                super(Transcript, self).save()

    The problem: when Transcript.doPagination is True, I want to manually delete all of the TranscriptPages that reference this Transcript so I can then create them again from scratch. So I thought this would work:

        # do stuff
        TranscriptPages.objects.filter(transcript__id=self.id).delete()
        super(Transcript, self).save()

    but when I try I get this error:

        Exception Type:  ValidationError
        Exception Value: [u'Select a valid choice. That choice is not one of the available choices.']

    ... and this is the last thing in the stack trace before the exception is raised:

        .../django/forms/models.py in save_existing_objects
            pk_value = form.fields[pk_name].clean(raw_pk_value)

    Other attempts to fix it:

    - t = self.transcriptpages_set.all().delete() (where self = Transcript, from the save() method)
    - looping over t (above) and deleting each item individually
    - making a post_save signal on TranscriptPages that calls the delete method

    Any ideas? How does the Admin do it?

    UPDATE: Every once in a while as I'm playing around with the code I can get a different error (below), but then it just goes away and I can't replicate it again... until the next random time:

        Exception Type:     MultiValueDictKeyError
        Exception Value:    "Key 'transcriptpages_set-0-id' not found in "
        Exception Location: .../django/utils/datastructures.py in __getitem__, line 203

    and the last lines from the trace:

        .../django/forms/models.py in _construct_form
            form = super(BaseInlineFormSet, self)._construct_form(i, **kwargs)
        .../django/utils/datastructures.py in __getitem__
            pk = self.data[pk_key]
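
    For what it's worth, here is a minimal sketch of the regenerate-the-pages logic hung on a post_save signal rather than inside save(). The split_body rule and import paths are stand-ins, and this alone does not untangle the admin's inline-formset machinery, which still holds the old TranscriptPages primary keys when it validates:

        from django.db.models.signals import post_save
        from myapp.models import Transcript, TranscriptPages   # assumed layout

        def split_body(body, page_size=5000):
            # Stand-in pagination rule: fixed-size character chunks.
            return [body[i:i + page_size] for i in range(0, len(body), page_size)]

        def rebuild_pages(sender, instance, **kwargs):
            if instance.doPagination:
                instance.transcriptpages_set.all().delete()
                for i, chunk in enumerate(split_body(instance.body)):
                    TranscriptPages.objects.create(
                        transcript=instance, order=i, content=chunk)

        post_save.connect(rebuild_pages, sender=Transcript)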


  • Java Bucket Sort on Strings

    - by Michael
    I can't figure out what would be the best way to use bucket sort to sort a list of strings that will always be the same length. An algorithm would look like this:

        For the last character position down to the first:
            For each word in the list:
                Place the word into the appropriate bucket by its current character
            For each of the 26 buckets (arraylists):
                Copy every word back to the list

    I'm writing in Java and I'm using an ArrayList for the main list that stores the unsorted strings. The strings will be five characters long each. This is what I started. It just abruptly stops within the second for loop because I don't know what to do next or if I did the first part right:

        ArrayList<String> count = new ArrayList<String>(26);
        for (int i = wordlen; i > 0; i--) {
            for (int j = 0; i < myList.size(); i++)
                myList.get(j).charAt(i)
        }

    Thanks in advance.
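
    Since the sticking point is the algorithm rather than Java syntax, here is a compact sketch of the same LSD radix/bucket sort in Python, assuming fixed-length lowercase a-z words; the structure (one stable bucket pass per character, last position first) carries over directly to an ArrayList-of-ArrayLists in Java:

        def bucket_sort_fixed_strings(words, wordlen=5):
            # One pass per character position, from last to first (LSD radix sort).
            for pos in range(wordlen - 1, -1, -1):
                buckets = [[] for _ in range(26)]   # one bucket per letter a-z
                for word in words:
                    buckets[ord(word[pos]) - ord('a')].append(word)
                # Copy the buckets back in order; stability across passes
                # is what makes the final list fully sorted.
                words = [word for bucket in buckets for word in bucket]
            return words

        print(bucket_sort_fixed_strings(["cabin", "bread", "apple", "bakes"]))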


  • Simple Java Applet not loading in FireFox / Safari on MacOS

    - by Sleepless
    Hello all, I'm probably missing something very basic here. I'm trying to get my first applet to run inside a local HTML page in Firefox 3.6 on Mac OS 10.5.8. Here's the applet's code:

        package SimpleApplet;

        import java.applet.Applet;
        import java.awt.*;

        public class MyApplet extends Applet {
            private static final long serialVersionUID = 1L;

            public void init() { }

            public void stop() { }

            public void paint(Graphics g) {
                g.drawString("Tweedle-Dee!", 20, 40);
            }
        }

    Here's the HTML page:

        <html>
        <body>
        Here's the applet: <br/>
        <applet code="MyApplet.class" width="300" height="150">
        </applet>
        </body>
        </html>

    Both files (.class and .html) are in the same folder on my local machine. Now when I load the .html file into Firefox, a rectangle with a red X gets displayed. The applet works when started from Eclipse (using JRE 1.5, BTW). Also, it's not a general problem with my browser, as several pages with applets (e.g. http://java.sun.com/applets/jdk/1.4/demo/applets/Blink/example1.html) work. This is also difficult to troubleshoot because there is no output at all on the Java console... Any suggestions are appreciated!


  • [UNIX] Sort lines of massive file by number of words on line (ideally in parallel)

    - by conradlee
    I am working on a community detection algorithm for analyzing social network data from Facebook. The first task, detecting all cliques in the graph, can be done efficiently in parallel, and leaves me with an output like this:

        17118 17136 17392
        17064 17093 17376
        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17007 17059 17116

    Each of these lines represents a unique clique (a collection of node ids), and I want to sort these lines in descending order by the number of ids per line. In the case of the example above, here's what the output should look like:

        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17118 17136 17392
        17064 17093 17376
        17007 17059 17116

    (Ties -- i.e., lines with the same number of ids -- can be sorted arbitrarily.) What is the most efficient way of sorting these lines? Keep the following points in mind:

    - The file I want to sort could be larger than the physical memory of the machine
    - Most of the machines that I'm running this on have several processors, so a parallel solution would be ideal
    - An ideal solution would just be a shell script (probably using sort), but I'm open to simple solutions in python or perl (or any language, as long as it makes the task simple)
    - This task is in some sense very easy -- I'm not just looking for any old solution, but rather for a simple and above all efficient solution
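
    For the in-memory case, the whole task is a one-key sort; a minimal Python sketch (reads stdin, writes stdout) is below. For files larger than RAM, the usual decorate/sort/undecorate trick - prefix each line with its word count, push the heavy lifting onto GNU sort (which does external merging, and in recent versions can parallelize), then strip the prefix - keeps the same key logic:

        import sys

        def sort_by_id_count(lines):
            # Decorate-sort-undecorate: the key is the number of
            # whitespace-separated ids on each line.
            return sorted(lines, key=lambda line: len(line.split()), reverse=True)

        if __name__ == "__main__":
            sys.stdout.writelines(sort_by_id_count(sys.stdin.readlines()))

    Usage would be along the lines of: python sort_cliques.py < cliques.txt > sorted.txt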


  • Why should I use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is if any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"


  • Refining data stored in SQLite - how to join several contacts?

    - by Krab
    Problem background: Imagine this problem. You have a water molecule which is in contact with other molecules (if the contact is a hydrogen bond, there can be 4 other molecules around my water). Like in the following picture (A, B, C, D are some other atoms and the dots mean contacts):

        A   B
         . .
          O
         / \
        H   H
        .     .
        C     D

    I have the information about all the dots, and I need to eliminate the water in the center and create records describing the contacts A-C, A-D, A-B, B-C, B-D, and C-D.

    Database structure: Currently, I have the following structure in the database.

    Table atoms:

        "id" integer PRIMARY KEY,
        "amino" char(3) NOT NULL,        -- HOH for water, or another value
        -- other columns identifying the atom

    Table contacts:

        "acceptor_id" integer NOT NULL,  -- the atom near to my hydrogen, here C or D
        "donor_id" integer NOT NULL,     -- here A or B
        "directness" char(1) NOT NULL,   -- D for direct, W for water-mediated
        -- other columns about the contact, such as the distance

    Current solution (insufficient): Now I'm going through all the contacts which have donor.amino = "HOH". In this sample case, this would select the contacts from C and D. For each of these selected contacts, I look up contacts having the same acceptor_id as the donor_id in the currently selected contact. From this information, I create the new contact. At the end, I delete all contacts to or from HOH. This way, I am obviously unable to create the C-D and A-B contacts (the other 4 are OK). If I try a similar approach - trying to find two contacts having the same donor_id - I end up with duplicate contacts (C-D and D-C). Is there a simple way to retrieve all six contacts without duplicates? I'm dreaming about some one-page-long SQL query which retrieves just these six wanted rows. :-) It is preferable to conserve the information about who is the donor where possible, but not strictly necessary. Big thanks to all of you who read this question to this point.
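
    If a pure-SQL answer proves elusive, the deduplication itself is easy to get right in application code: collect the water's neighbours and emit each unordered pair exactly once. A sketch against the question's schema (the database file name is assumed, and the donor/acceptor direction assigned to the generated rows is arbitrary, since it is ambiguous for a bridged pair):

        import sqlite3
        from itertools import combinations

        conn = sqlite3.connect("contacts.db")   # assumed database file

        def bridge_water(water_id):
            # Collect every atom touching this water, in either direction.
            rows = conn.execute(
                "SELECT donor_id    FROM contacts WHERE acceptor_id = ? "
                "UNION "
                "SELECT acceptor_id FROM contacts WHERE donor_id    = ?",
                (water_id, water_id)).fetchall()
            neighbours = [r[0] for r in rows]
            # combinations() yields each unordered pair exactly once
            # (C-D but never also D-C), which sidesteps the duplicates.
            for a, b in combinations(sorted(neighbours), 2):
                conn.execute(
                    "INSERT INTO contacts (acceptor_id, donor_id, directness) "
                    "VALUES (?, ?, 'W')", (a, b))
            conn.commit()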


  • Munging non-printable characters to dots using string.translate()

    - by Jim Dennis
    So I've done this before and it's a surprisingly ugly bit of code for such a seemingly simple task. The goal is to translate any non-printable character into a . (dot). For my purposes "printable" does exclude the last few characters from string.printable (newlines, tabs, and so on). This is for printing things like the old MS-DOS debug "hex dump" format... or anything similar to that (where additional whitespace would mangle the intended dump layout). I know I can use string.translate() and, to use that, I need a translation table. So I use string.maketrans() for that. Here's the best I could come up with:

        filter = string.maketrans(
            string.translate(string.maketrans('', ''),
                string.maketrans('', ''), string.printable[:-5]),
            '.' * len(string.translate(string.maketrans('', ''),
                string.maketrans('', ''), string.printable[:-5])))

    ... which is an unreadable mess (though it does work). From there you can use something like:

        for each_line in sometext:
            print string.translate(each_line, filter)

    ... and be happy. (So long as you don't look under the hood.) Now it is more readable if I break that horrid expression into separate statements:

        ascii = string.maketrans('', '')  # The whole ASCII character set
        nonprintable = string.translate(ascii, ascii, string.printable[:-5])  # Optional delchars argument
        filter = string.maketrans(nonprintable, '.' * len(nonprintable))

    And it's tempting to do that just for legibility. However, I keep thinking there has to be a more elegant way to express this!
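
    For comparison, here is a maketrans-free sketch of the same behaviour using a set and a generator expression - likely slower than translate() on large inputs, but arguably the elegant version. The set literal mirrors the question's string.printable[:-5] (everything printable except the five trailing whitespace characters, keeping the plain space):

        import string

        # All of string.printable except its last five characters
        # ('\t', '\n', '\r', '\x0b', '\x0c'); the plain space survives.
        PRINTABLE = set(string.printable) - set('\t\n\r\x0b\x0c')

        def to_dots(text):
            # Replace anything non-printable with a dot, as in a hex dump.
            return ''.join(ch if ch in PRINTABLE else '.' for ch in text)

        print(to_dots("abc\x00\x01def\n"))   # -> abc..def.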


  • Problem with writing a hexadecimal string

    - by quilby
    Here is my code:

        /*
        gcc -c -Wall -g main.c
        gcc -g -lm -o main main.o
        */
        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>

        void stringToHex(const char* string, char* hex) {
            int i = 0;
            for(i = 0; i < strlen(string)/2; i++) {
                printf("s%x", string[2*i]);               /* for debugging */
                sprintf(&hex[i], "%x", string[2*i]);
                printf("h%x\n", hex[i]);                  /* for debugging */
            }
        }

        void writeHex(char* hex, int length, FILE* file, long position) {
            fseek(file, position, SEEK_SET);
            fwrite(hex, sizeof(char), length, file);
        }

        int main(int argc, char** argv) {
            FILE* pic = fopen("hi.bmp", "w+b");
            const char* string = "f2";
            char hex[strlen(string)/2];
            stringToHex(string, hex);
            writeHex(hex, strlen(string)/2, pic, 0);
            fclose(pic);
            return 0;
        }

    I want it to save the hexadecimal number 0xf2 to a file (later I will have to write bigger/longer numbers, though). The program prints out:

        s66h36

    And when I use hexedit to view the file, I see the number '36' in it. Why is my code not working? Thanks!
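
    The observable behaviour already hints at the bug: sprintf(&hex[i], "%x", string[2*i]) formats the character code of 'f' (ASCII 0x66) as two more text characters, so the file receives the ASCII digit '6' (0x36) rather than the byte 0xf2. What's needed is to parse each pair of hex digits into its numeric value (e.g. with strtol on a two-character chunk) before the fwrite. The distinction, illustrated in Python:

        text = "f2"

        # What the C code effectively stores: the hex rendering of the
        # *character code* of 'f', which is more text, not the value 0xf2.
        print(hex(ord(text[0])))        # -> 0x66

        # What the file should receive: the digit pair parsed as one byte.
        data = bytes.fromhex(text)      # -> b'\xf2'
        with open("hi.bin", "wb") as f: # hypothetical output file name
            f.write(data)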


  • Using ARIMA to model and forecast stock prices using user-friendly stats program

    - by Brian
    Hi people, can anyone please offer some insight into this for me? I'm coming from a functional magnetic resonance imaging research background where I analyzed a lot of time series data, and I'd like to analyze the time series of stock prices (or returns) by:

    1. modeling a successful stock in a particular market sector and then cross-correlating the time series of this historically successful stock with those of other, newer stocks to look for significant relationships;
    2. modeling a stock's price time series and using forecasting (e.g., exponential smoothing) to predict future values of it.

    I'd like to use non-linear modeling methods (ARIMA and ARCH) to do this. Several questions:

    - How often do ARIMA and ARCH modeling methods (given that the individual who implements them does so accurately) actually fit the stock time series data they target, and what is the optimal fit I can expect?
    - Is the extent to which such a model fits the data commensurate with the extent to which it predicts the stock time series' future values?
    - Rather than randomly selecting stocks to compare or model, if profit is my goal, what is an efficient approach, if any, to selecting the stocks I'm going to analyze?
    - Which stats program is the most user-friendly for this?

    Any thoughts on this would be great and would go a long way for me. Thanks, Brian
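
    On the tooling question, one accessible option is Python's statsmodels package; a minimal fit-and-forecast sketch is below. The synthetic random-walk series and the (1, 1, 1) order are placeholders - order selection is the actual modeling work the question asks about:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Stand-in for a real price series: a random walk around 100.
        prices = 100 + np.cumsum(np.random.randn(250))

        model = ARIMA(prices, order=(1, 1, 1))   # p, d, q chosen for illustration
        fitted = model.fit()
        print(fitted.summary())                  # coefficients, AIC/BIC, diagnostics
        print(fitted.forecast(steps=5))          # five periods ahead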


  • Delay before playing embedded mp3 in Actionscript / Flex 3

    - by lacker
    I am embedding an mp3 into my Flex project for use as a sound effect, but I am finding that every time I play it, there is a delay of about half a second from when I call .play() to when you can hear the sound. This makes it weird because I want the sound effects to sync to game events. My mp3 itself is only about a fifth of a second long, so it isn't because of the contents of the mp3. I'm embedding with:

        [Embed(source="assets/Tock.mp3")]
        [Bindable]
        public static var TockSound:Class;
        public var tock_sound:SoundAsset;

    and then playing with:

        if (tock_sound == null) {
            tock_sound = new TockSound() as SoundAsset;
        }
        Alert.show("tock");
        tock_sound.play();

    I know there's a delay because the sound plays about a half second after the Alert displays. I did consider that maybe it was the initial loading time of constructing the TockSound, but the delay is there on all the subsequent calls as well. How can I avoid this delay on playing a sound?

    Update: It turns out this delay is only present when playing the swf on Linux. I believe it is a Linux-specific flaw in Adobe's Flash player.


  • Android maps out of memory error

    - by SamB09
    Hi, sometimes when running a Google Maps program with an overlay image I will receive a bitmap out-of-memory error. It always seems to be at a random point in the app. I'm not sure how to solve this. Anyone have any ideas? My overlay code is below; I'm not sure if you need to see the class it's called in, though.

        public class MyOverlay2 extends Overlay {

            private static final double MAX_TAP_DISTANCE_KM = 3;
            // Rough approximation - one degree = 50 nautical miles
            private static final double MAX_TAP_DISTANCE_DEGREES = MAX_TAP_DISTANCE_KM * 0.5399568 * 50;

            private final GeoPoint gPoint;
            private final Context cont;
            private final int draw;
            // private final int lat;

            public MyOverlay2(Context cont, GeoPoint gPoint1, int draw) {
                // constructor will be called in the userLocation class to draw an overlay image
                this.cont = cont;
                this.gPoint = gPoint1;
                this.draw = draw;
            }

            @Override
            public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
                super.draw(canvas, mapView, shadow);
                // Convert geo coordinates to screen pixels
                Point screenPoint = new Point();
                mapView.getProjection().toPixels(gPoint, screenPoint);
                // Read the image from the xml resource using a bitmap factory
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inSampleSize = 1;
                Bitmap preview_bitmap = BitmapFactory.decodeResource(cont.getResources(), R.drawable.monday12, options);
                // Draw the image at the location specified by the coordinates,
                // offset by half the image's width and height
                canvas.drawBitmap(preview_bitmap,
                        screenPoint.x - preview_bitmap.getWidth() / 2,
                        screenPoint.y - preview_bitmap.getHeight() / 2, null);
                return true;
            }

            @Override
            public boolean onTap(GeoPoint s, MapView mapView) {
                // Handle tapping on the overlay here
                return true;
            }
        }


  • Debian packaging of a Python package.

    - by chrisdew
    I need to write (or find) a script to create a Debian package (using python-support) from a Python package. The Python package will be pure Python (no C extensions). The Python package (for testing purposes) will just be a directory with an empty __init__.py file and a single Python module, package_test.py. The packaging script must use python-support to provide the correct bytecode for possible multiple installations of Python on a target platform (i.e. v2.5 and v2.6 on Ubuntu Jaunty). Most of the advice I find while googling consists of examples of nasty hacks that don't even use python-support or python-central. I have so far spent hours researching this, and the best I can come up with is to hack around the script from an existing open source project - but I don't know which bits are required for what I'm doing. Has anyone here made a Debian package out of a Python package in a reasonably non-hacky way? I'm starting to think that it will take me more than a week to go from no knowledge of Debian packaging and python-support to getting a working script. How long has it taken others? Any advice? Chris.
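
    One shortcut worth checking before hand-rolling the debian/ files is the stdeb package, which generates a Debian package from an ordinary distutils setup script. A sketch under that assumption - the package and directory names below are hypothetical, and the exact stdeb invocation and its python-support integration should be verified against the stdeb docs for your version:

        # setup.py - plain distutils metadata for the pure-Python test package
        from distutils.core import setup

        setup(
            name="package-test",           # hypothetical Debianized name
            version="0.1",
            description="Throwaway package for testing Debian packaging",
            packages=["packagetest"],      # hypothetical name of the directory
                                           # holding __init__.py and package_test.py
        )

    With stdeb installed, something like "python setup.py --command-packages=stdeb.command bdist_deb" should then produce a .deb under deb_dist/; whether the generated maintainer scripts use python-support on your target release is worth confirming before relying on it.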


  • PHP preg_replace - Don't match within h1 tags

    - by James
    Hi there. I am using preg_replace to add a link to keywords if they are found within a long HTML string. I don't want to add a link if the keyword is found within h1 tags or strong tags. The below regex nearly works, and basically says (I think): if the keyword is not immediately wrapped by either an h1 tag or a strong tag, then replace the keyword that was matched with a bolded link to Google.

        $result = preg_replace(
            '%(?!<h1>)(?!<strong>)\b(bobs widgets)\b(?!<\/strong>)(?!<\/h1>)%i',
            '<a href="http://www.google.com"><strong>$1</strong></a>',
            $result, -1);

    (The reason I don't want to match if the keyword is in strong tags is because I am recursing through a lot of keywords, so I don't want to link an already-linked keyword on subsequent passes.) The above works fine and won't match:

        <h1>bobs widgets</h1>

    It will, however, match the keyword in the following text, because the h1 tag isn't immediately on either side of the keyword:

        <h1>Here are bobs widgets for sale</h1>

    I need to make the spaces on either side optional and have tried adding \s*, but that doesn't get me anywhere. I'd be very grateful for a push in the right direction here.


  • How to react when asked a question you already know during an interview

    - by DevNull
    The short story: if you are asked a tough algorithmic/puzzle question during an interview, whose solution is already known to you, do you:

    1. Honestly tell the interviewer that you know this question already? -- This could result in bursting the interviewer's ego and him increasing the complexity level of the subsequent questions.
    2. Do an Oscar-deserving performance and act as if you are thinking and trying hard and slowly getting to the solution? -- Depending on your acting skills, this could majorly impress the interviewer, making the rest of the interview easier.

    The long story: OK, this question comes as a result of what happened to me in a recent telephonic interview that I gave - the interview was supposed to be all algorithmic. The interviewer started with an algorithmic question which I had luckily already seen here on Stack Overflow. The best solution to that problem is not very intuitive and is more of a you-get-it-if-you-know-it kind. Now, just so as not to disappoint the interviewer too much, I took a few seconds as if I was pondering the problem and then blurted out the answer, which I knew too well, having read and admired it on SO already. But I guess that gave away to the interviewer that I already knew this question, and since then, he started asking me for more efficient solutions and I kept coming up with approaches (even if not correct or more efficient - but I did touch a lot of different data structures and algos) and he kept asking for more efficient solutions and generally seemed put off by my initial salvo, which was unexpected. What should I have done? Cheers!


  • What is the fastest collection in C# to implement a prioritizing queue?

    - by Nathan Smith
    I need to implement a queue for messages on a game server, so it needs to be as fast as possible. The queue will have a maximum size. I need to prioritize messages once the queue is full by working backwards and removing a lower-priority message (if one exists) before adding the new message. The application is asynchronous, so access to the queue needs to be locked. I'm currently implementing it using a LinkedList as the underlying storage, but I have concerns that searching and removing nodes will keep it locked for too long. Here's the basic code I have at the moment:

        public class ActionQueue
        {
            private LinkedList<ClientAction> _actions = new LinkedList<ClientAction>();
            private int _maxSize;

            /// <summary>
            /// Initializes a new instance of the ActionQueue class.
            /// </summary>
            public ActionQueue(int maxSize)
            {
                _maxSize = maxSize;
            }

            public int Count
            {
                get { return _actions.Count; }
            }

            public void Enqueue(ClientAction action)
            {
                lock (_actions)
                {
                    if (Count < _maxSize)
                        _actions.AddLast(action);
                    else
                    {
                        LinkedListNode<ClientAction> node = _actions.Last;
                        while (node != null)
                        {
                            if (node.Value.Priority < action.Priority)
                            {
                                _actions.Remove(node);
                                _actions.AddLast(action);
                                break;
                            }
                            node = node.Previous; // keep walking backwards
                        }
                    }
                }
            }

            public ClientAction Dequeue()
            {
                ClientAction action = null;
                lock (_actions)
                {
                    action = _actions.First.Value;
                    _actions.RemoveFirst();
                }
                return action;
            }
        }
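
    As a language-agnostic illustration of the usual alternative to scanning a linked list under the lock: keep the messages ordered by priority, so both eviction (lowest end) and dispatch (highest end) are cheap end operations. A Python sketch using the third-party sortedcontainers library is below; note the trade-off that it dispatches by priority rather than strict arrival order (FIFO is preserved only among equal priorities), which may or may not match the game's needs:

        import threading
        from itertools import count
        from sortedcontainers import SortedList   # pip install sortedcontainers

        class BoundedPriorityQueue:
            """Bounded queue: dequeue returns the highest-priority message;
            when full, a lower-priority message is evicted to make room."""

            def __init__(self, max_size):
                self._items = SortedList()   # ascending by (priority, -seq)
                self._max_size = max_size
                self._lock = threading.Lock()
                self._seq = count()          # FIFO tie-break among equal priorities

            def enqueue(self, priority, message):
                with self._lock:
                    if len(self._items) >= self._max_size:
                        if self._items[0][0] >= priority:
                            return           # nothing lower-priority to evict: drop
                        self._items.pop(0)   # evict a lowest-priority message
                    self._items.add((priority, -next(self._seq), message))

            def dequeue(self):
                with self._lock:
                    # Highest priority first; oldest first among ties.
                    return self._items.pop(-1)[2]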


  • JPA2 Criteria API creates invalid SQL when using groupBy

    - by Stephan
    JPA2 with the Criteria API seems to generate invalid SQL for PostgreSQL. For this code:

        Root<DBObjectAccessCounter> from = query.from(DBObjectAccessCounter.class);
        Path<DBObject> object = from.get(DBObjectAccessCounter_.object);
        Expression<Long> sum = builder.sumAsLong(from.get(DBObjectAccessCounter_.count));
        query.multiselect(object, sum).groupBy(object);

    I get the following exception:

        ERROR: column "dbobject1_.id" must appear in the GROUP BY clause or be used in an aggregate function

    The generated SQL is:

        select dbobjectac0_.object_id as col_0_0_,
               sum(dbobjectac0_.count) as col_1_0_,
               dbobject1_.id as id1001_,
               dbobject1_.name as name1013_,
               dbobject1_.lastChanged as lastChan2_1013_,
               dbobject1_.type_id as type3_1013_
        from DBObjectAccessCounter dbobjectac0_
        inner join DBObject dbobject1_
            on dbobjectac0_.object_id = dbobject1_.id
        group by dbobjectac0_.object_id

    Obviously, the entity columns pulled in by the join (dbobject1_.id and so on) appear in the select list but not in the group by clause.

    Simplified example: it does not even work for this simple case:

        Root<DBObjectAccessCounter> from = query.from(DBObjectAccessCounter.class);
        Path<DBObject> object = from.get(DBObjectAccessCounter_.object);
        query.select(object).groupBy(object);

    which returns:

        select dbobject1_.id as id924_,
               dbobject1_.name as name933_,
               dbobject1_.lastChanged as lastChan2_933_,
               dbobject1_.type_id as type3_933_
        from DBObjectAccessCounter dbobjectac0_
        inner join DBObject dbobject1_
            on dbobjectac0_.object_id = dbobject1_.id
        group by dbobjectac0_.object_id

    Does anyone know how to fix this?


  • How to configure database connection securely

    - by chiccodoro
    Similar but not the same:

    - How to securely store database connection details
    - Securely connecting to database within an application

    Hi all, I have a C# WinForms application connecting to a database server. The database connection string, including a generic user/pass, is placed in an NHibernate configuration file, which lies in the same directory as the exe file. Now I have this issue: the user who runs the application should not get to know the username/password of the general database user, because I don't want him to rummage around in the database directly. Alternatively, I could hardcode the connection string, which is bad because the administrator must be able to change it if the database is moved or if he wants to switch between dev/test/prod environments. So far I've found three possibilities:

    - The first referenced question was generally answered by making the file readable only by the user that runs the application. But that's not enough in my case (the user running the application is a person; the database user/pass are general and shouldn't even be accessible to that person).
    - The first answer additionally proposed encrypting the connection data before writing it to the file. With this approach, the administrator is no longer able to configure the connection string because he cannot encrypt it by hand.
    - The second referenced question provides an approach for this very scenario, but it seems very complicated.

    My questions to you:

    - This is a very general issue, so isn't there any general "how-to-do-it" way, somehow a "design pattern"?
    - Is there some support in .NET's config infrastructure?
    - (optional, maybe out of scope) Can I combine that easily with the NHibernate configuration mechanism?


  • Control pdb file output from build definition file

    - by Urvi
    Hello, I am trying to generate a release build with no pdb files generated. I have seen numerous posts that suggest right-clicking on the project, selecting Properties, going to the Build tab, clicking the Advanced... button, and changing Debug Info to none. This works and all, but I need to do this for a build of ~50 solutions which contain ~25 projects each! Other posts mention editing the appropriate .csproj file, but again, with so many projects, this would take a long time. Is there any way to achieve this via the TFSBuild.proj file? I have tried adding the following to the TFSBuild.proj file, with no luck:

        <PropertyGroup>
          <Configuration>Release</Configuration>
          <Platform>AnyCPU</Platform>
        </PropertyGroup>
        <PropertyGroup>
          <DebugSymbols>false</DebugSymbols>
          <DebugType>none</DebugType>
          <Optimize>true</Optimize>
        </PropertyGroup>

    The following lines print out Release|AnyCPU, none, and false, but I still see .pdb files in the $(OutputDir) folder:

        <Message Text="Configuration|Platform: $(Configuration)|$(Platform)" />
        <Message Text="DebugType is: $(DebugType)" />
        <Message Text="DebugSymbols is: $(DebugSymbols)" />

    Thanks in advance, Urvi

