Search Results

Search found 1372 results on 55 pages for 'sha 512'.

Page 42 of 55

  • Setting project for eclipse using maven

    - by egaga
    Hi, I'm trying to start modifying an existing application with Eclipse. Actually I had it working before, but I deleted the project, and now with "mvn eclipse:eclipse" I get the following:

        [INFO] Resource directory's path matches an existing source directory. Resources will be merged with the source directory src/main/resources
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
        [INFO] ------------------------------------------------------------------------
        [INFO] Trace
        org.apache.maven.lifecycle.LifecycleExecutionException: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:583)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:512)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:482)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:330)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:291)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:287)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: org.apache.maven.plugin.MojoExecutionException: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
            at org.apache.maven.plugin.eclipse.EclipseSourceDir.merge(EclipseSourceDir.java:302)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.extractResourceDirs(EclipsePlugin.java:1605)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.buildDirectoryList(EclipsePlugin.java:1490)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.createEclipseWriterConfig(EclipsePlugin.java:1180)
            at org.apache.maven.plugin.eclipse.EclipsePlugin.writeConfiguration(EclipsePlugin.java:1043)
            at org.apache.maven.plugin.ide.AbstractIdeSupportMojo.execute(AbstractIdeSupportMojo.java:511)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:451)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:558)
            ... 16 more

    Read the article

  • Aggregating, restructuring hourly time series data in R

    - by Advait Godbole
    I have a year's worth of hourly data in a data frame in R:

        > str(df.MHwind_load)  # compactly displays structure of data frame
        'data.frame':  8760 obs. of  6 variables:
         $ Date         : Factor w/ 365 levels "2010-04-01","2010-04-02",..: 1 1 1 1 1 1 1 1 1 1 ...
         $ Time..HRs.   : int  1 2 3 4 5 6 7 8 9 10 ...
         $ Hour.of.Year : int  1 2 3 4 5 6 7 8 9 10 ...
         $ Wind.MW      : int  375 492 483 476 486 512 421 396 456 453 ...
         $ MSEDCL.Demand: int  13293 13140 12806 12891 13113 13802 14186 14104 14117 14462 ...
         $ Net.Load     : int  12918 12648 12323 12415 12627 13290 13765 13708 13661 14009 ...

    While preserving the hourly structure, I would like to know how to extract:

    - a particular month, or a group of months
    - the first day/first week etc. of each month
    - all Mondays, all Tuesdays etc. of the year

    I have tried using "cut" without result and, after looking online, think that "lubridate" might be able to do this, but I haven't found suitable examples. I'd greatly appreciate help on this issue.

    Read the article

  • Can per-user randomized salts be replaced with iterative hashing?

    - by Chas Emerick
    In the process of building what I'd like to hope is a properly-architected authentication mechanism, I've come across a lot of materials that specify that:

    - user passwords must be salted
    - the salt used should be sufficiently random and generated per-user
    - ...therefore, the salt must be stored with the user record in order to support verification of the user password

    I wholeheartedly agree with the first and second points, but it seems like there's an easy workaround for the latter. Instead of doing the equivalent of (pseudocode here):

        salt = random();
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword, salt);

    Why not use the hash of the username as the salt? This yields a domain of salts that is well-distributed, (roughly) random, and each individual salt is as complex as your salt function provides for. Even better, you don't have to store the salt in the database -- just regenerate it at authentication-time. More pseudocode:

        salt = hash(username);
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword);

    (Of course, hash in the examples above should be something reasonable, like SHA-512, or some other strong hash.) This seems reasonable to me given what (little) I know of crypto, but the fact that it's a simplification over widely-recommended practice makes me wonder whether there's some obvious reason I've gone astray that I'm not aware of.
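    For illustration only, here is a minimal Python sketch of the two schemes being compared. The function names are hypothetical, and a real system would use a deliberately slow KDF such as bcrypt, scrypt or PBKDF2 rather than a bare SHA-512:

        import hashlib
        import os

        def hash_with_random_salt(password):
            # Conventional scheme: per-user random salt, stored alongside the hash.
            salt = os.urandom(16).hex()
            digest = hashlib.sha512((salt + password).encode()).hexdigest()
            return digest, salt  # both values must be persisted

        def hash_with_username_salt(username, password):
            # Scheme proposed in the question: derive the salt from the username,
            # so nothing extra is stored. The salt is then predictable for any
            # known username, which is the trade-off being asked about.
            salt = hashlib.sha512(username.encode()).hexdigest()
            return hashlib.sha512((salt + password).encode()).hexdigest()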

    Read the article

  • Slow MySQL Query not using filesort

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger.

    tablename = tweets_cache
    rows = 572,327

    This is the query I'm currently using that is slow, over 5 seconds:

        SELECT * FROM tweets_cache t
        WHERE t.province='' AND t.mp='0'
        ORDER BY t.published DESC
        LIMIT 50;

    If I take out either the WHERE or the ORDER BY, then the query is super fast, 0.016 seconds. I have the following indexes on the tweets_cache table:

    - PRIMARY
    - published
    - mp
    - category
    - province
    - author

    So I'm not sure why it's not using the indexes, since mp, province and published all have indexes. Doing a profile of the query shows that it's not using an index to sort the query and is using filesort, which is really slow:

        possible_keys = mp,province
        Extra = Using where; Using filesort

    I tried adding a new multi-column index on "province & mp". The explain shows this new index listed under "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query: http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png

    Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local machine runs super fast, milliseconds. So I copied all the same MySQL startup variables from the server to my local machine to make sure there wasn't some setting that might be causing this. But even after that the local query runs super fast, while the one on the live server is over 5 seconds.

    My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using:

        default-storage-engine = MYISAM
        max_connections = 800
        skip-locking
        key_buffer = 512M
        max_allowed_packet = 1M
        table_cache = 512
        sort_buffer_size = 4M
        read_buffer_size = 4M
        read_rnd_buffer_size = 16M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 128M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        # Disable Federated by default
        skip-federated
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M

    Read the article

  • Basic image resizing in Ruby on Rails

    - by Koning Baard XIV
    I'm creating a little photo sharing site for our home's intranet, and I have an upload feature, which uploads the photo at original size into the database. However, I also want to save the photo in four other sizes: W=1024, W=512, W=256 and W=128, but only the sizes smaller than the original size (e.g. if the original width is 511, only generate 256 and 128). How can I implement this? I already have this code to upload the photo:

    pic.rb <-- model

        def image_file=(input_data)
          self.filename = input_data.original_filename
          self.content_type = input_data.content_type.chomp
          self.binary_data = input_data.read
          # here it should generate the smaller sizes
          # and save them to self.binary_data_1024, etc...
        end

    new.rb <-- view

        <h1>New pic</h1>

        <% form_for(@pic, :html => {:multipart => true}) do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :title %><br />
            <%= f.text_field :title %>
          </p>
          <p>
            <%= f.label :description %><br />
            <%= f.text_field :description %>
          </p>
          <p>
            <%= f.label :image_file %><br />
            <%= f.file_field :image_file %>
          </p>
          <p>
            <%= f.submit 'Create' %>
          </p>
        <% end %>

        <%= link_to 'Back', pics_path %>

    Thanks

    Read the article

  • Fast Lightweight Image Comparison Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image, obtain a metric to use for similarity, check against use cases, return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, so that language is available.

    Specific constraints:

    - Qualcomm® MSM7201A™, 528 MHz processor
    - 320 x 480 pixel bitmap in 32 bit ARGB
    - ~2 seconds computational time for the native method to get the metric
    - ~2 seconds to compare the metric of the current image with the training data

    This is an academic project, so all ideas are welcome; anything you can think of or have heard about would be of interest to me.

    My ideas:

    - I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function
    - I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB)

    As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
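    Not part of the original question, but as a sketch of the kind of cheap signature being described (a normalised greyscale histogram), here is a hypothetical Python/Pillow version; on the device it would be reimplemented in C:

        from PIL import Image

        def histogram_signature(path, size=(64, 64)):
            # Downscale and convert to greyscale first so the signature is cheap
            # to compute and far smaller than the source bitmap.
            img = Image.open(path).convert("L").resize(size)
            hist = img.histogram()              # 256 greyscale bins
            total = float(sum(hist))
            return [count / total for count in hist]

        def histogram_distance(sig_a, sig_b):
            # L1 distance between normalised histograms; smaller means more similar.
            return sum(abs(a - b) for a, b in zip(sig_a, sig_b))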

    Read the article

  • Why does this base64 function stop working when increasing max length?

    - by flyout
    I am using this class to encode/decode text to base64. It works fine with MAX_LEN up to 512, but if I increase it to 1024 the decode function returns an empty var. This is the function:

        char* Base64::encode(char *src)
        {
            char* ptr = dst + 0;
            unsigned triad;
            unsigned int d_len = MAX_LEN;
            memset(dst, '\0', MAX_LEN);
            unsigned s_len = strlen(src);
            for (triad = 0; triad < s_len; triad += 3)
            {
                unsigned long int sr = 0;
                unsigned byte;
                for (byte = 0; (byte < 3) && (triad + byte < s_len); ++byte)
                {
                    sr <<= 8;
                    sr |= (*(src + triad + byte) & 0xff);
                }
                sr <<= (6 - ((8 * byte) % 6)) % 6; // shift left to next 6bit alignment
                if (d_len < 4)
                    return NULL;                   // error - dest too short
                *(ptr+0) = *(ptr+1) = *(ptr+2) = *(ptr+3) = '=';
                switch (byte)
                {
                case 3:
                    *(ptr+3) = base64[sr & 0x3f];
                    sr >>= 6;
                case 2:
                    *(ptr+2) = base64[sr & 0x3f];
                    sr >>= 6;
                case 1:
                    *(ptr+1) = base64[sr & 0x3f];
                    sr >>= 6;
                    *(ptr+0) = base64[sr & 0x3f];
                }
                ptr += 4;
                d_len -= 4;
            }
            return dst;
        }

    What could be causing this?
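    One plausible explanation (an assumption, not confirmed by the post): Base64 output is roughly 4/3 the size of the input, so if dst is also only MAX_LEN bytes, 1024 bytes of input no longer fit and the "dest too short" branch returns NULL. A quick Python check of the arithmetic involved:

        import base64

        def encoded_length(n_bytes):
            # Base64 produces 4 output characters for every 3 input bytes,
            # rounded up (plus 1 more byte if a C string terminator is needed).
            return 4 * ((n_bytes + 2) // 3)

        print(encoded_length(512))    # 684:  fits in a 1024-byte buffer
        print(encoded_length(1024))   # 1368: does not fit in a 1024-byte buffer

        # Sanity check against the standard library:
        assert len(base64.b64encode(b"x" * 1024)) == encoded_length(1024)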

    Read the article

  • Scaling Image to multiple sizes for Deep Zoom

    - by AnthonyWJones
    Let's assume I have a bitmap with a square aspect and width of 2048 pixels. In order to create the set of files needed by Silverlight's DeepZoomImageTileSource I need to scale this bitmap to 1024, then to 512, then to 256, etc., down to a 1 pixel image. There are two, I suspect naive, approaches:

    1. For each image required, scale the original full size image to the required size. However it seems excessive to be scaling the full image to the very small sizes.
    2. Having scaled from one level to the next, discard the original image and scale each successive scaled image as the source of the next smaller image. However I suspect that this would generate images in the 256-64 range with poorer fidelity than using option 1.

    Note that unlike with the Deep Zoom Composer, this tool is expected to act in an on-demand fashion, hence it needs to complete in a reasonable timeframe (tops 30 seconds). On the plus side, I'm only creating a single multiscale image, not a pyramid of multiple high-res images.

    I am outside my comfort zone here, any graphics experts got any advice? Am I wrong about point 2? Is point 1 reasonably performant and I'm worrying about nothing? Option 3?
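    As a rough illustration of option 2 (successive halving), here is a hypothetical Python/Pillow sketch, assuming a square source image that is already at the top size; with a decent resampling filter each halving step loses very little, which is why tiling tools commonly build pyramids this way:

        from PIL import Image

        def build_pyramid(path, top=2048):
            # Assumes the source is already top x top; each subsequent level is
            # produced from the previous one, so no single step downscales by
            # more than a factor of two.
            img = Image.open(path)
            levels = [img]
            size = top // 2
            while size >= 1:
                img = img.resize((size, size), Image.LANCZOS)
                levels.append(img)
                size //= 2
            return levels  # 2048, 1024, 512, ..., 1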

    Read the article

  • Large flags enumerations in C#

    - by LorenVS
    Hey everyone, got a quick question that I can't seem to find anything about... I'm working on a project that requires flag enumerations with a large number of flags (up to 40-ish), and I don't really feel like typing in the exact mask for each enumeration value:

        public enum MyEnumeration : ulong
        {
            Flag1 = 1,
            Flag2 = 2,
            Flag3 = 4,
            Flag4 = 8,
            Flag5 = 16,
            // ...
            Flag16 = 65536,
            Flag17 = 65536 * 2,
            Flag18 = 65536 * 4,
            Flag19 = 65536 * 8,
            // ...
            Flag32 = 65536 * 65536,
            Flag33 = 65536 * 65536 * 2 // right about here I start to get really pissed off
        }

    Moreover, I'm also hoping that there is an easy(ier) way for me to control the actual arrangement of bits on different endian machines, since these values will eventually be serialized over a network:

        public enum MyEnumeration : uint
        {
            Flag1 = 1,    // BIG: 0x00000001, LITTLE: 0x01000000
            Flag2 = 2,    // BIG: 0x00000002, LITTLE: 0x02000000
            Flag3 = 4,    // BIG: 0x00000004, LITTLE: 0x03000000
            // ...
            Flag9 = 256,  // BIG: 0x00000010, LITTLE: 0x10000000
            Flag10 = 512, // BIG: 0x00000011, LITTLE: 0x11000000
            Flag11 = 1024 // BIG: 0x00000012, LITTLE: 0x12000000
        }

    So, I'm kind of wondering if there is some cool way I can set my enumerations up like:

        public enum MyEnumeration : uint
        {
            Flag1 = flag(1), // BOTH: 0x80000000
            Flag2 = flag(2), // BOTH: 0x40000000
            Flag3 = flag(3), // BOTH: 0x20000000
            // ...
            Flag9 = flag(9), // BOTH: 0x00800000
        }

    What I've tried:

        // this won't work because Math.Pow returns double
        // and because C# requires constants for enum values
        public enum MyEnumeration : uint
        {
            Flag1 = Math.Pow(2, 0),
            Flag2 = Math.Pow(2, 1)
        }

        // this won't work because C# requires constants for enum values
        public enum MyEnumeration : uint
        {
            Flag1 = Masks.MyCustomerBitmaskGeneratingFunction(0)
        }

        // this is my best solution so far, but is definitely
        // quite clunky
        public struct EnumWrapper<TEnum> where TEnum
        {
            private BitVector32 vector;
            public bool this[TEnum index]
            {
                // returns whether the index-th bit is set in vector
            }
            // all sorts of overriding using TEnum as args
        }

    Just wondering if anyone has any cool ideas, thanks!

    Read the article

  • Creating an SQL variable character column > 255 characters supporting multiple databases

    - by Piers
    I have an application that stores data through an ODBC data source of the user's choosing. So far it has worked well on a range of database systems (e.g. JET, Oracle, SQL Server), as the SQL syntax is fairly simple. Now I am running into a problem where I need to store more than 255 characters in my strings. Previously I created the table using column type VARCHAR(255). Now if I try to create a table using, e.g., VARCHAR(512), then it falls over on Access databases. I know that I can use the MEMO type for Access, but this is non-standard SQL and will thus likely fail on other database systems (e.g. Oracle). Is there any widely supported SQL standard for creating text columns wider than 255 characters, or do I need to find another solution? The alternatives seem to me to be:

    1) Profile the database system and customise the SQL CREATE TABLE command based on the database system. I don't like this as it defeats the purpose of using ODBC.

    2) Add extra columns of 255 chars as required (e.g. LONGSTRING1, LONGSTRING2, ...) and concatenate after reading. I don't like this because it means the number of columns can vary between tables and it complicates read/write.

    Are there any other viable alternatives to these two options? Or is it possible to have an SQL-compliant CREATE TABLE command, supported by the majority of database vendors, that supports strings longer than 255 chars?

    Read the article

  • Delphi Pascal / Windows API - Small problem with SetFilePointerEx and parameter FILE_END

    - by SuicideClutchX2
    I know I am about to be slapped by at least one person who was helping me with this API. Alright, I have been able to use SetFilePointerEx just fine when setting the position only:

        SetFilePointerEx(PD, 512, @PositionVar, FILE_BEGIN);
        SetFilePointerEx(PD, 0, @PositionVar, FILE_CURRENT);

    Both work; I can set positions and even check my current one. But when I use FILE_END as per the documentation, no matter what the second parameter is, and whether or not I provide a pointer for the third parameter, it fails, even on a valid handle that many other operations are able to use without fail. For example:

        SetFailed := SetFilePointerEx(PD, 0, @PositionVar, FILE_END);
        SetFailed := SetFilePointerEx(PD, 0, nil, FILE_END);

    Whatever I put, it fails. I am working with a handle to a physical disk, and it most definitely has an end. SetFilePointer works just fine, it's just a little more trouble than I would like. It's not the end of the world, but what's happening?

    Read the article

  • How to initialize audio with Vala/SDL

    - by ioev
    I've been trying to figure this out for a few hours now. In order to start up the audio, I need to create an SDL.AudioSpec object and pass it to SDL.Audio.Open. The problem is, AudioSpec is a class with a private constructor, so when I try to create one I get:

        sdl.vala:18.25-18.43: error: `SDL.AudioSpec' does not have a default constructor
        AudioSpec audiospec = new SDL.AudioSpec();
                              ^^^^^^^^^^^^^^^^^^^

    And if I try to just assign values to its member vars like a struct (it's a struct in normal SDL) I get:

        sdl.vala:20.3-20.25: error: use of possibly unassigned local variable `audiospec'
        audiospec.freq = 22050;
        ^^^^^^^^^^^^^^^^^^^^^^^

    I found the valac doc here: http://valadoc.org/sdl/SDL.AudioSpec.html but it isn't much help at all. The offending code block looks like this:

        // setup the audio configuration
        AudioSpec audiospec;
        AudioSpec specback;

        audiospec.freq = 22050;
        audiospec.format = SDL.AudioFormat.S16LSB;
        audiospec.channels = 2;
        audiospec.samples = 512;

        // try to initialize sound with these values
        if (SDL.Audio.open(audiospec, specback) < 0) {
            stdout.printf("ERROR! Check audio settings!\n");
            return 1;
        }

    Any help would be greatly appreciated!

    Read the article

  • How to find an embedded platform?

    - by gmagana
    I am new to the hardware-sourcing side of embedded programming, and so after being completely overwhelmed with all the choices out there (PC104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!) I am asking here for some direction. Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need to do:

    Required:

    - 2 RS232 serial ports (one used all the time for primary UI, the second one not continuous)
    - 1 modem (9600+ baud ok) [modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both]
    - Minimum permanent/long term storage: whatever the O/S requires + 1 MB (executable) + 512 KB (data files)
    - RAM: minimal, whatever the O/S requires plus maybe 1 MB for the executable

    Nice to have:

    - USB port(s)
    - Ethernet network port
    - Wireless network

    Implementation languages (any O/S, I will adapt):

    - First choice is Java/C# (Mono ok)
    - Second choice is C/Pascal
    - Third is BASIC

    OK, given all this, I am having a lot of trouble finding hardware that will support this that is low in cost. Every manufacturer site I visit has a lot of options, and it's difficult to see if their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports", but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-)

    EDIT: A bit of background: this is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this on a modern platform. I can only change the programming and the motherboard hardware.

    Read the article

  • Django: Applying Calculations To A Query Set

    - by TheLizardKing
    I have a QuerySet that I wish to pass to a generic view for pagination:

        links = Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300]

    This is my "hot" page, which lists my 300 latest submissions (10 pages of 30 links each). I want to now sort this QuerySet by an algorithm that Hacker News uses:

        (p - 1) / (t + 2)^1.5

        p = votes minus submitter's initial vote
        t = age of submission in hours

    Now, because applying this algorithm over the entire database would be pretty costly, I am content with just the last 300 submissions. My site is unlikely to be the next digg/reddit, so while scalability is a plus it isn't required. My question is: how do I iterate over my QuerySet and sort it by the above algorithm? For more information, here are my applicable models:

        class Link(models.Model):
            category = models.ForeignKey(Category, blank=False, default=1)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)
            modified = models.DateTimeField(auto_now=True)
            url = models.URLField(max_length=1024, unique=True, verify_exists=True)
            name = models.CharField(max_length=512)

            def __unicode__(self):
                return u'%s (%s)' % (self.name, self.url)

        class Vote(models.Model):
            link = models.ForeignKey(Link)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)

            def __unicode__(self):
                return u'%s vote for %s' % (self.user, self.link)

    Notes: I don't have "downvotes", so just the presence of a Vote row is an indicator of a vote for a particular link by a particular user.
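    For what it's worth, one way to apply the ranking in Python after the queryset is evaluated (a sketch under the models above, not necessarily the best approach; sorting 300 rows in Python is cheap):

        from datetime import datetime

        def hotness(link, now=None):
            # (p - 1) / (t + 2)^1.5, with p taken from the annotated vote count
            # and t measured in hours since the link was created.
            now = now or datetime.now()
            age_hours = (now - link.created).total_seconds() / 3600.0
            return (link.votes - 1) / (age_hours + 2) ** 1.5

        hot_links = sorted(links, key=hotness, reverse=True)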

    Read the article

  • Mercurial: applying changes one by one to resolve merging issues

    - by Webinator
    I recently tried to merge a series of changesets and encountered a huge number of merging issues. Hence I'd like to try to apply each changeset, in order, one by one, in order to make the merging issues easier to manage. I'll give an example with 4 problematic changesets (514, 515, 516 and 517) [in my real case, I've got a bit more than that]:

        o  changeset: 517
        |
        o  changeset: 516
        |
        o  changeset: 515
        |
        o  changeset: 514
        |
        | @  changeset: 513
        | |
        | o  changeset: 512
        | |
        | o
        | |
        | o
        | |
        | o
        |/
        o  changeset: 508

    Note that I've got clones of my repos from before pulling the problematic changesets. When I pull the 4 changesets and try a merge, things are too complicated to resolve. So I wanted to pull only changeset 514, then merge. Then, once I solve the merging issue, pull only changeset 515 and apply it, etc. (I know the numbering will change; that is not my problem here). How am I supposed to do that, preferably without using any extension? (Because I'd like to understand Mercurial and what I'm doing better.) Is the way to go to generate a patch between 508 and 514 and apply that patch? (If so, how would I generate that patch?) Answers including concrete command-line example(s) most welcome :)

    Read the article

  • Why is my simple python gtk+cairo program running so slowly/stutteringly?

    - by synapz
    My program draws circles moving on the window. I think I must be missing some basic gtk/cairo concept because it seems to be running too slowly/stutteringly for what I am doing. Any ideas? Thanks for any help!

        #!/usr/bin/python

        import gtk
        import gtk.gdk as gdk
        import math
        import random
        import gobject

        # The number of circles and the window size.
        num = 128
        size = 512

        # Initialize circle coordinates and velocities.
        x = []
        y = []
        xv = []
        yv = []
        for i in range(num):
            x.append(random.randint(0, size))
            y.append(random.randint(0, size))
            xv.append(random.randint(-4, 4))
            yv.append(random.randint(-4, 4))

        # Draw the circles and update their positions.
        def expose(*args):
            cr = darea.window.cairo_create()
            cr.set_line_width(4)
            for i in range(num):
                cr.set_source_rgb(1, 0, 0)
                cr.arc(x[i], y[i], 8, 0, 2 * math.pi)
                cr.stroke_preserve()
                cr.set_source_rgb(1, 1, 1)
                cr.fill()
                x[i] += xv[i]
                y[i] += yv[i]
                if x[i] > size or x[i] < 0:
                    xv[i] = -xv[i]
                if y[i] > size or y[i] < 0:
                    yv[i] = -yv[i]

        # Self-evident?
        def timeout():
            darea.queue_draw()
            return True

        # Initialize the window.
        window = gtk.Window()
        window.resize(size, size)
        window.connect("destroy", gtk.main_quit)
        darea = gtk.DrawingArea()
        darea.connect("expose-event", expose)
        window.add(darea)
        window.show_all()

        # Self-evident?
        gobject.idle_add(timeout)
        gtk.main()
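    One thing worth ruling out (an assumption about the cause, not a confirmed diagnosis): gobject.idle_add runs the callback every time the main loop is idle, so the drawing area is asked to redraw as fast as the machine allows. Scheduling the redraw at a fixed frame interval is the usual alternative:

        # Redraw roughly 30 times per second instead of whenever the loop is idle.
        # (Sketch only; drop the gobject.idle_add(timeout) call if you use this.)
        gobject.timeout_add(33, timeout)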

    Read the article

  • Django: Determining if a user has voted or not

    - by TheLizardKing
    I have a long list of links that I spit out using the below code: total votes, submitted by, the usual stuff. But I am not 100% sure how to determine whether the currently logged in user has voted on a link or not. I know how to do this from within my view, but do I need to alter my view code below, or can I make use of the way templates work to determine it? I have read http://stackoverflow.com/questions/1528583/django-vote-up-down-method but I don't quite understand what's going on (and don't need any of the javascriptery).

    Models (snippet):

        class Link(models.Model):
            category = models.ForeignKey(Category, blank=False, default=1)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)
            modified = models.DateTimeField(auto_now=True)
            url = models.URLField(max_length=1024, unique=True, verify_exists=True)
            name = models.CharField(max_length=512)

            def __unicode__(self):
                return u'%s (%s)' % (self.name, self.url)

        class Vote(models.Model):
            link = models.ForeignKey(Link)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)

            def __unicode__(self):
                return u'%s vote for %s' % (self.user, self.link)

    Views (snippet):

        def hot(request):
            links = Link.objects.select_related().annotate(votes=Count('vote')).order_by('-created')
            for link in links:
                delta_in_hours = (int(datetime.now().strftime("%s")) - int(link.created.strftime("%s"))) / 3600
                link.popularity = ((link.votes - 1) / (delta_in_hours + 2)**1.5)
                if request.user.is_authenticated():
                    try:
                        link.voted = Vote.objects.get(link=link, user=request.user)
                    except Vote.DoesNotExist:
                        link.voted = None
            links = sorted(links, key=lambda x: x.popularity, reverse=True)
            links = paginate(request, links, 15)
            return direct_to_template(
                request,
                template = 'links/link_list.html',
                extra_context = {
                    'links': links,
                })

    The above view actually accomplishes what I need, but in what I believe to be a horribly inefficient way. This causes the dreaded n+1 queries; as it stands that's 33 queries for a page containing just 29 links, while originally I got away with just 4 queries. I would really prefer to do this using Django's ORM or at least .extra(). Any advice?
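    Not from the original post, but a sketch of the sort of single-query approach the asker is after, using only standard ORM calls (assuming the models above; untested):

        # Fetch the ids of every link on this page that the current user has
        # voted on, in one query, then mark each link from that set in Python.
        if request.user.is_authenticated():
            voted_ids = set(
                Vote.objects.filter(user=request.user, link__in=links)
                            .values_list('link_id', flat=True))
        else:
            voted_ids = set()

        for link in links:
            link.voted = link.pk in voted_ids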

    Read the article

  • How can I modify the value of a string defined in a struct?

    - by Eric
    Hi, I have the following code in C++:

        #define TAM 4000
        #define NUMPAGS 512

        struct pagina {
            bitset<12> direccion;
            char operacion;
            char permiso;
            string *dato;
            int numero;
        };

        void crearPagina(pagina* pag[], int pos, int dir) {
            pagina * paginas = (pagina*)malloc(sizeof(char) * TAM);
            paginas->direccion = bitset<12>(dir);
            paginas->operacion = 'n';
            paginas->permiso = 'n';
            string **tempDato = &paginas->dato;
            char *temp = " ";
            **tempDato = temp;
            paginas->numero = 0;
            pag[pos] = paginas;
        }

    I want to modify the value of the variable called "string *dato" in the struct pagina but, every time I want to assign a new value, I get a segmentation fault. In this case I'm using a pointer to string, but I have also tried with a string. In a few words, I want to do the following:

        pagina->dato = "test";

    Any idea? Thanks in advance!!!

    Read the article

  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method:

        SET @PageNum = 2;
        SET @PageSize = 10;

        WITH OrdersRN AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum
                  ,*
            FROM dbo.Orders
        )
        SELECT *
        FROM OrdersRN
        WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1
                         AND @PageNum * @PageSize
        ORDER BY OrderDate
                ,OrderID;

    Is there anything I should be aware of? The table has millions of records. Thx.

    EDIT: After using the suggested MAXROWS method for some time (which works really, really fast) I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy with its speed so far (I am working with a view having more than 1M records with 10 columns). To use any kind of query I use the following modification:

        PROCEDURE [dbo].[PageSelect]
        (
            @Sql nvarchar(512),
            @OrderBy nvarchar(128) = 'Id',
            @PageNum int = 1,
            @PageSize int = 0
        )
        AS
        BEGIN
            SET NOCOUNT ON
            Declare @tsql as nvarchar(1024)
            Declare @i int, @j int

            if (@PageSize <= 0) OR (@PageSize > 10000) SET @PageSize = 10000 -- never return more than 10K records

            SET @i = (@PageNum - 1) * @PageSize + 1
            SET @j = @PageNum * @PageSize

            SET @tsql = 'WITH MyTableOrViewRN AS
                        (
                            SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum
                                  ,*
                            FROM MyTableOrView
                            WHERE ' + @Sql + '
                        )
                        SELECT * FROM MyTableOrViewRN
                        WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + cast(@j as varchar)

            exec(@tsql)
        END

    If you use this procedure, make sure you prevent SQL injection.

    Read the article

  • Dealing with array of IntPtr

    - by Padu Merloti
    I think I'm close, and I bet the solution is something stupid. I have a C++ native DLL where I define the following function:

        DllExport bool __stdcall Open(const char* filePath, int *numFrames, void** data)
        {
            //creates the list of arrays here... don't worry, lifetime is managed somewhere else
            //foreach item of the list:
            {
                BYTE* pByte = GetArray(i);
                //here's where my problem lives
                *(data + i * sizeofarray) = pByte;
            }
            *numFrames = total number of items in the list
            return true;
        }

    Basically, given a file path, this function creates a list of byte arrays (BYTE*) and should return a list of pointers via the data param, each pointing to a different byte array. I want to pass an array of IntPtr from C# and be able to marshal each individual array in order. Here's the code I'm using:

        [DllImport("mydll.dll", EntryPoint = "Open")]
        private static extern bool MyOpen(
            string filePath,
            out int numFrames,
            out IntPtr[] ptr);

        internal static bool Open(
            string filePath,
            out int numFrames,
            out Bitmap[] images)
        {
            var ptrList = new IntPtr[512];
            MyOpen(filePath, out numFrames, out ptrList);

            images = new Bitmap[numFrames];
            var len = 100; //for sake of simplicity
            for (int i = 0; i < numFrames; i++)
            {
                var buffer = new byte[len];
                Marshal.Copy(ptrList[i], buffer, 0, len);
                images[i] = CreateBitmapFromBuffer(buffer, height, width);
            }
            return true;
        }

    The problem is in my C++ code. When I assign *(data + i * sizeofarray) = pByte; it corrupts the array of pointers... what am I doing wrong?

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it creates a huge lag that lasts a couple of seconds. Then everything goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running whenever the X period happens. The X period could be between 30 minutes and 2 hours. So, for example, it could happen every 30 minutes for the next 12 hours, or it could happen every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size= 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections=512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided between 3 databases. The most heavily written-to database is InnoDB; the other ones are mostly read. Several of the tables in the InnoDB database have more than 2 million records. The other databases top out at about 400 thousand records and do not change very often. The PC is a Core 2 Duo 8400 with 4GB RAM, running 32-bit Ubuntu.

    Read the article

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04 I receive a failure message that the service was unable to start. Any insight into the issue would be greatly appreciated! Below are the contents of startup_log and startup_err.

    startup_log:

        {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.100>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
        {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
        {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

    startup_err:

        Crash dump was written to: erl_crash.dump
        Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

    Read the article

  • Specify More "case" in switch parameters

    - by Sophie Mackeral
    I have the following code:

        $ErrorType = null;
        switch ($ErrNo) {
            case 256,1:
                $ErrorType = "Error";
                break;
            case 512,2:
                $ErrorType = "Warning";
                break;
            case 1024,8:
                $ErrorType = "Notice";
                break;
            case 2048:
                $ErrorType = "Strict Warning";
                break;
            case 8192:
                $ErrorType = "Depreciated";
                break;
        }

    The problem is that I'm working from the pre-defined constants for an error handling software solution, and I cannot specify more than one "case" value for a given error category. For example:

        switch ($ErrNo) {
            case 1:
                $ErrorType = "Error";
                break;
            case 256:
                $ErrorType = "Error";
        }

    That's a repeat of code, whereas a solution like my first example would be beneficial, as two integers fall under the same category. Instead, I'm returned the following:

        Parse error: syntax error, unexpected ',' in Action_Error.php on line 37

    Read the article

  • What is the best / proper idiom in django for modifying a field during a .save() where you need to o

    - by MDBGuy
    Hi, say I've got:

        class LogModel(models.Model):
            message = models.CharField(max_length=512)

        class Assignment(models.Model):
            someperson = models.ForeignKey(SomeOtherModel)

            def save(self, *args, **kwargs):
                super(Assignment, self).save()
                old_person = #?????
                LogModel(message="%s is no longer assigned to %s" % (old_person, self)).save()
                LogModel(message="%s is now assigned to %s" % (self.someperson, self)).save()

    My goal is to save to LogModel some messages about who Assignment was assigned to. Notice that I need to know the old, pre-save value of this field. I have seen code that suggests, before super().save(), retrieving the instance from the database via primary key and grabbing the old value from there. This could work, but is a bit messy. In addition, I plan to eventually split this code out of the .save() method via signals, namely pre_save() and post_save(). Trying to use the above logic (retrieve from the db in pre_save, make the log entry in post_save) seemingly fails here, as pre_save and post_save are two separate methods. Perhaps in pre_save I can retrieve the old value and stick it on the model as an attribute? I was wondering if there was a common idiom for this. Thanks.
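    For reference, a minimal sketch of the "fetch the old row first" idiom mentioned above (my own illustration, assuming the models as given; untested):

        def save(self, *args, **kwargs):
            old_person = None
            if self.pk:  # only on updates, not on the first insert
                try:
                    # The extra query: read the currently stored value before
                    # it gets overwritten by this save().
                    old_person = Assignment.objects.get(pk=self.pk).someperson
                except Assignment.DoesNotExist:
                    pass
            super(Assignment, self).save(*args, **kwargs)
            if old_person is not None and old_person != self.someperson:
                LogModel.objects.create(
                    message="%s is no longer assigned to %s" % (old_person, self))
            LogModel.objects.create(
                message="%s is now assigned to %s" % (self.someperson, self))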

    Read the article

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM. How to optimize so there is enough memory?

    - by TomSawyer
    I have a dedicated server running Nginx as a proxy for Apache, plus Memcache and MySQL, with 4G of RAM. Lately the number of visitors to my site has not increased, but the server always gets overloaded at certain times of day (9AM - 15PM). RAM in use increases second by second until it is full; at that moment the server becomes overloaded. I have to kill all the Apache and MySQL services and reboot to get free memory, and then it fills up again. That's the terrible circle.

    Here is the RAM in use at the moment:

        160 (nginx), 220 (apache), 512 (memcache), 924 (mysql)

    Here are the process counts:

        4 (nginx), 14 (apache), 5 (memcache), 20 (mysql)

    And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    Read the article
