Search Results

Search found 8391 results on 336 pages for 'partial hash arguments'.

Page 228/336

  • Own params to PeriodicTask run() method in Celery

    - by Alex Isayko
    Hello all! I am writing a small Django application in which each model object needs its own periodic task, executed at a certain interval. I'm using Celery for this, but I can't understand one thing:

        class ProcessQueryTask(PeriodicTask):
            run_every = timedelta(minutes=1)

            def run(self, query_task_pk, **kwargs):
                logging.info('Process celery task for QueryTask %d' % query_task_pk)
                task = QueryTask.objects.get(pk=query_task_pk)
                task.exec_task()
                return True

    Then I do the following:

        >>> from tasks.tasks import ProcessQueryTask
        >>> result1 = ProcessQueryTask.delay(query_task_pk=1)
        >>> result2 = ProcessQueryTask.delay(query_task_pk=2)

    The first call succeeds, but the subsequent periodic calls fail in the celeryd server with: TypeError: run() takes exactly 2 non-keyword arguments (1 given). So, can I pass my own params to PeriodicTask's run()? Thanks!
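
    One workaround, sketched below in outline only (it assumes the Celery 2.x-era celery.task API that the question's code suggests, and that the QueryTask model is importable), is to keep the periodic task argument-free - celerybeat invokes run() without custom parameters - and have it dispatch one ordinary Task per object:

        from datetime import timedelta
        import logging

        from celery.task import PeriodicTask, Task  # Celery 2.x-style task classes
        from myapp.models import QueryTask  # hypothetical import path for the question's model

        class ProcessQueryTask(Task):
            """Ordinary task: safe to call with per-object arguments."""
            def run(self, query_task_pk, **kwargs):
                logging.info('Process celery task for QueryTask %d', query_task_pk)
                task = QueryTask.objects.get(pk=query_task_pk)
                task.exec_task()
                return True

        class DispatchQueryTasks(PeriodicTask):
            """Periodic task: celerybeat calls run() with no custom arguments."""
            run_every = timedelta(minutes=1)

            def run(self, **kwargs):
                # Fan out one regular task per QueryTask row.
                for pk in QueryTask.objects.values_list('pk', flat=True):
                    ProcessQueryTask.delay(query_task_pk=pk)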

    Read the article

  • How to pass a file (read from Java) most effectively to a native method?

    - by soc
    Hi, I have approx. 30,000 files (1 MB each) which I want to pass to a native method that takes just a byte array and its size as arguments. I looked through some examples and benchmarks (like http://nadeausoftware.com/articles/2008/02/java_tip_how_read_files_quickly), but all of them do other fancy things. Basically I don't care about the contents of the file; I don't want to access anything in the file or the byte array or do anything else with it. I just want to get a file into a native method that accepts a byte array, as fast as possible. At the moment I'm using RandomAccessFile, but that's horribly slow (10 MB/s). Is there anything like

        byte[] readTheWholeFile(File file) { ... }

    which I could put into

        native void fancyCMethod(readTheWholeFile(myFile), myFile.length())

    What would you suggest?

    Read the article

  • Upsides of a timebox for a customer

    - by Ivo
    So I have a customer with a potentially big project who (of course) does not know exactly what they want. The size of this project could be more than 4 or 5 months, so that is a big risk. That's why I want to sell a timebox: for me it takes away the risk of spending 10 months instead of 5 for the same price. The problem is that I can't come up with good arguments to convince the customer that a timebox is better for them. Any suggestions? How do you handle this?

    Read the article

  • Class library modification / migration

    - by Clint
    I have 3 class libraries: a BBL, a DAL, and a DATA library (about 15 datasets). Currently 4 [major] applications utilize the functionality in these DLLs. I'm rewriting one of those applications and I need to (1) use some of the existing functionality in the libraries, (2) change some of it, (3) add new functionality, and (4) add new datasets. I'm going back and forth about the best way to do this while keeping my risks at a minimum. Some thoughts:

    1) Use the existing projects and don't make any modifications, only additions
    2) Make new libraries, bring over the code I can use, and make additions as needed
    3) Implement partial classes in the existing projects

    Eventually all 4 applications will use the newest functionality, but it will be a slow migration, so the old code can't be deprecated yet. Any thoughts?

    Read the article

  • Matching strings

    - by kiran
    Write two functions, called countSubStringMatch and countSubStringMatchRecursive, that take two arguments: a key string and a target string. These functions iteratively and recursively count the number of instances of the key in the target string. You should complete definitions for

        def countSubStringMatch(target, key):

    and

        def countSubStringMatchRecursive(target, key):

    For the remaining problems, we are going to explore other substring matching ideas. These problems can be solved with either an iterative function or a recursive one. You are welcome to use either approach, though you may find iterative approaches more intuitive in these cases of matching linear structures.
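
    A minimal sketch of one possible approach (assuming a non-empty key and counting overlapping occurrences, which may or may not match the assignment's intended definition):

        def countSubStringMatch(target, key):
            # Iterative version: repeatedly search with str.find, allowing overlaps.
            count = 0
            index = target.find(key)
            while index != -1:
                count += 1
                index = target.find(key, index + 1)
            return count

        def countSubStringMatchRecursive(target, key):
            # Recursive version: count the first match, then recurse past it.
            index = target.find(key)
            if index == -1:
                return 0
            return 1 + countSubStringMatchRecursive(target[index + 1:], key)

        # Example: both calls should print 3
        print(countSubStringMatch("atgacatgcacaagtatgcat", "atg"))
        print(countSubStringMatchRecursive("atgacatgcacaagtatgcat", "atg"))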

    Read the article

  • Programmatic binding of accelerators in wxPython

    - by Inductiveload
    I am trying to programmatically create and bind a table of accelerators in wxPython in a loop, so that I don't need to worry about getting and assigning new IDs for each accelerator (and with a view to loading the handler list from some external resource, rather than hard-coding it). I also pass some arguments to the handler via a lambda, since a lot of my handlers will be the same but with different parameters (move, zoom, etc.). The class is subclassed from wx.Frame and setup_accelerators() is called during initialisation.

        def setup_accelerators(self):
            bindings = [
                (wx.ACCEL_CTRL, wx.WXK_UP, self.on_move, 'up'),
                (wx.ACCEL_CTRL, wx.WXK_DOWN, self.on_move, 'down'),
                (wx.ACCEL_CTRL, wx.WXK_LEFT, self.on_move, 'left'),
                (wx.ACCEL_CTRL, wx.WXK_RIGHT, self.on_move, 'right'),
            ]
            accelEntries = []
            for binding in bindings:
                eventId = wx.NewId()
                accelEntries.append((binding[0], binding[1], eventId))
                self.Bind(wx.EVT_MENU,
                          lambda event: binding[2](event, binding[3]),
                          id=eventId)
            accelTable = wx.AcceleratorTable(accelEntries)
            self.SetAcceleratorTable(accelTable)

        def on_move(self, e, direction):
            print direction

    However, this appears to bind all the accelerators to the last entry, so that Ctrl+Up prints "right", as do the other three. How do I correctly bind multiple handlers in this way?
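
    This is the usual late-binding behaviour of Python closures: each lambda looks up binding when the event fires, by which time the loop has finished and binding refers to the last entry. A minimal sketch of one common fix is to capture the current values as lambda default arguments (a drop-in replacement for the loop in setup_accelerators() above; the unpacked names modifier, keycode, handler and direction are mine, not from the question):

        for modifier, keycode, handler, direction in bindings:
            eventId = wx.NewId()
            accelEntries.append((modifier, keycode, eventId))
            # Default arguments are evaluated now, so each lambda keeps its own
            # handler and direction instead of sharing the loop variables.
            self.Bind(wx.EVT_MENU,
                      lambda event, handler=handler, direction=direction: handler(event, direction),
                      id=eventId)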

    Read the article

  • Go, AppEngine: How to structure templates for application

    - by laslowh
    How are people handling the use of templates in their Go-based AppEngine applications? Specifically, I'm looking for a project structure that affords the following:

    - Hierarchical (directory) structure of templates and partial templates
    - Allows me to use HTML tools/editors on my templates (embedding template text in xxx.go files makes this difficult)
    - Automatic reload of template text when on the dev server

    Potential stumbling blocks are:

    - template.ParseGlob() will not traverse recursively.
    - For performance reasons it has been recommended not to upload your templates as raw text files (because those text files reside on different servers than the executing code).

    Please note that I am not looking for a tutorial/examples of the use of the template package. This is more of an app structure question. That being said, if you have code that solves the above problems, I would love to see it. Thanks in advance.

    Read the article

  • optimizing oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It is taking almost 200+ seconds to execute. I've pasted the execution plan as well.

        SELECT user_id, ROLE_ID, effective_from_date, effective_to_date,
               participant_code, ACTIVE
        FROM   CMP_USER_ROLE E
        WHERE  ACTIVE = 0
        AND    (SYSDATE BETWEEN effective_from_date AND effective_to_date
                OR TO_CHAR(effective_to_date, 'YYYY-Q') = '2010-2')
        AND    participant_code = 'NY005'
        AND    NOT EXISTS (
                   SELECT 1
                   FROM   CMP_USER_ROLE r
                   WHERE  r.USER_ID = E.USER_ID
                   AND    r.role_id = E.role_id
                   AND    r.ACTIVE = 4
                   AND    E.effective_to_date <= (SELECT MAX(last_update_date)
                                                  FROM   CMP_USER_ROLE S
                                                  WHERE  S.role_id = r.role_id
                                                  AND    S.role_id = r.role_id
                                                  AND    S.ACTIVE = 4))

    Explain plan:

        -----------------------------------------------------------------------------------------------------
        | Id  | Operation                        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
        -----------------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT                 |                  |     1 |    37 |   154   (2)| 00:00:02 |
        |*  1 |  FILTER                          |                  |       |       |            |          |
        |*  2 |   TABLE ACCESS BY INDEX ROWID    | USER_ROLE        |     1 |    37 |    30   (0)| 00:00:01 |
        |*  3 |    INDEX RANGE SCAN              | N_USER_ROLE_IDX6 |    27 |       |     3   (0)| 00:00:01 |
        |*  4 |   FILTER                         |                  |       |       |            |          |
        |   5 |    HASH GROUP BY                 |                  |     1 |    47 |   124   (2)| 00:00:02 |
        |*  6 |     TABLE ACCESS BY INDEX ROWID  | USER_ROLE        |   159 |  3339 |   119   (1)| 00:00:02 |
        |   7 |      NESTED LOOPS                |                  |    11 |   517 |   123   (1)| 00:00:02 |
        |*  8 |       TABLE ACCESS BY INDEX ROWID| USER_ROLE        |     1 |    26 |     4   (0)| 00:00:01 |
        |*  9 |        INDEX RANGE SCAN          | N_USER_ROLE_IDX5 |     1 |       |     3   (0)| 00:00:01 |
        |* 10 |       INDEX RANGE SCAN           | N_USER_ROLE_IDX2 |   957 |       |    74   (2)| 00:00:01 |
        -----------------------------------------------------------------------------------------------------

    Read the article

  • How can I ensure that nested transactions are committed independently of each other?

    - by Caldera
    If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others? In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)? I have a stored procedure defined something like this in SQL Server 2000:

        CREATE PROCEDURE toplevel_proc ..
        AS
        BEGIN
            ...
            while @row_count <= @max_rows
            begin
                select @parameter ... where rownum = @row_count
                exec nested_proc @parameter
                select @row_count = @row_count + 1
            end
        END

    Read the article

  • Is there a Javadoc-like plugin for Xcode that automatically generates the doc template?

    - by Mark
    I'm aware of Doxygen for generating the documentation. What I'm looking for is a quick way to insert documentation in Xcode, similar to what Eclipse does when editing Java files. Let's say I have an Objective-C method with a couple of arguments like this:

        -(NSInteger*) sumOf: (NSInteger*) one and:(NSInteger*) two {...

    In Eclipse, if you place the cursor above the method and type /**<Enter>, you get a Javadoc template pre-populated with @param and @return tags. Is it possible to achieve something similar in Xcode? After typing /**<Enter>, I'd like to get this automatically:

        /**
         *
         * @param one
         * @param two
         *
         * @return
         */
        -(NSInteger*) sumOf: (NSInteger*) one and:(NSInteger*) two {...

    Read the article

  • select GUI on windows (wxPy vs pyQt)

    - by Golovko
    Hello! We plan to create an application for monitoring and configuring our service (which runs on a remote server). After a long discussion we decided on Python as the language for our app, because we love and know Python (better than English, really), but we don't know which GUI toolkit is preferred for our aims. We need an app that is fast (to develop and to run), whose users are admins, maintainers and account managers. There are two GUI toolkits for Python that we know: wxPython and PyQt. Does anybody have arguments for or against either candidate? And does anyone know of commercial applications built with these products (Python versions of the toolkits only)? Links are welcome. Thanks, and excuse my English.

    Read the article

  • C#: Get key and value types of non-generic IDictionary at runtime

    - by Yang Zou
    Hi there. I am wondering how I can get the key and value types of a non-generic IDictionary at runtime. For a generic IDictionary, we can use reflection to get the generic arguments, which has been answered here. But for a non-generic IDictionary, for instance HybridDictionary, how can I get the key and value types? Thanks. Edit: I may not have described my problem properly. For a non-generic IDictionary, if I have a HybridDictionary declared as

        HybridDictionary dict = new HybridDictionary();
        dict.Add("foo", 1);
        dict.Add("bar", 2);

    how can I find out that the type of the key is string and the type of the value is int? Did I make the question clear? Thanks.

    Read the article

  • Best way to have unique key over 500M varchar(255) records in mysql/innodb?

    - by taw
    I have a url column with a unique key over it - but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't all fit in memory. So I was thinking: how about adding a column of md5(url), holding 16 bytes of binary data, and putting the unique key on that instead? What would be the best datatype for that? I'd love to be able to just see the 32-character hex hash, while MySQL would convert it to/from 16 binary bytes and index that, as programs using the database might have some trouble with arbitrary binary data that I'd rather avoid if possible (also I'm a bit afraid that MySQL might get some strange ideas about character sets and, for example, overallocate storage by 3:1 because it thinks it might need utf8 - how do I avoid that for sure?).
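
    On the application side, a minimal Python sketch (the urls table, the url_md5 BINARY(16) column, and the DB-API cursor are assumptions for illustration, not from the question) of storing the 16 raw digest bytes while still reading them back as 32-character hex:

        import hashlib

        def url_digest(url):
            # 16 raw bytes, suitable for a BINARY(16) column;
            # hashlib.md5(...).hexdigest() gives the 32-character hex form instead.
            return hashlib.md5(url.encode('utf-8')).digest()

        # Hypothetical DB-API usage:
        # cur.execute("UPDATE urls SET url_md5 = %s WHERE url = %s", (url_digest(url), url))
        # cur.execute("SELECT HEX(url_md5), url FROM urls WHERE url_md5 = %s", (url_digest(url),))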

    Read the article

  • New records added to DataGridView aren't displayed

    - by Ross
    I have a custom Order class, groups of which are stored in a List<Order> and shown in a DataGridView. I think the problem is in my implementation, so here's how I'm using it. In the form enclosing the DataGridView (as OrdersDataGrid):

        public partial class MainForm : Form
        {
            public static List<Order> Orders;

            public MainForm()
            {
                // code to populate Orders with values, otherwise sets Orders to new List<Order>();
                OrdersDataGrid.DataSource = Orders;
            }
        }

    Then in another form that adds an Order:

        // Save event
        public void Save(object sender, EventArgs e)
        {
            Order order = BuildOrder(); // method that constructs an order object from form data
            MainForm.Orders.Add(order);
        }

    From what I can tell from the console, the order is added successfully. I thought the DataGrid would be updated automatically after this since Orders has changed - is there something I'm missing? The DataGrid accepts the class, since it generates columns from its members.

    Read the article

  • template specialization using member enums

    - by Altan
        struct Bar {
            enum { Special = 4 };
        };

        template<class T, int K> struct Foo {};

        template<class T> struct Foo<T::Special> {};

    Usage:

        Foo<Bar> aa;

    This fails to compile using gcc 4.1.2. It complains about the usage of T::Special for partial specialization of Foo. If Special were a class, the solution would be to put typename in front of it. Is there something equivalent for enums (or integers)? Thanks, Altan

    Read the article

  • I built my rails app with sqlite and without specifying any db field sizes, is my app now foobared for production?

    - by Tim Santeford
    I've been following a lot of good tutorials on building Rails apps, but I seem to be missing the whole "specify and validate db field sizes" part. I love not needing to think about it when roughing out an app (I would never have done this with a PHP or ASP.NET app). However, now that I'm ready to go to production, I think I might have done myself a disservice by not specifying field sizes as I went. My production db will be MySQL. What is the best practice here? Do I need to go through all of my migration files and specify sizes, update all the models with validation, and update all my form partial views with input max widths? Or am I missing a critical step in my development process?

    Read the article

  • Multiple Inheritance Debates II: according to Stroustrup

    - by asksuperuser
    I know the traditional arguments for why interface inheritance is preferred to multiple inheritance very well; there is already a post here: http://stackoverflow.com/questions/191691/should-c-include-multiple-inheritance. But according to Stroustrup, the real reason Microsoft and Sun decided to get rid of multiple inheritance is that they had a vested interest in doing so: instead of putting features in the languages, they put them in frameworks, so that people become tied to their platform instead of having the same capability at the language-standard level. What do you think? Why do Sun and Microsoft consider developers too immature to just make the choice themselves?

    Read the article

  • Securing database keys for client-side processing

    - by danp
    I have a tree of information which is sent to the client in a JSON object. In that object, I don't want to have raw IDs which are coming from the database. I thought of making a hash of the id and a field in the object (title, for example) or a salt, but I'm worried that this might have a serious effect on processing overhead:

        SELECT * FROM `things` WHERE md5(concat(id, 'some salt')) = md5('1some salt');

    Is there a standard practice for obscuring IDs in this kind of situation?
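
    One common alternative, sketched below in Python (the secret key, token length, and helper names are my own assumptions, not from the question), is to compute a keyed token in the application and keep a server-side map back to the real ID, so lookups still hit the indexed primary key instead of hashing inside the WHERE clause:

        import hashlib
        import hmac

        SECRET = b'server-side secret'  # assumption: never sent to the client

        def public_token(row_id):
            # Deterministic, non-reversible token to expose in the JSON tree.
            mac = hmac.new(SECRET, str(row_id).encode('utf-8'), hashlib.sha256)
            return mac.hexdigest()[:16]

        def build_lookup(row_ids):
            # Map from exposed token back to the real primary key, so the
            # database query can stay "WHERE id = %s" on an indexed column.
            return {public_token(rid): rid for rid in row_ids}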

    Read the article

  • How are files (especially audio files) organized internally?

    - by mystify
    I'm trying to grok this: Apple talks about "packets" in audio files, and there is a fancy function called AudioFileReadPackets which takes a lot of arguments. One of them specifies the start packet, and another the number of packets you want to read. So I imagine an audio file to look like this internally: it's made up of a lot of packets. If the file is in a variable bit rate format, every packet may have a different size; if the file is in a constant bit rate format, every packet is the same size. So an audio file is like a truck full of boxes, and every box contains some interesting stuff. Is that correct? Does it apply to any kind of file? Is this what files actually look like internally?

    Read the article

  • Username correct, password incorrect?

    - by jonnnnnnnnnie
    In a login system, how can you tell whether the user has entered the password incorrectly? Do you perform two SQL queries, one to find the username, and then one to find the username and matching (salted + hashed, etc.) password? I'm asking because if the user entered the password incorrectly, I want to update the failed_login_attempts column I have. If you perform two queries, wouldn't that increase overhead? If you did a query like this, how would you tell whether the password entered was correct, or whether the username doesn't exist at all?

        SELECT * FROM author WHERE username = '$username' AND password = '$password' LIMIT 1

    (NB: I'm keeping it simple; the real one will use a hash and salt and will sanitize input.) Something like this:

        $user = perform_Query() // get username and password?

        if ($user['username'] == $username && $user['password'] == $password) {
            return $user;
        } elseif ($user['username'] == $username && $user['password'] !== $password) {
            // here the password doesn't match
            // update failed_login_attempts += 1
        }
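
    One way to keep it to a single lookup, shown here as a Python sketch rather than PHP (the salt column, the SHA-256 hashing scheme, and the DB-API cursor are assumptions for illustration): select by username only, compare the stored hash in application code, and bump the counter only when the row exists but the hash doesn't match.

        import hashlib
        import hmac

        def check_login(cur, username, password):
            # One query: fetch by username only.
            cur.execute(
                "SELECT username, password, salt, failed_login_attempts "
                "FROM author WHERE username = %s LIMIT 1",
                (username,),
            )
            row = cur.fetchone()
            if row is None:
                return None  # username does not exist

            stored_hash, salt = row[1], row[2]
            attempt = hashlib.sha256((salt + password).encode('utf-8')).hexdigest()
            if hmac.compare_digest(attempt, stored_hash):
                return row  # correct password

            # Username exists but the password is wrong: record the failure.
            cur.execute(
                "UPDATE author SET failed_login_attempts = failed_login_attempts + 1 "
                "WHERE username = %s",
                (username,),
            )
            return False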

    Read the article

  • parallel.foreach with custom collection

    - by SchwartzE
    I am extending the System.Net.Mail.MailAddress class to include an ID field, so I created a new custom MailAddress class that inherits from the existing class, and a new custom MailAddressCollection class. I then overrode the existing System.Net.Mail.MailMessage.To to use my new collection. I would like to process the recipients in parallel, but I can't get the syntax right. This is the syntax I am using:

        Parallel.ForEach(EmailMessage.To, (MailAddress address) =>
        {
            emailService.InsertRecipient(emailId, address.DisplayName, address.Address, " ");
        });

    I get the following errors:

        The best overloaded method match for 'System.Threading.Tasks.Parallel.ForEach(System.Collections.Generic.IEnumerable, System.Action)' has some invalid arguments
        Argument 1: cannot convert from 'EmailService.MailAddressCollection' to 'System.Collections.Generic.IEnumerable'

    What syntax do I need to use custom collections?

    Read the article

  • Call function based off of a string in Lisp

    - by powerj1984
    I am passing command line arguments to my Lisp program, and they are formatted like this when they hit my main function:

        ("1 1 1" "dot" "2 2 2")

    I have a dot function and would like to call it directly from the argument, but this isn't possible, because something like (funcall (second args) ...) receives "dot" and not dot as the function name. I tried variations of this function:

        (defun remove-quotes (s)
          (setf (aref s 0) '""))

    to no avail, before realizing that the quotes were not really part of the string. Is there a simple way to do this, or should I just check each string and call the appropriate function? Thanks!

    Read the article

  • git - is there a way to get only required files in the working directory

    - by spoonboy
    I'm new to git and trying to use it with a project that has many (several hundred) source files. The problem I have is that git extracts all of the project's sources into my working directory on checkout. This makes a lot of mess, as I have to jump between the files and can unintentionally change or corrupt files that I wasn't even planning to touch. I would prefer to extract only the sources that I'm going to modify and then work with them. So, is there a way to tell git that I am only going to work with specific sources, so that only these sources are extracted to the working directory? Note that this is not a partial checkout or something like that - I'm fine with checking out the whole branch. It's more about organising a working folder. Thanks.

    Read the article

  • Constructors + Dependency Injection

    - by Sunny
    If I am writing a class with more than one constructor parameter, like:

        class A {
            public A(Dependency1 d1, Dependency2 d2, ...) {}
        }

    I usually create an "argument holder" type of class like:

        class AArgs {
            public Dependency1 d1 { get; private set; }
            public Dependency2 d2 { get; private set; }
            ...
        }

    and then:

        class A {
            public A(AArgs args) {}
        }

    Typically, using a DI container I can configure the constructor dependencies and resolve them, so there is minimal impact when the constructors need to change. Is this considered an anti-pattern, and/or are there any arguments against doing this?

    Read the article

  • Process requires redirected input

    - by initialZero
    I have a UNIX native executable that requires its arguments to be fed in like this: prog.exe < foo.txt, where foo.txt has two lines:

        bar
        baz

    I am using java.lang.ProcessBuilder to execute this command. Unfortunately, prog.exe will only work with the redirect from a file. Is there some way I can mimic this behavior in Java? Of course,

        ProcessBuilder pb = new ProcessBuilder("prog.exe", "bar", "baz");

    does not work. Thanks!

    Read the article
