Search Results

Search found 8391 results on 336 pages for 'partial hash arguments'.

Page 67/336 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Windows is not passing command line arguments to Python programs executed from the shell.

    - by mckoss
    I'm having trouble getting command line arguments passed to Python programs if I try to execute them directly as executable commands from a Windows command shell. For example, if I have this program (test.py):

        import sys
        print "Args: %r" % sys.argv[1:]

    and execute:

        >test foo
        Args: []

    as compared to:

        >python test.py foo
        Args: ['foo']

    My configuration has:

        PATH=...;C:\python25;...
        PATHEXT=...;.PY;....

        >assoc .py
        .py=Python.File

        >ftype | grep Python
        Python.CompiledFile="C:\Python25\python.exe" "%1" %*
        Python.File="C:\Python25\python.exe" "%1" %*
        Python.NoConFile="C:\Python25\pythonw.exe" "%1" %*

    Read the article

  • Does the Java JFC hash table use separate chaining resolution? Can I traverse each list in the table?

    - by Matt
    I have written a program to store a bunch of strings in a JFC hash table. There are definitely collisions going on, but I don't really know how it is handling them. My ultimate goal is to print the number of occurrences of each string in the table, and traversing each bucket or list would work nicely. Or maybe counting the collisions? Or do you have another idea of how I could get a count of the elements?
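
    One collision-independent way to get the counts is to keep a map from each string to how many times it has been seen, rather than inspecting buckets; in Java that would be a HashMap<String, Integer>. A minimal sketch of the idea in Python (the word list here is made up for illustration):

        from collections import Counter

        # Stand-in data; in the question these would be the strings stored in the hash table.
        words = ["apple", "pear", "apple", "plum", "pear", "apple"]

        # Counting occurrences directly avoids caring how the table resolves collisions.
        counts = Counter(words)
        for word, count in counts.items():
            print(word, count)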

    Read the article

  • I get the warning "Format not a string literal and no format arguments" at NSLog -- how can I correct this?

    - by dsobol
    Hello, I get the warning "Format not a string literal and no format arguments" on the NSLog call in the following block:

        - (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex {
            NSLog([NSString stringWithFormat:@"%d", buttonIndex]);
        }

    I have read in another post here that this warning indicates an insecure use of NSLog. Could someone point me in the direction of a properly formatted string for this? Thanks for any and all assistance! Regards, Steve O'Sullivan

    Read the article

  • Simple Prolog program. Getting error: >/2: Arguments are not sufficiently instantiated

    - by user1279812
    I made a Prolog predicate posAt(List1,P,List2) that tests whether the elements at position P of List1 and List2 are equal:

        posAt([X|Z],1,[Y|W]) :- X=Y.
        posAt([Z|X],K,[W|Y]) :- K > 1, Kr is K - 1, posAt(X,Kr,Y).

    When testing:

        ?- posAt([1,2,3],X,[a,2,b]).

    I expected an output of X = 2 but instead I got the following error:

        ERROR: >/2: Arguments are not sufficiently instantiated

    Why am I getting this error?

    Read the article

  • PHP get all function arguments as $key => $value array?

    - by web lover
        <?php
        function register_template(){
            print_r(func_get_args()); # the result was an array ( [0] => my template [1] => screenshot.png [2] => nice template .. )
        }

        register_template( # unknown number of arguments
            $name = "my template",
            $screenshot = "screenshot.png",
            $description = "nice template .. "
        );
        ?>

    But I want the result array in $key => $value form, where $key represents the parameter name.
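
    For what it's worth, the "$key => $value" shape being asked for is exactly what keyword arguments give you in some other languages, whereas func_get_args() only returns positional values. A purely illustrative sketch in Python (not PHP; the parameter names are just the ones from the question):

        def register_template(**kwargs):
            # kwargs maps parameter names to values, i.e. the key => value form.
            for name, value in kwargs.items():
                print(name, "=>", value)

        register_template(
            name="my template",
            screenshot="screenshot.png",
            description="nice template .. ",
        )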

    Read the article

  • Look-up and string operation: fetch a value based on searching a partial string

    - by Sam
    I have 2 sets of data. One set is to be filled in by fetching relevant data from a data array.

    DATA TO BE FILLED:

        Part#1     Part#2
        -------    -------
        4021006
        3808587
        3870480
        3083410
        3873905
        3890030
        4002065
        3699803
        3930218

    ARRAY OF DATA:

        Part#1                     Part#2
        -------                    -------
        4021006;3808587            1
        3808587                    2
        3870480;3083410;4002065    3
        3083410                    34
        3873905                    54
        3890030                    32
        4002065;3930218            65
        3699803                    75
        3930218                    68

    I need to match Part#1 and find Part#2.

    EXPECTED OUTPUT:

        Part#1     Part#2
        -------    -------
        4021006    1
        3808587    1;2
        3870480    3
        3083410    3;4
        3873905    54
        3890030    32
        4002065    3;65
        3699803    75
        3930218    65;68

    Can anyone help?
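
    The matching itself is straightforward once each Part#1 cell is split on ';'. A rough sketch of the lookup logic in Python, using the sample data from the question (an illustration of the approach, not a spreadsheet formula):

        from collections import defaultdict

        array_of_data = [
            ("4021006;3808587", "1"), ("3808587", "2"),
            ("3870480;3083410;4002065", "3"), ("3083410", "34"),
            ("3873905", "54"), ("3890030", "32"),
            ("4002065;3930218", "65"), ("3699803", "75"), ("3930218", "68"),
        ]

        # Record each Part#2 value against every part number in its Part#1 cell.
        lookup = defaultdict(list)
        for part1_cell, part2 in array_of_data:
            for part in part1_cell.split(";"):
                lookup[part].append(part2)

        for part in ["4021006", "3808587", "3870480", "3083410", "3873905",
                     "3890030", "4002065", "3699803", "3930218"]:
            print(part, ";".join(lookup[part]))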

    Read the article

  • How to dynamically reference a "partial template" in MS Word?

    - by scunliffe
    I want to make use of an "external reference" in Word (for anyone who knows AutoCAD: I want XREF abilities in Word). Essentially I have a custom "header" that I want included in a whole pile of documents... that all reference a single file... such that if my address, logo, tagline, phone, fax or email changes, I update the one file, and all of the other 101 files that use it automatically update when I next open/use them. I'm using Office 2007 if that makes any difference.

    Read the article

  • FBReader: How do I get it to display partial images?

    - by someguy
    I've been using FBReader so far for reading e-books, but my problem with it is how it handles images. Whenever there's an image, it isn't displayed until all of it can be displayed (i.e. it won't show if you haven't scrolled down far enough) and scrolling isn't smooth because the image is treated as one line. Is there a way to change this so that it displays the partially visible images? If there is not a way to change this, I am open to software suggestions to use as an alternative (I'm using Linux). I have already tried Calibre, but I find that it's too bloated. It takes ages to start up, and I don't want a directory for all my e-books...

    Read the article

  • Reload jQuery when returning a partial view from a controller?

    - by mattruma
    I am making the following call in my web page:

        <div id="comments">
            <fieldset>
                <h4>Post your comment</h4>
                <% using (this.Ajax.BeginForm("CreateStoryComment", "Story",
                       new { id = story.StoryId },
                       new AjaxOptions { UpdateTargetId = "comments", OnSuccess = "OnStoryCommentAdded" })) { %>
                    <%= this.Html.TextArea("Body", string.Empty)%>
                    <input type="submit" value="Add Comment" />
                <% } %>
            </fieldset>
        </div>

    There's other code, but that's the gist of it. The controller returns a partial view that "refreshes" everything in the comments div. My problem is that the following jQuery is not being applied:

        $(".comment .delete").click(function () {
            if (confirm("Are you sure you want to delete this record?") == true) {
                $.post(this.href);
                $(this).parents(".comment").fadeOut("normal");
            }
            return false;
        });

    I'm assuming it's not being attached because the jQuery loads after the initial page load. If my assumption is correct, how do I get this jQuery to "refresh"? Hopefully that makes some sense! :)

    Read the article

  • How to deal with "partial" dates (2010-00-00) from MySQL in Django?

    - by Etienne
    In one of my Django projects that uses MySQL as the database, I need date fields that also accept "partial" dates: only a year (YYYY), a year and month (YYYY-MM), plus normal dates (YYYY-MM-DD). The date field in MySQL can deal with that by accepting 00 for the month and the day. So 2010-00-00 is valid in MySQL and it represents 2010. Same thing for 2010-05-00, which represents May 2010.

    So I started to create a PartialDateField to support this feature. But I hit a wall because, by default (and Django uses the default), MySQLdb, the Python driver for MySQL, returns a datetime.date object for a date field, and datetime.date() supports only real dates. It is possible to modify the converter for the date field used by MySQLdb and return only a string in the format 'YYYY-MM-DD'. Unfortunately the converter used by MySQLdb is set at the connection level, so it is used for all MySQL date fields. But Django's DateField relies on the fact that the database returns a datetime.date object, so if I change the converter to return a string, Django is not happy at all.

    Does someone have an idea or advice to solve this problem? How would you create a PartialDateField in Django?

    EDIT: I should also add that I have already thought of 2 solutions: create 3 integer fields for year, month and day (as mentioned by Alison R.), or use a varchar field to keep the date as a string in the format YYYY-MM-DD. But in both solutions, if I'm not wrong, I will lose the special properties of a date field, like running queries of this kind on them: get all entries after this date. I can probably re-implement this functionality on the client side, but that will not be a valid solution in my case because the database can be queried from other systems (mysql client, MS Access, etc.)
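
    For illustration only, here is a minimal way to model such partial values in plain Python, keeping the MySQL zero convention ('2010-00-00' for just a year, '2010-05-00' for a year and month). This is not a Django field; a custom PartialDateField would still have to wrap a representation along these lines:

        from datetime import date

        class PartialDate:
            """A date whose month and/or day may be 0, i.e. unknown."""

            def __init__(self, year, month=0, day=0):
                self.year, self.month, self.day = year, month, day

            @classmethod
            def from_string(cls, text):
                # Accepts 'YYYY-MM-DD' with 00 for the missing parts, e.g. '2010-05-00'.
                year, month, day = (int(part) for part in text.split("-"))
                return cls(year, month, day)

            def __str__(self):
                return "%04d-%02d-%02d" % (self.year, self.month, self.day)

            def as_date(self):
                # Only meaningful when the date is complete.
                if self.month and self.day:
                    return date(self.year, self.month, self.day)
                raise ValueError("partial date has no exact day")

        print(PartialDate.from_string("2010-05-00"))   # 2010-05-00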

    Read the article

  • Possible to rank partial matches in Postgres full text search?

    - by Joe
    I'm trying to calculate a ts_rank for a full-text match where some of the terms in the query may not be in the ts_vector against which it is being matched. I would like the rank to be higher in a match where more words match. Seems pretty simple?

    Because not all of the terms have to match, I have to | the operands, to give a query such as to_tsquery('one|two|three') (if it was &, all would have to match). The problem is, the rank value seems to be the same no matter how many words match. In other words, it's maxing rather than multiplying the clauses.

        select ts_rank('one two three'::tsvector, to_tsquery('one'));

    gives 0.0607927.

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three|four'));

    gives the expected lower value of 0.0455945 because 'four' is not in the vector. But

        select ts_rank('one two three'::tsvector, to_tsquery('one|two'));

    gives 0.0607927, and likewise

        select ts_rank('one two three'::tsvector, to_tsquery('one|two|three'));

    gives 0.0607927. I would like the result of ts_rank to be higher if more terms match. Possible?

    To counter one possible response: I cannot calculate all possible subsequences of the search query as intersections and then union them all in a query, because I am going to be working with large queries. I'm sure there are plenty of arguments against this anyway!

    Edit: I'm aware of ts_rank_cd but it does not solve the above problem.
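
    Just to pin down the behaviour being asked for (this is not a Postgres feature, only a toy Python sketch): a rank that grows with the number of query terms actually present in the document is essentially a coverage fraction, which is what ts_rank with OR'ed terms is not producing here:

        def coverage_rank(query_terms, document_terms):
            # Fraction of the query terms that appear in the document.
            matched = set(query_terms) & set(document_terms)
            return len(matched) / len(query_terms)

        query = ["one", "two", "three", "four"]
        print(coverage_rank(query, ["one"]))                    # 0.25
        print(coverage_rank(query, ["one", "two"]))             # 0.5
        print(coverage_rank(query, ["one", "two", "three"]))    # 0.75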

    Read the article

  • Why is partial specialization of a nested class template allowed, while complete specialization isn't?

    - by drhirsch
        template<int x> struct A {
            template<int y> struct B {};
            template<int y, int unused> struct C {};
        };

        template<int x> template<>
        struct A<x>::B<x> {};   // error: enclosing class templates are not explicitly specialized

        template<int x> template<int unused>
        struct A<x>::C<x, unused> {};   // ok

    So why is the explicit specialization of an inner, nested class (or function) not allowed if the outer class isn't specialized too? Strangely enough, I can work around this behaviour if I only partially specialize the inner class by simply adding a dummy template parameter. It makes things uglier and more complex, but it works.

    Note: I need this feature for recursive templates of the inner class for a set of the outer class. To make things even more complicated, in reality I only need a template function instead of the inner class. But partial specialization of functions is generally disallowed somewhere else in the standard ^^

    Read the article

  • Overlapping template partial specialization when wanting an "override" case: how to avoid the error?

    - by user173342
    I'm dealing with a pretty simple template struct that has an enum value set by whether its 2 template parameters are the same type or not:

        template<typename T, typename U> struct is_same { enum { value = 0 }; };
        template<typename T> struct is_same<T, T> { enum { value = 1 }; };

    This is part of a library (Eigen), so I can't alter this design without breaking it. When value == 0, a static assert aborts compilation. So I have a special numerical templated class SpecialCase that can do ops with different specializations of itself. So I set up an override like this:

        template<typename T> struct SpecialCase { ... };

        template<typename LT, typename RT>
        struct is_same<SpecialCase<LT>, SpecialCase<RT>> { enum { value = 1 }; };

    However, this throws the error:

        more than one partial specialization matches the template argument list

    Now, I understand why. It's the case where LT == RT, which steps on the toes of is_same<T, T>. What I don't know is how to keep my SpecialCase override and get rid of the error. Is there a trick to get around this?

    edit: To clarify, I need all cases where LT != RT to also be considered the same (have value 1). Not just LT == RT.

    Read the article

  • How to do a partial page refresh using the struts2-jquery plugin in Struts2?

    - by user1703710
    I want to do a partial page refresh with the help of this plugin. Take a scenario: we have a dropdown list, and according to the selected option I want to refresh a div section of the page with data populated according to that selection. How can I do this? I have tried this:

    JSP code (on this dropdown selection I want to populate/refresh the div):

        <s:form id="RoleListForm">
            <s:label value="Roles"/>
            <s:url id="fetchJsonRoleListUrl" action="fetchJsonRoleList" namespace="/RolesPrivilegesJson"/>
            <sj:select name="idRoleInfo" id="idRoleInfoList" href="%{fetchJsonRoleListUrl}"
                       list="roleNameList" onChangeTopics="reloadRolePrivilegesDiv"
                       listKey="idRoleInfo" listValue="roleName" emptyOption="true"/>
        </s:form>

    Here is the div code that I want to populate according to the dropdown selection:

        <s:url id="roleDetailsUrl" action="roleDetailsAction" />
        <sj:div href="%{roleDetailsUrl}" formIds="RoleListForm" reloadTopics="reloadRolePrivilegesDiv">
            <s:textfield id="idRoleName" name="roleName" />
            <s:textfield id="idRolePrivileges" name="privileges"/>
        </sj:div>

    Read the article

  • How can I cache a partial retrieved through link_to_remote in Rails?

    - by Angela
    I use link_to_remote to pull the dynamic contents of a partial onto the page. It's basically a long row from a database of "todo's" for the day. So as the person goes through and checks them off, I want to show the updated list with a line through those that are finished. Currently, I end up needing to click on the link_to_remote again after an item is ticked off, but I would like it to redirect back to the "cached" page of to-do's, with the one item lined through. How do I do that? Here is the main view:

        <% @campaigns.each do |campaign| %>
          <!--link_to_remote(name, options = {}, html_options = nil)-->
          <tr><td>
            <%= link_to_remote(campaign.name,
                  :update => "campaign_todo",
                  :url => { :action => "campaign_todo", :id => campaign.id }) %>
          </td></tr>
        <% end %>
        </table>

        <div id="campaign_todo">
        </div>

    I'd like, when the New/Create actions are done, to go back to the page that redirected there.

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values / predicates, and hence a different amount of memory can be estimated for each set of parameter values / predicates. Common memory allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or other plan caching mechanisms like sp_executesql or prepared statements, SQL Server estimates the memory requirement based on the first set of execution parameters. Later, when the same stored procedure is called with a different set of parameter values, the same amount of memory is used to execute the stored procedure. This might lead to underestimation / overestimation of memory on plan reuse. Overestimation of memory might not be a noticeable issue for Sort operations, but underestimation of memory will lead to spills over tempdb, resulting in poor performance.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over tempdb, unless the memory requirement for a stored procedure does not change significantly based on predicates.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory, let's see an example where we sort initially 1 month of data and then use the stored procedure to sort 6 months of data.

    Let's create a stored procedure that sorts customers by name within a certain date range.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 48 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated.
    The estimated number of rows, 43199.9, is similar to the actual number of rows, 43200, and hence the memory estimation should be ok. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 679 ms to complete. The stored procedure was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is way different from the actual number of rows, 259200, because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 1 month in our case. This underestimation leads to a sort spill over tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

    To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then first execute the stored procedure with the 6 month date range. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts for further details.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    Now the stored procedure took only 294 ms instead of 679 ms. The stored procedure was granted 26832 KB of memory. The estimated number of rows, 259200, is similar to the actual number of rows of 259200. The better performance of this stored procedure is due to better estimation of memory and avoiding the sort spill over tempdb. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 1 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 49 ms to complete, similar to our very first stored procedure execution. This stored procedure was granted more memory (26832 KB) than the necessary memory (6656 KB), based on the 6 month data estimation (259200 rows) instead of the 1 month data estimation (43199.9 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 6 months in this case. This overestimation did not affect performance, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. This overestimation might affect the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then first execute the stored procedure with a 2 day date range.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-02'
        go

    The stored procedure took 1 ms. The stored procedure was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 955 ms to complete, way higher than the 679 ms or 294 ms we noticed before. The stored procedure was granted 1024 KB based on 1440 rows being estimated. But we noticed in the past that this stored procedure with a 6 month date range needed 26832 KB of memory to execute optimally without a spill over tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler. Unlike before, this was a Multiple pass sort instead of a Single pass sort. This occurs when the granted memory is too low.

    Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. The other possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has good estimation like before. The stored procedure with the 6 month date range also has good estimation and memory grant like before, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint of a 6 month date range.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo ='2001-06-30'))
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with previous optimal execution times. The stored procedure with the 1 month date range has overestimation of rows and memory. This is because we provided a hint to optimize for 6 months of data. The stored procedure with the 6 month date range has good estimation and memory grant because we provided a hint to optimize for 6 months of data.
    Let's execute the stored procedure with a 12 month date range using the currently cached plan for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-12-31'
        go

    The stored procedure took 1138 ms to complete. 259200 rows were estimated based on the optimize for hint value for the 6 month date range. The actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range and not the 12 month date range, so there will be a spill over tempdb. There were Sort Warnings in SQL Profiler.

    As we see above, the optimize for hint cannot guarantee enough memory and optimal performance compared to the recompile hint.

    This article covers underestimation / overestimation of memory for Sort. Plan Caching and Query Memory Part II covers underestimation / overestimation for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling the sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but in case one sorts more data than hinted by the optimize for hint, this will still lead to a spill. On the other hand, there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint have been archived out of the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
    This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan
    [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article
