Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • What ways are there to edit a function in R?

    - by Tal Galili
    Let's say we have the following function: foo <- function(x) { line1 <- x line2 <- 0 line3 <- line1 + line2 return(line3) } And say that we want to change the second line to be: line2 <- 2 How would you do that? One way is to use fix(foo) and change the function in the editor. Another way is to just write the function out again. Is there another way? (Remember, the task is to change just the second line.) One programmatic alternative is sketched below the link.

    Read the article
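
    A minimal sketch of one programmatic alternative, using base R's body() to replace just the second statement (nothing assumed beyond base R):

        foo <- function(x) {
          line1 <- x
          line2 <- 0
          line3 <- line1 + line2
          return(line3)
        }

        # body(foo) is a `{` call; element [[3]] is the second statement inside the braces
        body(foo)[[3]] <- quote(line2 <- 2)

        foo(1)  # now returns 3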

  • PHP - Use isset inside function not working..?

    - by pnichols
    I have a PHP script that, when loaded, first checks whether it was loaded via a POST and, if not, whether $_GET['id'] is a number. Now I know I could do it like this: if(isset($_GET['id']) AND isNum($_GET['id'])) { ... } function isNum($data) { $data = sanitize($data); if ( ctype_digit($data) ) { return true; } else { return false; } } But I would like to do it this way: if(isNum($_GET['id'])) { ... } function isNum($data) { if ( isset($data) ) { $data = sanitize($data); if ( ctype_digit($data) ) { return true; } else { return false; } } else { return false; } } When I try it this way and $_GET['id'] isn't set, I get a warning of undefined index: id... As soon as I put $_GET['id'] inside the function call it raises the warning, even though the function itself checks whether that variable is set. Is there another way to do what I want, or am I forced to always check isset() first and then add my other requirements? (One workaround is sketched below the link.)

    Read the article
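
    A possible workaround: isset() only suppresses the notice when it is applied directly to the array index, so pass the array and the key into the helper instead of the (possibly missing) value. The asker's sanitize() helper is omitted here for brevity.

        <?php
        // $_GET itself always exists, so no "undefined index" notice is raised
        // at the call site; the index is checked inside the helper instead.
        function isNumParam(array $params, $key) {
            return isset($params[$key]) && ctype_digit((string) $params[$key]);
        }

        if (isNumParam($_GET, 'id')) {
            // ... safe to use $_GET['id'] here
        }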

  • Function callers in C?

    - by thyrgle
    I was wondering how these work. A good example of what I mean by a "function caller" is glutTimerFunc: it is able to take a function as a parameter and call it, even though it knows nothing about where that function was declared. How does it do this? (A sketch of the mechanism follows below the link.)

    Read the article
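
    What glutTimerFunc takes is a function pointer. A rough, self-contained sketch of the idea (not GLUT's actual implementation):

        #include <stdio.h>

        /* The "caller" knows only the signature, not which function it will receive. */
        typedef void (*timer_func)(int value);

        static void run_timer(timer_func callback, int value) {
            /* a real library would wait for the timeout here, then: */
            callback(value);
        }

        static void my_handler(int value) {
            printf("timer fired, value = %d\n", value);
        }

        int main(void) {
            run_timer(my_handler, 42);  /* pass the function itself, as with glutTimerFunc */
            return 0;
        }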

  • When parsing XML with jquery, how can we pass the current node from within each() to another function?

    - by Johusa
    Say we have an XML document with many book nodes... When parsing XML with jQuery, how can I pass the current node from the each() iteration to another function that does some work until something is reached, and then go back to the previous function (passing the current node from that function back to the first one)? Here is something more descriptive (this is just an example off the top of my head, not accurate): function MyParser(x1,x2,dom) { // if i am called by anotherFunction(thisNode) proceed from the passed node dom.find('book').each(function() { var Letter = thisNode.find(author).charAt(0); if(x1 == Letter) { // print everything till the next letter (x2) anotherFunction(thisNode) } } } function anotherFunction(x2,thisNode) { //continue parsing here until you reached x2 //when x2 is reached, return to previous function passing again the current node } (A working sketch follows below the link.)

    Read the article
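
    A rough sketch of passing the current node out of each() and back, using $(this) inside the callback; the element and attribute names are assumptions taken from the question, not a tested parser:

        function myParser(x1, x2, $dom) {
          $dom.find('book').each(function () {
            var $node = $(this);                                  // the current <book>
            var letter = $node.find('author').text().charAt(0);
            if (letter === x1) {
              $node = anotherFunction(x2, $node);                 // hand the node over, get it back
            }
          });
        }

        function anotherFunction(x2, $node) {
          // keep walking forward from the node that was passed in
          while ($node.length && $node.find('author').text().charAt(0) !== x2) {
            // ...print or collect $node here...
            $node = $node.next('book');
          }
          return $node;  // give the current node back to the caller
        }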

  • class T in c++ (your definition)

    - by JohnWong
    The one advantage of using class T in C++ is to reduce the work of redefining data types in a function when those types are decided in another function, for example in int main. template <class T> void showabs(T number) { if (number < 0 ) number = -number; cout << number << endl; return 0; } int main() { int num1 = -4; float num2 = -4.23f; showabs(num1); showabs(num2); return 0; } So in this case, without class T, for each data type we would have to add its corresponding data-type condition, that is, another set of if statements for int and another one for float. Am I correct? (A corrected, compilable version of this sketch follows below the link.)

    Read the article
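
    For reference, a corrected, compilable version of the question's sketch (the original returns 0 from a void function and omits the include); one template serves both int and float. Without the template one would normally write an overload per type rather than extra if statements, but the duplication the asker describes is real.

        #include <iostream>

        template <class T>
        void showabs(T number) {
            if (number < 0)
                number = -number;
            std::cout << number << std::endl;
        }

        int main() {
            int num1 = -4;
            float num2 = -4.23f;
            showabs(num1);   // instantiates showabs<int>
            showabs(num2);   // instantiates showabs<float>
            return 0;
        }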

  • Can I use a method as a lambda?

    - by NewAlexandria
    I have an interface that defines a group of conditions. It is one of several such interfaces that will live with other models. These conditions will be called by a message queue handler to determine completeness of an alert. All the alert calls will be the same, so I am trying to DRY up the enqueue calls a bit by abstracting the conditions into their own methods (I question whether methods are the right technique). I think that by doing this I will be able to test each of these conditions. class Loan module AlertTriggers def self.included(base) base.extend LifecycleScopeEnqueues # this isn't right Loan::AlertTriggers::LifecycleScopeEnqueues.instance_method.each do |cond| class << self def self.cond ::AlertHandler.enqueue_alerts( {:trigger => Loan.new}, cond ) end end end end end module LifecycleScopeEnqueues def student_awaiting_cosigner lambda { |interval, send_limit, excluding| excluding ||= '' Loan.awaiting_cosigner. where('loans.id not in (?)', excluding.map(&:id) ). joins(:petitions). where('petitions.updated_at > ?', interval.days.ago). where('petitions.updated_at <= ?', send_limit.days.ago) } end end I've considered alternatives where each of these methods acts like a scope. Down that road, I'm not sure how to have AlertHandler be the source of interval, send_limit, and excluding, which it passes to the block/proc when calling it. (A plain-Ruby sketch of one approach follows below the link.)

    Read the article
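
    A minimal plain-Ruby sketch (no Rails) of one way to generate a class-level trigger per condition method while keeping each condition a testable lambda. AlertHandler is only named in the question, so it is stubbed here; the enqueue_* method names are invented for the example.

        # Stub standing in for the real AlertHandler named in the question.
        class AlertHandler
          def self.enqueue_alerts(trigger, condition)
            puts "enqueue #{trigger.inspect} with #{condition.inspect}"
          end
        end

        module LifecycleScopeEnqueues
          def student_awaiting_cosigner
            # the condition stays a lambda, so it can be unit-tested in isolation
            ->(interval, send_limit, excluding) { [interval, send_limit, excluding] }
          end
        end

        class Loan
          extend LifecycleScopeEnqueues

          # one class-level trigger per condition, instead of reopening the singleton by hand
          LifecycleScopeEnqueues.instance_methods(false).each do |cond|
            define_singleton_method("enqueue_#{cond}") do
              AlertHandler.enqueue_alerts({ trigger: Loan.new }, public_send(cond))
            end
          end
        end

        Loan.enqueue_student_awaiting_cosigner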

  • Count, inner join

    - by Urosh
    I have two tables: DRIVER (Driver_Id, First name, Last name, ...) and PARTICIPANT IN CAR ACCIDENT (Participant_Id, Driver_Id - foreign key, responsibility - yes or no, ...). Now I need to find out which drivers participated in accidents where responsibility is 'YES', and how many times. I did this: Select Driver_ID, COUNT (Participant.Driver_ID) as 'Number of accidents' from Participant in car accident where responsibility='YES' group by Driver_ID order by COUNT (Participant.Driver_ID) desc But I also need to add the driver's first and last name from the first table (using an inner join, I suppose). I don't know how, because those columns are not contained in either an aggregate function or the GROUP BY clause. Please help :) (One possible query is sketched below the link.)

    Read the article
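
    One possible query, joining back to the driver table and grouping by the name columns as well; the table and column names are taken from the question and will need adjusting (for example, bracket-quoting if they really contain spaces):

        SELECT d.Driver_Id,
               d.First_name,
               d.Last_name,
               COUNT(*) AS Number_of_accidents
        FROM   Participant_in_car_accident AS p
               INNER JOIN Driver AS d ON d.Driver_Id = p.Driver_Id
        WHERE  p.responsibility = 'YES'
        GROUP  BY d.Driver_Id, d.First_name, d.Last_name
        ORDER  BY COUNT(*) DESC;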

  • What are the useful UNIX functions that MS doesn't implement? And why? [closed]

    - by prosseek
    When programming with Python, I came across some functions that are not implemented on Windows; os.fork() is one of them. UNIX came before WinNT, so the WinNT developers (most notably Dave Cutler) must have known about the features and functions of UNIX. But, to me, it seems that MS didn't like UNIX so much that they mistakenly/intentionally skipped or distorted some of the useful UNIX functions/features; /abc/def in UNIX versus \abc\def in Windows is an easy example. And when I read the Windows System Programming book, I felt uncomfortable, as the Windows system functions seem like nothing more than a tweak of UNIX. (I might be wrong.) What are the functions/features that MS OSes don't have but that UNIX originated? Is there any reason for this? Did they just want to differentiate themselves from the UNIX world, or did they think some of the UNIX functions were unnecessary? Is Windows a tweak of UNIX? Or are there any great OS features that were invented at MS that make Windows better than UNIX? (A small illustration of the os.fork() point follows below the link.)

    Read the article
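
    As a concrete illustration of the os.fork() example: a small Python check that falls back to the cross-platform multiprocessing module when fork is unavailable, which is the case on Windows.

        import os
        import multiprocessing

        def work():
            print("child working, pid", os.getpid())

        if __name__ == "__main__":
            if hasattr(os, "fork"):            # POSIX only; not present on Windows
                pid = os.fork()
                if pid == 0:
                    work()
                    os._exit(0)
                os.waitpid(pid, 0)
            else:                              # portable alternative
                p = multiprocessing.Process(target=work)
                p.start()
                p.join()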

  • Python - calculate multinomial probability density functions on large dataset?

    - by Seafoid
    Hi, I originally intended to use MATLAB to tackle this problem, but the built-in functions have limitations that do not suit my goal. The same limitation occurs in NumPy. I have two tab-delimited files. The first shows amino acid residue, frequency and count for an in-house database of protein structures, i.e. A 0.25 1 S 0.25 1 T 0.25 1 P 0.25 1 The second file consists of quadruplets of amino acids and the number of times they occur, i.e. ASTP 1 Note, there are 8,000 such quadruplets. Based on the background frequency of occurrence of each amino acid and the count of quadruplets, I aim to calculate the multinomial probability density function for each quadruplet and subsequently use it as the expected value in a maximum likelihood calculation. The multinomial distribution is as follows: f(x|n, p) = n!/(x1!*x2!*...*xk!)*((p1^x1)*(p2^x2)*...*(pk^xk)) where x is the number of each of k outcomes in n trials with fixed probabilities p. n is 4 in all cases in my calculation. I have created three functions to calculate this distribution. # functions for multinomial distribution def expected_quadruplets(x, y): expected = x*y return expected # calculates the probabilities of occurence raised to the number of occurrences def prod_prob(p1, a, p2, b, p3, c, p4, d): prob_prod = (pow(p1, a))*(pow(p2, b))*(pow(p3, c))*(pow(p4, d)) return prob_prod # factorial() and multinomial_coefficient() work in tandem to calculate C, the multinomial coefficient def factorial(n): if n <= 1: return 1 return n*factorial(n-1) def multinomial_coefficient(a, b, c, d): n = 24.0 multi_coeff = (n/(factorial(a) * factorial(b) * factorial(c) * factorial(d))) return multi_coeff The problem is how best to structure the data in order to tackle the calculation most efficiently, in a manner that I can read (you guys write some cryptic code :-)) and that will not create an overflow or runtime error. To date my data is represented as nested lists: amino_acids = [['A', '0.25', '1'], ['S', '0.25', '1'], ['T', '0.25', '1'], ['P', '0.25', '1']] quadruplets = [['ASTP', '1']] I initially intended calling these functions within a nested for loop, but this resulted in runtime or overflow errors. I know that I can reset the recursion limit, but I would rather do this more elegantly. I had the following: for i in quadruplets: quad = i[0].split(' ') for j in amino_acids: for k in quadruplets: for v in k: if j[0] == v: multinomial_coefficient(int(j[2]), int(j[2]), int(j[2]), int(j[2])) I haven't really gotten to how to incorporate the other functions yet. I think that my current nested list arrangement is suboptimal. I wish to compare each letter within the string 'ASTP' with the first component of each sub-list in amino_acids. Where a match exists, I wish to pass the appropriate numeric values to the functions using indices. Is there a better way? Can I append the appropriate numbers for each amino acid and quadruplet to a temporary data structure within a loop, pass this to the functions and clear it for the next iteration? Thanks, S :-) (One way to restructure this is sketched below the link.)

    Read the article
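
    A compact sketch of one way to restructure the data (dictionaries keyed by residue, math.factorial instead of hand-rolled recursion), which avoids the recursion-depth problem. Note that n should be 4, the number of trials; 4! = 24 only appears inside the coefficient. The final expected-value step is hedged with a comment because the question leaves it open.

        import math
        from collections import Counter

        # background frequencies, keyed by residue (from the first file)
        freqs = {'A': 0.25, 'S': 0.25, 'T': 0.25, 'P': 0.25}

        # quadruplets and their observed counts (from the second file)
        quadruplets = [('ASTP', 1)]

        def multinomial_pmf(quad, freqs):
            """Probability of this 4-letter composition under the background frequencies."""
            counts = Counter(quad)                # e.g. 'AATP' -> {'A': 2, 'T': 1, 'P': 1}
            coeff = math.factorial(len(quad))     # n! with n = 4
            prob = 1.0
            for residue, x in counts.items():
                coeff //= math.factorial(x)
                prob *= freqs[residue] ** x
            return coeff * prob

        for quad, observed in quadruplets:
            expected = multinomial_pmf(quad, freqs) * observed   # or scale by total trials, as appropriate
            print(quad, observed, expected)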

  • How to include a PHP generated XML file into flash vars, while ALSO passing through the current php functions into it?

    - by Sam
    Hello. Given situation: in webpage.php a script embeds a Flash movie and passes it a flashvar, the playlist file, which is a PHP-generated XML file: playlist.php. That works well so long as there are no extra functions in there. Now, in that XML-format playlist file there needs to be a special function besides the usual echo("");, namely the echo __(""); function that is already declared in webpage.php, which needs to do something with the paragraphs residing within that XML file. However, currently the retrieved file misses the echo __(); function and reports "no such function declared" in that XML-format [playlist.php] file. The PHP functions that are included at the very top of webpage.php somehow do not pass the necessary functions through into the playlist file, so the playlist never gets those functions to work with. Apparently they are not passed through automatically/properly when the file is referenced in the flashvars? The echo __(""); call works fine when used within webpage.php, or via a normal PHP include(""); when those functions are in a different PHP file, but it does not work from the playlist.php file. Any ideas why / what is going on here? I appreciate your clues for this prob +1. Thanks very much. (A likely fix is sketched below the link.) WEBPAGE.PHP — the webpage, which has at the top an include with functions: <?php include (functions.php); ?> // function that know what to do with echo __("paragraph") <script language="JavaScript" type="text/javascript"> run( 'play', 'true', 'loop', 'true', 'flashvars', 'xmlFile=/incl/playlist.php', // <<<< !! 'wmode', 'transparent', 'allowScriptAccess','sameDomain', ); </script> <noscript> <object classid="blabla"> <param name="allowScriptAccess" value="sameDomain" /> <param name="movie" value="/movies/movie.swf" /> <param name="flashvars" value="xmlFile=/incl/playlist.php" /> // <<< !! <embed src="/movies/movies.swf" type="application/x-shockwave-flash"/> </object> </noscript> PLAYLIST.PHP — the PHP-generated XML file which is retrieved into the webpage as a flash variable (see above): <?php echo ('<?xml version="1.0" encoding="UTF-8"?>'); echo ('<songs>'); echo ('<song version="1. "') . __("boom blue blow bell bowl") . ('/>'); echo ('<song version="2. "') . __("ball bail beam bike base") . ('/>'); echo ('</songs>'); ?>

    Read the article
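
    Worth noting: playlist.php is fetched by the Flash player in a separate HTTP request, so it never sees the functions included by webpage.php. A likely fix, sketched here, is simply to include them again at the top of playlist.php; the file name functions.php is taken from the question and assumed to define __().

        <?php
        require_once 'functions.php';   // makes __() available in this request too

        echo '<?xml version="1.0" encoding="UTF-8"?>';
        echo '<songs>';
        echo '<song version="1. "' . __("boom blue blow bell bowl") . '/>';
        echo '<song version="2. "' . __("ball bail beam bike base") . '/>';
        echo '</songs>';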

  • How is dependency inversion related to higher order functions?

    - by Gulshan
    Today I saw this article, which describes the relevance of the SOLID principles in F# development: F# and Design principles – SOLID. While addressing the last one, the "Dependency inversion principle", the author says: From a functional point of view, these containers and injection concepts can be solved with a simple higher order function, or hole-in-the-middle type pattern which are built right into the language. But he doesn't explain it further. So my question is: how is dependency inversion related to higher-order functions? (A tiny illustration follows below the link.)

    Read the article
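
    A tiny F# illustration of the point (not taken from the article): the dependency is inverted simply by accepting it as a function argument, so no container is needed. The names below are made up for the example.

        // The high-level code depends only on a function signature, not on a concrete type.
        let greetUser (fetchUser: int -> string) (id: int) =
            sprintf "Hello, %s" (fetchUser id)

        // Production wiring: pass the real lookup.
        let fetchFromDb (id: int) = "user-" + string id
        let greet = greetUser fetchFromDb

        // Test wiring: pass a stub instead of configuring a container.
        let greetWithStub = greetUser (fun _ -> "stub-user")

        printfn "%s" (greet 7)
        printfn "%s" (greetWithStub 7)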

  • How to use javascript functions, defined in some external *.js file in browser's javascript console?

    - by Dmytro Tsiniavsky
    I would like to know whether it is possible to save a file, for example simplemath.js, containing the simple function function ADD(a, b) { return a + b; }, then open Opera's or some other browser's JavaScript console, include this (simplemath.js) file somehow, call ADD(2, 5), and get the result in the console, or execute JavaScript code on the current web page and manipulate its content. How can I do that? How can I use JavaScript functions from external files in a web browser's JavaScript console? (One common approach is sketched below the link.)

    Read the article
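
    One common approach, assuming the file is reachable over HTTP (the URL below is a placeholder): inject a script tag from the console, then call the function once it has loaded.

        // paste into the browser's JavaScript console
        var s = document.createElement('script');
        s.src = 'http://example.com/simplemath.js';   // placeholder URL for your file
        s.onload = function () {
          console.log(ADD(2, 5));                     // 7, once the script has loaded
        };
        document.head.appendChild(s);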

  • Why do programming languages allow shadowing/hiding of variables and functions?

    - by Simon
    Many of the most popular programming languages (such as C++, Java, Python, etc.) have the concept of hiding/shadowing of variables or functions. When I've encountered hiding or shadowing, it has been the cause of hard-to-find bugs, and I've never seen a case where I found it necessary to use these features of the languages. To me it would seem better to disallow hiding and shadowing. Does anybody know of a good use of these concepts? (One common example follows below the link.)

    Read the article
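
    One widely accepted use is constructor or setter parameters that deliberately shadow the fields they initialize, for example in Java:

        class Point {
            private final int x;
            private final int y;

            // The parameters intentionally shadow the fields; "this." disambiguates,
            // and callers see the natural names x and y instead of xArg or newX.
            Point(int x, int y) {
                this.x = x;
                this.y = y;
            }
        }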

  • Using stored procedure to call multiple packages at the same time from SSIS Catalog (SSISDB.catalog.start_execution) resulted in deadlock

    - by Kevin Shyr
    Refer to my previous post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx) about dynamic package calling and executing multiple packages. I have only seen this deadlock twice; the other times the stored procedure was able to call the packages successfully. After the service pack, I haven't seen it...yet. http://support.microsoft.com/kb/2699720

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (e.g. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.) There is a complicating factor in that our data schema is changing rapidly and constantly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used in, or whether it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary). And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time. (A rough sketch of one middle ground follows below the link.)

    Read the article
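
    For what it's worth, one middle ground is a very thin wrapper that names only the fields the server actually computes against and passes everything else through verbatim. A rough Python-style sketch; the class and field names are invented for the example, not taken from the question.

        class LoanDocument:
            """Thin wrapper over the raw stored document (e.g. a MongoDB dict)."""

            def __init__(self, raw):
                self.raw = raw                      # kept verbatim for sending to clients

            # Only fields the server computes against get named accessors,
            # so a schema rename touches this class and nothing else.
            @property
            def balance(self):
                return self.raw["balance_cents"] / 100.0

            def to_client_json(self):
                return self.raw                     # 9 times out of 10, pass through untouched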

  • In what kind of variable type is the player position stored on a MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk briefly about it... How can software track a player's position so accurately in such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing the multiplayer state, I can't imagine what is best. (One plausible layout is sketched below the link.)

    Read the article
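
    Nobody outside the studio can say what WoW actually uses, but a plausible sketch is simply three 32-bit floats (plus orientation) in continent-local space, with a quantized copy for the wire. The layout below is an assumption for illustration, not the real format.

        #include <stdint.h>

        /* Server/client representation: plain floats, relative to a continent origin. */
        typedef struct {
            float x, y, z;        /* continent-local coordinates */
            float heading;        /* radians */
        } PlayerPosition;

        /* What might go through the netcode: quantized to save bandwidth. */
        typedef struct {
            int32_t  x_cm, y_cm, z_cm;   /* centimetre precision is plenty for gameplay */
            uint16_t heading_q;          /* heading mapped onto 0..65535 */
        } PackedPosition;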

  • What are the functions of modern game publishers? [closed]

    - by ApoorvaJ
    According to the Wikipedia page on Video game publishers, they are responsible for "their product's manufacturing and marketing, including market research and all aspects of advertising." From what I've read, they also arrange for the development funding. In the following questions, I'm asking about AAA, indie and mobile publishers: Do today's publishers fulfill any other functions? Is there any good reading material on these topics?

    Read the article

  • Where are the Microsoft downloaded app compat updates stored?

    - by Ian Boyd
    Where are the Microsoft application compatibility update settings stored on a Windows XP, Windows Vista, or Windows 7 computer? Microsoft periodically releases application compatibility updates (e.g. KB929427), which list the shims that should be applied to a program in order to work around known bugs in the software. Where are these app compat flags stored, and how can I see which shims are being applied? I have a feeling that a recent app compat update included a flag forcing a particular piece of software that we use to require administrator rights. Because the task is scheduled to run nightly, and the running user does not have administrative privileges, the task is failing to start. The application is requiring elevation; it has the UAC shield overlay. The application has no RT_MANIFEST resource, and the compatibility option Run this program as administrator is disabled (per-user and for all users). So all that's left is some secret global setting. I know user-specified compat flags are stored in: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers (A few places to look are listed below the link.)

    Read the article
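
    A few places to start looking, offered as a checklist rather than a definitive answer; the Layers key comes from the question, and the Custom/InstalledSDB keys and the AppPatch folder are where installed shim databases normally live.

        rem per-machine and per-user compatibility layers
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"
        reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

        rem shim databases registered by compatibility updates
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom" /s
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB" /s

        rem the system shim databases themselves (view with Compatibility Administrator from the ACT)
        dir %WINDIR%\AppPatch\*.sdb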

  • For a Javascript library, what is the best or standard way to support extensibility

    - by Michael Best
    Specifically, I want to support "plugins" that modify the behavior of parts of the library. I couldn't find much information on the web about this subject, but here are my ideas for how a library could be extensible. Option 1: the library exports an object with both public and "protected" functions. A plugin can replace any of those functions, thus modifying the library's behavior. Advantages of this method are that it's simple and that the plugin's functions have full access to the library's "protected" functions. Disadvantages are that the library may be harder to maintain with a larger set of exposed functions, and it could be hard to debug if multiple plugins are involved (how do you know which plugin modified which function?). Option 2: the library provides an "add plugin" function that accepts an object with a specific interface. Internally, the library will use the plugin instead of its own code where appropriate. With this method, the internals of the library can be rearranged more freely as long as it still supports the same plugin interface. This could also support having different plugin interfaces to modify different parts of the library. A disadvantage of this method is that the plugins may have to re-implement code that is already part of the library, since the library's internal functions are not exported. Option 3: the library provides a "set implementation" function that accepts an object inherited from a specific base object. The library's public API calls functions in the implementation object for any functionality that can be modified, and the base implementation object includes the core functionality, with both external (to the API) and internal functions. A plugin creates a new implementation object, which inherits from the base object and replaces any functions it wants to modify. This combines advantages and disadvantages of both the other methods. (A bare-bones sketch of option 2 follows below the link.)

    Read the article
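
    A bare-bones sketch of the second option: an addPlugin() function that accepts an object with a fixed interface, consulted before the library falls back to its own code. The canHandle/handle method names are invented for the example.

        var MyLib = (function () {
          var plugins = [];

          function defaultFormat(value) { return String(value); }

          return {
            // plugin = { canHandle: function (value) {...}, handle: function (value) {...} }
            addPlugin: function (plugin) {
              plugins.push(plugin);
            },
            format: function (value) {
              for (var i = 0; i < plugins.length; i++) {
                if (plugins[i].canHandle(value)) { return plugins[i].handle(value); }
              }
              return defaultFormat(value);       // fall back to the library's own code
            }
          };
        }());

        // usage: a plugin that changes how numbers are formatted
        MyLib.addPlugin({
          canHandle: function (value) { return typeof value === 'number'; },
          handle: function (value) { return value.toFixed(2); }
        });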

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance for queries that are sensitive to the content of temporary tables.

    I was optimizing SQL Server performance for one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched for what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but are cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around it is to create an index.

    I think there might be many customers in this situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

    set nocount on
    set statistics time off
    set statistics io off

    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go

    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go

    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go

    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go

    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
    end
    go

    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go

    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
    end
    go

    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go

    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
    end
    go

    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article
