Search Results

Search found 751 results on 31 pages for 'clarification'.


  • Clarification needed about Python CSV file format parsing

    - by HH
    The format looks like this (semicolon-delimited fields, comma as the decimal mark):
    CHINA;2002-06-25 00:00:00.000;5,60
    CHINA;2002-06-26 00:00:00.000;5,32
    CHINA;2002-06-27 00:00:00.000;5,31
    I am trying to use Python's csv module to parse it, but I cannot understand this paragraph from the documentation: "And while the module doesn't directly support parsing strings, it can easily be done: import csv; for row in csv.reader(['one,two,three']): print row". Could someone clarify the line ['one,two,three']? How would you use it with the format A;B;C?
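
    As a minimal sketch of what the documentation means: csv.reader accepts any iterable of strings, so a one-element list stands in for a one-line file. For the A;B;C format above, passing delimiter=';' should be all that is needed (the 5,60 values then arrive as plain strings):

        import csv

        lines = ['CHINA;2002-06-25 00:00:00.000;5,60',
                 'CHINA;2002-06-26 00:00:00.000;5,32']
        for row in csv.reader(lines, delimiter=';'):
            print(row)  # e.g. ['CHINA', '2002-06-25 00:00:00.000', '5,60']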

    Read the article

  • Objective-C retain counts clarification

    - by Tom
    Hey, I kind of understand what retain counts are for, but not completely. I have searched Google a lot trying to understand, but I still don't. And now I'm at a point in my code (I'm doing iPhone development) where I think I should use them but don't entirely know how. Could someone give me a quick, clear example of how and why to use them? Thanks!
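
    A minimal sketch of manual retain/release under the pre-ARC ownership rules (every alloc or retain must eventually be balanced by a release; the object is deallocated when the count reaches zero):

        NSObject *obj = [[NSObject alloc] init]; // you own it; retain count is 1
        [obj retain];                            // claim a second ownership stake: 2
        [obj release];                           // give one back: 1
        [obj release];                           // count hits 0 and obj is deallocated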

    Read the article

  • "Othello" game needs some clarification

    - by pappu
    I am trying to see whether my understanding of the game "Othello" is correct. According to the rules, we flip the dark/light discs when we get a sequence like X000X, which becomes XXXXX. The question I have is: in the process of flipping 0 to X (or X to 0), do we also need to consider the rows/columns/diagonals of the newly flipped discs? E.g., consider the board state shown in the image above (a new X is placed at 2,3). When we update the board, we mark the discs from 2,3 to 6,3 as X, but in the process discs like the horizontal run 4,3 to 4,5 and the diagonal 2,3 to 4,5 also become eligible for update. So do we update those discs as well, or only the lines that start at 2,3 (i.e. update only the rows/columns/diagonals whose starting point is the disc we just placed, in our case 2,3)? Please help me understand this.
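
    For what it's worth, the standard rule is that flips are determined only by the move just played; newly flipped discs never trigger further flips. A sketch of the usual direction-scan logic (board is assumed to be an 8x8 grid of 'X', 'O', or '.'; the names are illustrative):

        DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

        def flips_for_move(board, row, col, me, opp):
            """Discs flipped by placing `me` at (row, col); flips do not cascade."""
            flipped = []
            for dr, dc in DIRS:
                run, r, c = [], row + dr, col + dc
                while 0 <= r < 8 and 0 <= c < 8 and board[r][c] == opp:
                    run.append((r, c))
                    r, c = r + dr, c + dc
                # the run only counts if it is capped by one of our own discs
                if run and 0 <= r < 8 and 0 <= c < 8 and board[r][c] == me:
                    flipped.extend(run)
            return flipped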

    Read the article

  • Clarification/explanation of RegisterClientScriptInclude method

    - by mpminnich
    I've been looking on the Internet for a fairly clear explanation of the different methods of registering JavaScript in an ASP.NET application. I think I have a basic understanding of the difference between RegisterStartupScript and RegisterClientScriptBlock (the main difference being where in the form the script is inserted). I'm not sure I understand what the RegisterClientScriptInclude method does or when it is used. From what I can gather, it is used to register an external .js file. Does this then make any and all JavaScript functions in that file available to the .aspx page it was registered on? For example, if it was registered in the OnLoad event of a master page, would all pages using that master page be able to use the JavaScript functions in the .js file? What problems would arise when trying to use document.getElementById in this case, if any? Also, when is it necessary/advantageous to use multiple .js files and register them separately? I appreciate any help you can give. If you know of any really good resources I can use to get a thorough understanding of this concept, I'd appreciate it!
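
    For reference, a minimal sketch of the call (the script path and key name here are made up): RegisterClientScriptInclude simply emits a script tag with a src attribute into the form, so every function in that file becomes available to the rendered page — including pages built from a master page that registers it.

        // In a page or master page code-behind (hypothetical file name and key):
        protected void Page_Load(object sender, EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptInclude(
                this.GetType(),                  // type used to scope the key
                "utils",                         // unique key so it is included only once
                ResolveUrl("~/Scripts/utils.js"));
        }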

    Read the article

  • New job clarification [closed]

    - by Fred
    Let's say you have decided to join company A. During the interview you got some feedback on the technology you would be working on (a C# Windows app) and other details (sketchy). Now you have decided to join the company. Is it OK to ask via email for further information, and to ask them to specify certain topics to brush up on, so that you can be better prepared for the new job? Of course, I know this question is not programming related.

    Read the article

  • String Object. Clarification needed

    - by mac
    Help me clarify something. Say I have the following line in my program: jobSetupErrors.append("abc"); In the case above, where jobSetupErrors is a StringBuilder, what I believe happens is: a new String object is created and assigned the value "abc", and the value of that String object is appended to the existing StringBuilder object. If that is correct, and I add one more line ... jobSetupErrors.append("abc"); logger.info("abc"); ... are we in the above example creating the String object twice? If so, would it be more proper to do something like this? String a = "abc"; jobSetupErrors.append(a); logger.info(a); Is this a better approach? Please advise.
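
    A small sketch of the point at issue (jobSetupErrors and logger are the asker's own names): in Java, string literals live in the constant pool and are interned, so repeating "abc" reuses one shared String rather than allocating a new object each time — the local variable changes readability, not allocation:

        StringBuilder jobSetupErrors = new StringBuilder();
        String a = "abc";               // refers to the pooled literal
        jobSetupErrors.append("abc");   // same pooled instance, no new String created
        System.out.println(a == "abc"); // true: both names share one interned object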

    Read the article

  • ASP.net and WCF some clarification

    - by nettguy
    I recently faced a few interview questions. The interviewer asked me to give detailed answers. 1) Can we override a WCF service (this is not OOP overriding)? Explain the reasoning either way. (WCF related) 2) Can we override page events such as Page_Load()? Explain the reason. (ASP.NET related) 3) What is the primary responsibility of the PreInit page event, apart from user preference settings and skinning? 4) Can we override static methods? Explain the reason. Can anyone help me understand the reasoning? Thanks in advance.
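
    On question 4, a quick sketch of the usual demonstration: static members are not virtual, so they cannot be overridden — a derived class can only hide them with the new modifier, and calls bind at compile time to the declaring type:

        class Base    { public static string Who() { return "Base"; } }
        class Derived : Base
        {
            // 'override' would not compile here; 'new' merely hides Base.Who
            public static new string Who() { return "Derived"; }
        }
        // Base.Who() -> "Base", Derived.Who() -> "Derived"; resolved at compile time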

    Read the article

  • Objective-C retain clarification

    - by Maverick
    I'm looking at this code:

        NSMutableArray *controllers = [[NSMutableArray alloc] init];
        for (unsigned i = 0; i < kNumberOfPages; i++) {
            [controllers addObject:[NSNull null]];
        }
        self.viewControllers = controllers;
        [controllers release];

    Later on...

        - (void)dealloc {
            [viewControllers release];
            ...
        }

    I see that self.viewControllers and controllers now point to the same allocated memory (of type NSMutableArray *), but when I call [controllers release], isn't self.viewControllers released as well? Or does setting self.viewControllers = controllers automatically retain that memory?
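
    Assuming viewControllers is declared as a retain property (the usual case in this pattern), the dot assignment goes through a generated setter that takes its own ownership — which is what balances the [controllers release]. Roughly:

        @property (nonatomic, retain) NSMutableArray *viewControllers;

        // self.viewControllers = controllers calls a synthesized setter like:
        - (void)setViewControllers:(NSMutableArray *)newValue {
            [newValue retain];          // the property claims its own reference
            [viewControllers release];  // drop the old value, if any
            viewControllers = newValue;
        }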

    Read the article

  • Clarification of some code

    - by Legend
    I have come across a website that appears to use Ajax but does not include any JS file except one called ajax.js, which contains the following:

        function run(c, f, b, a, d) {
            var e = null;
            if (b && f) { document.getElementById(b).innerHTML = f }
            if (window.XMLHttpRequest) {
                e = new XMLHttpRequest()
            } else if (window.ActiveXObject) {
                e = new ActiveXObject("Microsoft.XMLHTTP")
            }
            e.onreadystatechange = function () {
                if (e.readyState == 4) {
                    if (e.status == 200 || e.statusText == "OK") {
                        if (b) { document.getElementById(b).innerHTML = e.responseText }
                        if (a) { setTimeout(a, 0) }
                    } else {
                        console.log("AJAX Error: " + e.status + " | " + e.statusText);
                        if (b && d != 1) {
                            document.getElementById(b).innerHTML = "AJAX Error. Please try refreshing."
                        }
                    }
                }
            };
            e.open("GET", c, true);
            e.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            e.send(null)
        }

    As you might have guessed, it issues queries inside the page like this: run('page.php', loadingText, 'ajax-test', 'LoadSamples()'); I must admit this is the first time I've seen a page where I could not figure out how things are being done. I have a few questions: Is this server-side Ajax or something similar? If not, can someone clarify what exactly this is? Why would one use this? Is it for hiding the design details (which are otherwise revealed in plain text by JavaScript)? How difficult would it be to convert my existing application to this design pattern? (Maybe a subjective question, but any short suggestion will do.) Any suggestions?

    Read the article

  • Clarification For Dynamic Height Boxes for CSS

    - by HollerTrain
    I am having the hardest time trying to figure out this (should-be-simple) CSS. The website is here: http://mibsoftware.us/fct/index.php I'm simply trying to get my #leftcolumn and #maincolumn to sit inside #content_container, yet whatever I do isn't working at all. I'd like #content_container to have a dynamic height, since the heights of #leftcolumn and #maincolumn change depending on the page you are on. From the framework of my CSS it should work fine, so I must be missing something in my .css file where these divs are declared. Any help would be greatly appreciated, as this will be a great learning experience for me.
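
    A hedged guess at the usual culprit: if the two columns are floated, the container collapses to zero height unless the floats are contained. A minimal sketch (the widths are illustrative, not taken from the site):

        /* Make the parent wrap its floated children. */
        #content_container { overflow: hidden; }

        /* Illustrative column layout; adjust widths to the real design. */
        #leftcolumn { float: left; width: 25%; }
        #maincolumn { float: left; width: 75%; }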

    Read the article

  • Clarification on ZVals

    - by Beachhouse
    I was reading this: http://www.dereleased.com/2011/04/27/the-importance-of-zvals-and-circular-references/ and there's an example that lost me a bit:

        $foo = &$bar;
        $bar = &$foo;
        $baz = 'baz';
        $foo = &$baz;
        var_dump($foo, $bar); /* string(3) "baz" NULL */

    "If you've been following along, this should make perfect sense. $foo is created, and pointed at a ZVal location identified by $bar; when $bar is created, it points at the same place $foo was pointed. That location, of course, is null. When $foo is reassigned, the only thing that changes is to which ZVal $foo points; if we had assigned a different value to $foo first, then $bar would still retain that value."

    I learned to program in C. I understand that PHP is different and uses ZVals rather than raw memory locations as references. But when you run this code:

        $foo = &$bar;
        $bar = &$foo;

    it seems to me that there would be two ZVals. In C there would be two pointers (each holding the address of the other). Can someone explain?
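
    A minimal sketch of why there is only one zval here: the first reference assignment creates $bar as NULL and binds both names to the same zval, so the second line is effectively a no-op — PHP references are extra names for one value, not pointers at each other:

        $foo = &$bar;    // $bar springs into existence as NULL; $foo joins its zval
        $bar = &$foo;    // no-op: both names already share that single zval
        $foo = 'hello';
        var_dump($bar);  // string(5) "hello" — two names, one zval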

    Read the article

  • Birthday attack - clarification needed, please

    - by Mark
    Please help me interpret the birthday attack as described on Wikipedia: A birthday attack works as follows: 1) Pick any message m and compute h(m). 2) Update the list L: check whether h(m) is in the list L. 3) If (h(m), m) is already in L, a colliding message pair has been found; else save the pair (h(m), m) in the list L and go back to step 1. From the birthday paradox we know that we can expect to find a matching entry after performing about 2^(n/2) hash evaluations. Does the above mean 2^(n/2) iterations through the entire loop (i.e. 2^(n/2) returns to step 1), OR does it mean 2^(n/2) comparisons against individual items already in L?
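
    For reference, the standard estimate counts hash evaluations, i.e. full passes through the loop: if L is kept in a hash table, the membership check in step 2 is a single lookup, not a comparison against every stored item. For an n-bit hash with N = 2^n possible outputs, the expected number of evaluations before the first collision is (in LaTeX):

        Q(N) \approx \sqrt{\tfrac{\pi}{2} N} = \sqrt{\tfrac{\pi}{2}} \cdot 2^{n/2} \approx 1.25 \cdot 2^{n/2}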

    Read the article

  • .NET DataSource Clarification

    - by Steven
    I'm fairly new to database programming in .NET. If I want to call several existing queries from the same database for different tasks, should I have one DataSource per database, per database connection, or per query?

    Read the article

  • C# Dispose() - clarification

    - by nettguy
    When I call object.Dispose(), will the CLR immediately destroy the object in memory, or mark the object for removal in its next collection cycle? We call GC.SuppressFinalize() immediately after Dispose(). Does that mean, "Don't finalize the object again, because it has already been disposed"? Also, which generation is actually responsible for the destruction? I guess generation 2.
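
    A minimal sketch of the conventional dispose pattern, for context: Dispose() releases resources deterministically but does not free the object's memory — the GC reclaims that later, whenever the object's generation is collected; SuppressFinalize only tells the GC it can skip the now-redundant finalizer:

        public sealed class Resource : IDisposable
        {
            private bool disposed;

            public void Dispose()
            {
                if (disposed) return;
                // release unmanaged handles / dispose owned objects here
                disposed = true;
                GC.SuppressFinalize(this); // skip the finalizer; memory is still reclaimed later by the GC
            }

            ~Resource() { /* safety net if Dispose was never called */ }
        }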

    Read the article

  • SQL clarification

    - by JPro
    Can anyone please clarify what this query will return? SELECT TestCase FROM MyTable WHERE Verdict = 'PASS' AND StartTime > DATE_SUB(NOW(), INTERVAL 2 MONTH)
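
    For reference, the same query annotated (DATE_SUB and NOW are MySQL functions): it returns the TestCase column for every row whose Verdict is 'PASS' and whose StartTime is later than the moment exactly two months before now.

        SELECT TestCase
        FROM MyTable
        WHERE Verdict = 'PASS'                               -- only passing runs
          AND StartTime > DATE_SUB(NOW(), INTERVAL 2 MONTH); -- started in the last two months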

    Read the article

  • Verification of GetHashCode and Equals

    - by nettguy
    Does the following code show the correct implementation of overriding GetHashCode?

        public class Person
        {
            public string fName { get; set; }
            public string lName { get; set; }
            public int age { get; set; }

            public override bool Equals(object obj)
            {
                Person p = obj as Person;
                if (p == null) return false;
                return (p.GetType() == this.GetType()
                        && p.fName == this.fName
                        && p.lName == this.lName
                        && p.age == this.age);
            }

            // I took the code below from Marc Gravell's post; I am not sure
            // whether I have implemented it properly.
            public override int GetHashCode()
            {
                unchecked
                {
                    int hash = 13;
                    hash = (hash * 7) + fName.GetHashCode();
                    hash = (hash * 7) + lName.GetHashCode();
                    hash = (hash * 7) + age.GetHashCode();
                    return hash;
                }
            }
        }

    Read the article

  • C++ reference variables again

    - by kumar_m_kiran
    Hi all, I think most would be surprised by this topic again. However, I am referring to the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. In the book (in the section under Item 5, "References Are Aliases, Not Pointers"), he states: "A reference is an alias for an object that already exists prior to the initialization of the reference. Once a reference is initialized to refer to a particular object, it cannot later be made to refer to a different object; a reference is bound to its initializer for its whole lifetime." Can anyone please explain the claim "cannot later be made to refer to a different object"? The code below works for me:

        #include <iostream>
        using namespace std;

        int main(int argc, char *argv[])
        {
            int i = 100;
            int& ref = i;
            cout << ref << endl;

            int k = 2000;
            ref = k;   // note: this assigns k's value to i; it does not rebind ref
            cout << ref << endl;
            return 0;
        }

    Here I am (apparently) referring the variable ref to both i and k, and the code works perfectly fine. Am I missing something? I used SUSE 10 64-bit Linux to test my sample program. Thanks in advance for your input.

    Read the article

  • Python urlparse, correct or incorrect?

    - by omfgroflmao
    Python's urlparse function parses a URL into six components (scheme, netloc, path, and other bits). Now I've found that parsing "example.com/path/file.ext" returns no netloc, but a path of "example.com/path/file.ext". Shouldn't it be netloc = "example.com" and path = "/path/file.ext"? Do we really need a "://" to determine whether or not a netloc exists? Python ticket: http://bugs.python.org/issue8284
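
    A quick sketch of the behavior in question (urllib.parse in Python 3; the original urlparse module in Python 2 acts the same way). Per RFC 3986, the netloc is only the part that follows "//", so without a scheme or leading slashes the whole string is treated as a path:

        from urllib.parse import urlparse

        print(urlparse('example.com/path/file.ext').netloc)         # '' — everything lands in .path
        print(urlparse('//example.com/path/file.ext').netloc)       # 'example.com'
        print(urlparse('http://example.com/path/file.ext').netloc)  # 'example.com'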

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    Over the last week, after doing more research on the subject, I have been wondering about what I have neglected all these years to understand about write-caching policy, having always left it on the default setting. The write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I have erred somewhere:

    Write-through caching itself is not part of the write-caching policy per se. It is when data is written to both the cache and the storage device, so that if Windows needs that data again later, it is retrieved from the cache rather than from the storage device. This means only read performance improves, since there is no waiting for the storage device to read the required data again. Because data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or a system crash; only the data in the cache is lost. This option seems to be enabled by default and is recommended for removable devices, with no need for the user to invoke "Safely Remove Hardware".

    Write-back caching is similar to the above, but data is not immediately written to the storage device; it is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it carries risk if a power failure or system crash occurs: not only could data that was yet to be written to the storage device be lost, but file inconsistencies or a corrupted file system could result. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching, but it enables the immediate release and writing of data from the cache to the storage device right before a power outage occurs (though I don't know whether it also applies to the occasional system crash). This option seems complementary to write-back caching, reducing or potentially eliminating the risk of data loss or file-system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, with the goal of getting the best performance with less wear: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and worth the risk of utilizing it by enabling write-back caching? I read somewhere that, generally, a storage device's cache is faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written to NAND flash is kept for a while in the cache, and if that data gets modified many times before finally being written, holding it and releasing it periodically reduces the number of writes to the SSD, thereby reducing wear.

    Now, regarding write-cache buffer flushing, I have heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage the flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research, suspecting that the write-caching policy was the culprit behind the SSD's freezing issue, on the assumption that the release of cached data is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switching to AHCI later on, and finally disabling DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.

    Read the article

  • Clarification of the difference between PCI memory addressing and I/O addressing?

    - by KevinM
    Could someone please clarify the difference between memory and I/O addresses on the PCI/PCIe bus? I understand that I/O addresses are 32-bit, limited to the range 0 to 4GB, and do not map onto system memory (RAM), and that memory addresses are either 32-bit or 64-bit. I get the impression that memory addressing must map onto available RAM; is this true? That is, if a PCI device wishes to transfer data to a memory address, must that address exist in actual system RAM (allocated during PCI configuration) and not in virtual memory? So if a PCI device only needs to transfer a small amount of data at a time, where there is no advantage to putting it into RAM or using DMA, then I/O addressing is fine (e.g. a parallel port implemented on a PCI card)? And why do I keep reading that PCI/PCIe I/O addressing is being deprecated in favour of memory addressing? Thanks!

    Read the article

  • Clarification required re use of .NET Assemblies in GAC - Use to use Globally?

    - by Cognize
    Hi, I've done a lot of reading and experimentation today regarding the signing of assemblies and their installation into the GAC via various methods (mscorcfg.msc / drag and drop). What I thought was that once an assembly was in the GAC, you did not need to reference it from projects in Visual Studio. I know that you CAN add a reference via the usual Add Reference, Browse, etc., but I thought it was automatic. Testing proves this is not the case. I came across a forum post, aiming for the same outcome, that suggested adding the following to the machine.config file under system.web. This did not work; in fact it broke Visual Studio until I removed it.

        <assemblies>
          <add assembly="Blah.Framework.Logging, Version=1.0.3806.25580, Culture=neutral, PublicKeyToken=0beed4b631ebc3cd" />
        </assemblies>

    What I want to know is: am I right in my assumed use of assemblies in the GAC, and is there a way of making them globally available?

    Read the article

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
          |
          +-[Client]
          | |
          | +-(EXE & several DLL assemblies)
          |
          +-[Common]
          | |
          | +-[Schema Assemblies]
          | | |
          | | +-(several DLL assemblies)
          | |
          | +-(several DLL assemblies)
          |
          +-[Server]
            |
            +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs of other projects in the solution, and others are plain third-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application. Now, if I look at the reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine. What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory, and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at link time? To be clear, this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!

    Read the article

  • How to implement a reminder application, just a clarification?

    - by hib
    Hello all, I know that a reminder application should display a badge, play a sound, or show some kind of alert to the user. The only doubt I want to clear up with you is: should I display the badge with manual coding (like [[UIApplication sharedApplication] setApplicationIconBadgeNumber:2]), or do I have to use the Apple Push Notification service and push the notification from a provider server? I just want to know how to implement the reminder application the correct way. Also, any links to tutorials or examples would be appreciated. Thanks.
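
    For a reminder that fires at a known time, a local notification needs no push server at all (iOS 4+). A minimal pre-ARC sketch, with the fire date and text made up for illustration:

        UILocalNotification *note = [[UILocalNotification alloc] init];
        note.fireDate = [NSDate dateWithTimeIntervalSinceNow:3600]; // one hour from now
        note.alertBody = @"Time for your reminder";
        note.applicationIconBadgeNumber = 1;   // badge shown when it fires
        [[UIApplication sharedApplication] scheduleLocalNotification:note];
        [note release];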

    Read the article

  • Need some clarification on the ANSI/SPARC 3-tier database architecture.

    - by Moonshield
    Hi there, I'm currently revising for a databases exam and looking over some past papers, but there's one question that I'm slightly unsure about, and I was wondering if someone could offer some assistance. "Describe EACH of the THREE levels of the ANSI SPARC 3 level architecture. Your answer should include the purpose of EACH of the schemas, the level of abstraction they provide and the software tools that would be used to access and support them." As I understand it (although please correct me if I'm wrong): the internal schema specifies the physical storage of the data; the conceptual schema specifies the structure of the database and the domains; and the external schemas are how the database is viewed by "users" (applications, etc.). As for the abstraction, I understand that the conceptual layer means the physical data storage can be altered without the end user being affected; likewise, the external schemas mean users are insulated from changes to the conceptual schema. The bit that I'm not sure about is what tools are used to access and support each layer. Would the internal schema be handled by the DBMS, the conceptual schema by some sort of DDL interpreter, and the external schema by a DML interpreter (or have I misunderstood what each level does)? Any assistance would be greatly appreciated. Thanks, Moonshield

    Read the article
