Search Results

Search found 60744 results on 2430 pages for 'why we write'.

  • Why can't my own UIViewController detect touches?

    - by Tattat
    I have my OwnViewController, whose viewDidLoad looks like this:

        - (void)viewDidLoad {
            [super viewDidLoad];
            UIImage *img = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:@"myImg" ofType:@"png"]];
            CGRect cropRect = CGRectMake(175, 0, 175, 175);
            CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], cropRect);
            UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 175, 175)];
            imageView.image = [UIImage imageWithCGImage:imageRef];
            self.view = imageView;
            CGImageRelease(imageRef);
        }

    It works, and I have a touch-handling method like this:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"touchesBegan");
        }

    But my UIView can't detect any touches. My OwnViewController is a subclass of UIViewController, and it is a little square view in Interface Builder. Why can't it detect touches? Thank you.

    Read the article

  • CSharpCodeProvider: Why is the result of compilation out of context when debugging?

    - by epitka
    I have the following code snippet that I use to compile a class at run time:

        //now compile the runner
        var codeProvider = new CSharpCodeProvider(
            new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });
        string[] references = new string[] { "System.dll", "System.Core.dll", "System.Core.dll" };
        CompilerParameters parameters = new CompilerParameters();
        parameters.ReferencedAssemblies.AddRange(references);
        parameters.OutputAssembly = "CGRunner";
        parameters.GenerateInMemory = true;
        parameters.TreatWarningsAsErrors = true;
        CompilerResults result = codeProvider.CompileAssemblyFromSource(parameters, template);

    Whenever I step through the code to debug the unit test and try to see the value of "result", I get an error that the name "result" does not exist in the current context. Why?

    Read the article

  • Why use multiple OpenGL contexts?

    - by Luca
    For rendering I have a current GL context, associated with a window. In the case where the application renders multiple scenes (for example using accumulation or different viewports) I think it is fine to reuse the same context. My question, then, is: why should I use multiple GL contexts? I read in the ARB_framebuffer_object extension spec that a MakeCurrent call can be expensive, and when the ARB_framebuffer_object extension is present I can render to an off-screen buffer without calling MakeCurrent. Apparently the only reasons to use multiple GL contexts are to avoid re-setting context state (pixel store, transfer, point size, polygon stipple...) or to have multiple render buffer configurations available (one context with accumulation, another without). How do I determine when an additional context is better than changing context state? Thank you all!

    Read the article

  • Why does Scala not allow a '$' identifier in a case statement?

    - by Alex R
    This works as expected:

        scala> 3 match { case x:Int => 2*x }
        res1: Int = 6

    Why does this fail?

        scala> 3 match { case $x:Int => 2*$x }
        <console>:1: error: '=>' expected but ':' found.
               3 match { case $x:Int => 2*$x }
                                 ^
        scala> 3 match { case `$x`:Int => 2*$x }
        <console>:1: error: '=>' expected but ':' found.
               3 match { case `$x`:Int => 2*$x }
                                     ^
        scala> 3 match { case `$x` : Int => 2*$x }
        <console>:1: error: '=>' expected but ':' found.
               3 match { case `$x` : Int => 2*$x }

    '$' is supposed to be a valid identifier character, as demonstrated here:

        scala> var y = 1
        y: Int = 1
        scala> var $y = 2
        $y: Int = 2

    Thanks

    Read the article

  • Why would you want a case sensitive database?

    - by Khorkrak
    What are some reasons for choosing a case-sensitive collation over a case-insensitive one? I can see perhaps a modest performance gain for the DB engine in doing string comparisons. Is that it? If your data is all lowercase or all uppercase then case-sensitive could be reasonable, but it's a disaster if you store mixed-case data and then try to query it. You then have to apply a lower() function on the column so that it matches the corresponding lowercase string literal, which prevents index usage in every DBMS. So I'm wondering why anyone would use such an option.

    Read the article

  • Why doesn't this Perl array sort work?

    - by Luke
    Why won't the array sort?

    CODE

        my @data = ('PJ RER Apts to Share|PROVIDENCE',
                    'PJ RER Apts to Share|JOHNSTON',
                    'PJ RER Apts to Share|JOHNSTON',
                    'PJ RER Apts to Share|JOHNSTON',
                    'PJ RER Condo|WEST WARWICK',
                    'PJ RER Condo|WARWICK');

        foreach my $line (@data) {
            $count = @data;
            chomp($line);
            @fields = split(/\|/,$line);
            if ($fields[0] eq "PJ RER Apts to Share"){
                @city = "\u\L$fields[1]";
                @city_sort = sort (@city);
                print "@city_sort","\n";
            }
        }
        print "$count","\n";

    OUTPUT

        Providence
        Johnston
        Johnston
        Johnston
        6

    Read the article

  • GDB hardware watchpoint very slow - why?

    - by Laurynas Biveinis
    On a large C application, I have set a hardware watchpoint on a memory address as follows:

        (gdb) watch *((int*)0x12F5D58)
        Hardware watchpoint 3: *((int*)0x12F5D58)

    As you can see, it's a hardware watchpoint, not a software one, which would have explained the slowness. Now the application's running time under the debugger has changed from less than ten seconds to one hour and counting. The watchpoint has triggered three times so far, the first time after 15 minutes, when the memory page containing the address was made readable by sbrk. Surely during those 15 minutes the watchpoint should have been efficient, since the memory page was inaccessible? And that still does not explain why it's so slow afterwards. The GDB version is:

        $ gdb --version
        GNU gdb (GDB) 7.0-ubuntu
        [...]

    Thanks in advance for any ideas as to what might be the cause or how to fix/work around it. (A small reproduction sketch follows this item.)

    Read the article
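
    A quick way to check whether GDB really armed a hardware watchpoint, or silently fell back to a software one (which single-steps the program and easily turns seconds into hours), is sketched below. The program and variable names are hypothetical; only the gdb commands in the comments are standard.

        // watch_check.cpp -- hedged sketch; build with: g++ -g -O0 watch_check.cpp
        #include <cstdio>
        #include <cstdlib>

        int main() {
            // Stand-in for the raw address 0x12F5D58 in the question.
            int *watched = static_cast<int *>(std::malloc(sizeof(int)));
            *watched = 0;
            for (int i = 0; i < 1000000; ++i) {
                if (i % 250000 == 0) *watched = i;   // the writes the watchpoint should catch
            }
            std::printf("final value: %d\n", *watched);
            std::free(watched);
            return 0;
        }
        // Inside gdb (commands, not C++):
        //   (gdb) show can-use-hw-watchpoints   # 1 means hardware watchpoints are enabled
        //   (gdb) watch *watched                # or: watch *((int*)0x12F5D58)
        //   (gdb) info watchpoints              # "Hardware watchpoint" vs plain "Watchpoint"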

  • Why can't a protected superclass member be accessed in a subclass function when it is passed as an argument?

    - by KNoodles
    I get a compile error which I'm slightly confused about. This is on VS2003.

        error C2248: 'A::y' : cannot access protected member declared in class 'A'

        class A {
        public:
            A() : x(0), y(0) {}
        protected:
            int x;
            int y;
        };

        class B : public A {
        public:
            B() : A(), z(0) {}
            B(const A& item) : A(), z(1) { x = item.y; }
        private:
            int z;
        };

    The problem is with x = item.y; the access is specified as protected. Why doesn't the constructor of class B have access to A::y? (See the sketch after this item.)

    Read the article
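
    For reference, a minimal sketch of the rule the compiler is enforcing (names are illustrative, not from the question): a member function of B may access a protected member inherited from A only through an object whose static type is B (or a class derived from B), not through an arbitrary A reference.

        // protected_access.cpp -- sketch of the protected-access rule (names are illustrative)
        class A {
        protected:
            int y;
        public:
            A() : y(0) {}
        };

        class B : public A {
        public:
            // OK: 'other' is a B, so B may touch the protected member it inherits.
            void copyFrom(const B& other) { y = other.y; }

            // Would not compile: access through a plain A reference.
            // void copyFrom(const A& other) { y = other.y; }
        };

        int main() {
            B a, b;
            b.copyFrom(a);
            return 0;
        }

    Common ways around it are a protected accessor on A, taking a const B& instead of a const A&, or a cast when the argument is known to actually be a B.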

  • Why is the Yahoo Indexing Bot considered "evil"?

    - by bigstylee
    After reading and commenting on this question, PHP Library for Keeping your site index by Google, Bing, etc, I was curious to look at Stack Overflow's sitemap. This returned a 404 error, which I am guessing is either a page protected by checking whether you are an indexing bot, or one that simply doesn't exist. This then led me to look at the robots.txt for Stack Overflow. I was surprised to see the comment "Yahoo bot is evil" along with a couple of other indexing bots (Spinn3r and KSCrawler). I am unfamiliar with Spinn3r and KSCrawler, but my question is: why are these bots (particularly Yahoo) considered evil? Surely any and all indexing by any search engine is a good thing?

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String,BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted. If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, then wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated. So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?

    Read the article

  • Rebol Multitasking with Async: why do I get Invalid port spec

    - by Rebol Tutorial
    I tried http://www.mail-archive.com/[email protected]/msg19437.html (I just changed it to www.reboltutorial.com):

        do http://www.rebol.it/giesse/async-protocol.r

        handler: func [port [port!] state [word! error!] /local tmp cmd] [
            if error? :state [print mold disarm state return true]
            switch state [
                connect [
                    ; do HTTP request
                    insert port {GET /files/2009/10/word.png HTTP/1.0^M^JHost: www.reboltutorial.com^M^J^M^J}
                    false
                ]
                read [false]
                write [false]
                close [
                    ; get data
                    data: copy port
                    close port
                    ;print copy/part data find data "^M^J^M^J"
                    data: to binary! find/tail data "^M^J^M^J"
                    other/image: attempt [load data]
                    other/text: ""
                    show other
                    false
                ]
            ]
        ]

        port: open async://www.reboltutorial.com:80
        port/awake: :handler

        view layout [
            across
            me: box 100x100 random 255.255.255 0:00:00.5 feel [
                engage: func [f a e] [
                    if a = 'time [
                        me/color: random 255.255.255
                        show me
                    ]
                ]
            ]
            other: box 100x100 255.255.255 "Downloading image..."
            Return
            Area 208x100 "You can type here while downloading."
        ]

    But I'm getting this error:

        >> port: open async://reboltutorial.com:80
        ** Access Error: Invalid port spec: async://reboltutorial.com:80
        ** Near: port: open async://reboltutorial.com:80
        >> port/awake: :handler
        ** Script Error: port has no value
        ** Near: port/awake: :handler

    Read the article

  • Why is SVN better than VSS? [closed]

    - by tsilb
    I've heard so many people complain about VSS, and nobody complaining about SVN. We use SVN on my work project. It's slow, regularly freezes up my IDE, and has wonky behavior like looking for a database server every time I right-click the Solution node in my Solution Explorer. When I used VSS, everything worked beautifully, except for access restrictions, which I of course blame on the people who control access. VSS is built by Microsoft and thus has great integration with Visual Studio. SVN is written by pretty much anybody with some free time (right?) and thus kind of works most of the time... and I honestly get the impression they had a dozen different directions in the design instead of one. So why do I keep hearing that SVN is better than VSS?

    Read the article

  • Why won't Silverlight handle the conversion of my custom float property

    - by Jeff Weber
    In a Silverlight 4 project I have a class that extends Canvas:

        public class AppendageCanvas : Canvas
        {
            public float Friction { get; set; }
            public float Restitution { get; set; }
            public float Density { get; set; }
        }

    I use this canvas in Blend by dragging it onto another control and setting the custom properties. When I run the app, I get a conversion error when InitializeComponent is called on the control containing my custom canvas. I'm not sure why Silverlight isn't able to convert this property from its string representation in XAML to the float that it is. Does anyone have any ideas?

    Read the article

  • Why don't Android applications provide an "Exit" option?

    - by Howiecamp
    Is there something in the Android developer guidelines that dissuades developers from providing an option to "exit" (stop running) an application from within the application itself? I love multitasking and all, but it's not clear to me why:

        - the vast majority of apps don't have their own Exit functions and hence just keep running forever
        - they don't give you a choice about running when you turn on the phone - they just do by default

    Both of these things lead to memory usage constantly increasing and your device running with this performance burden all of the time, despite the fact that you may only want certain apps to run some of the time. Am I missing something?

    Read the article

  • State pattern: Why doesn't the context class implement or inherit the State abstract interface/class

    - by Ricket
    I'm reading about the State pattern. I have only just begun, so of course I began by reading the entire Wikipedia article on it. I noticed that both of the examples in the article have some base abstract class or Java interface for a generic State's methods/functions. Then there are some states which inherit from the base and implement those methods/functions in different ways. Then there's a Context class which has a private member of type State and which, at any time, can hold an instance of one of the implementations. That context class also implements the same methods and passes them on to the current state instance, and then has an additional method to change the state (or, depending on the design, I understand the change of state could be a reaction to one of the implemented methods). Why doesn't this context class specifically "extend" or "implement" the generic State base class/interface? (A sketch of the structure described here follows this item.)

    Read the article
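
    For orientation, here is a minimal sketch of the structure the question describes (names are illustrative, not taken from the Wikipedia article): a State interface, concrete states, and a Context that holds the current state and delegates to it without itself inheriting from State.

        // state_sketch.cpp -- minimal State pattern sketch; compile with -std=c++14
        #include <iostream>
        #include <memory>

        struct Context;

        struct State {
            virtual ~State() = default;
            virtual void handle(Context& ctx) = 0;     // the generic operation
        };

        struct Context {
            std::unique_ptr<State> state;              // current state, swapped at runtime
            void setState(std::unique_ptr<State> s) { state = std::move(s); }
            void request();                            // delegates to the current state
        };

        struct StateB : State {
            void handle(Context&) override { std::cout << "B\n"; }
        };

        struct StateA : State {
            void handle(Context& ctx) override {
                std::cout << "A\n";
                // Replacing the state destroys this object, so it is the last thing we do.
                ctx.setState(std::make_unique<StateB>());
            }
        };

        void Context::request() { state->handle(*this); }

        int main() {
            Context ctx;
            ctx.setState(std::make_unique<StateA>());
            ctx.request();   // prints "A", then transitions to StateB
            ctx.request();   // prints "B"
        }

    One commonly cited reason for keeping them separate is that the Context's public interface does not have to mirror the State interface one-to-one; the Context can expose a different API and hide the transition machinery from callers.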

  • What kind of security issues will I have if I provide my web app write access?

    - by iama
    I would like to give my web application write access to a particular folder on my web server. My web app can create files in this folder and can write data to those files. However, the web app does not provide any interface to the users for this, nor does it publicize the fact that it can create or write files. Am I susceptible to any security vulnerabilities? If so, what are they?

    Read the article

  • Actionscript: Why is drawRoundRectComplex() not documented?

    - by Chunk1978
    In studying ActionScript 3's Graphics class, I've come across the undocumented drawRoundRectComplex() method. It's a variant of drawRoundRect(), but with 8 parameters, the final four being the diameter of each corner (x, y, width, height, top left, top right, bottom left, bottom right).

        //example
        var sp:Sprite = new Sprite();
        sp.graphics.lineStyle(1, 0x000000);
        sp.graphics.drawRoundRectComplex(0, 0, 100, 50, 10, 20, 0, 10);
        addChild(sp);

    This seems to be a pretty useful method, so I'm just curious whether anyone knows of any reasons why Adobe chose not to document it.

    Read the article

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command

        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    When I run it, the scp progress bar shows up and it looks like it's transferring the file. However, when I log into the destination server to check the file, the timestamp and file size haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server. I'm running Ubuntu, and this is happening on two servers: one Cygwin SSH, and one Fedora Core 3. Does anyone have any idea why this is happening? I thought scp would ONLY overwrite existing files.. Thanks

    Read the article

  • Why does GetClusterShape return null when the cluster specification was retrieved through the GetClusteredShapes method?

    - by Markus Olsson
    Suppose I have a Virtual Earth shape layer called shapeLayer1 (my creative energy is apparently at an all-time low). When I call the GetClusteredShapes method I get an array of VEClusterSpecification objects that represent each and every one of my currently visible clusters; no problem there. But when I call the GetClusterShape() method it returns null... null! Why on earth would it do that? I used Firebug to confirm that the private variable of the VEClusterSpecification that's supposed to hold a reference to the shape is indeed null, so it's not the method that's causing the problem. Some have suggested that this is actually documented behavior:

        Returns null if a VEClusterSpecification object was returned from the VEShapeLayer.GetClusteredShapes Method

    But looking at the current MSDN documentation for the VEShape class, it says:

        Returns if a VEClusterSpecification object was returned from the VEShapeLayer.GetClusteredShapes Method

    Is this a bug or a feature? Are there any known workarounds or (if it is a bug) some plan for when they are going to fix it?

    Read the article

  • Why is the base() constructor not necessary?

    - by Earlz
    Hello, I have a class structure like

        abstract class Animal {
            public Animal(){
                //init stuff..
            }
        }
        class Cat : Animal {
            public Cat(bool is_keyboard) : base() //NOTE here
            {
                //other init stuff
            }
        }

    Now then, look at the noted line. If you remove : base() then it will compile without an error. Why is this? Is there a way to disable this behavior? I have had multiple bugs now from forgetting the base(), which I would have thought to be required on such a special thing as a constructor.

    Read the article

  • Why is Read-Modify-Write necessary for registers on embedded systems?

    - by Adam Shiemke
    I was reading http://embeddedgurus.com/embedded-bridge/2010/03/different-bit-types-in-different-registers/, which said:

        With read/write bits, firmware sets and clears bits when needed. It typically first reads the register, modifies the desired bit, then writes the modified value back out

    and I have run into that construct while maintaining some production code written by old-salt embedded guys here. I don't understand why this is necessary. When I want to set/clear a bit, I always just OR/NAND with a bitmask. To my mind, this solves any thread-safety problems, since I assume setting a register (either by assignment or by ORing with a mask) only takes one cycle. On the other hand, if you first read the register, then modify, then write, an interrupt happening between the read and the write may result in writing an old value to the register. So why read-modify-write? Is it still necessary? (A short illustration follows this item.)

    Read the article
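
    As a hedged illustration of the point in question (the register address and bit mask below are made up): a one-line |= on a memory-mapped register is itself compiled to a load, a modify, and a store, so it is the same read-modify-write sequence the article describes, just written compactly; whether it is safe against interrupts depends on the hardware (for example, dedicated atomic bit set/clear registers), not on the source syntax.

        // rmw_sketch.cpp -- compile-only illustration; the address and mask are hypothetical
        #include <cstdint>

        // Hypothetical memory-mapped output register.
        #define PORT_ODR (*(volatile std::uint32_t *)0x40020014u)
        #define LED_MASK (1u << 5)

        void led_on(void)
        {
            // Looks like a single operation in source, but typically compiles to:
            //   load  PORT_ODR into a CPU register
            //   OR    with LED_MASK
            //   store back to PORT_ODR
            // An interrupt between the load and the store can still lose an update
            // made by the interrupt handler to another bit of the same register.
            PORT_ODR |= LED_MASK;
        }

        void led_off(void)
        {
            PORT_ODR &= ~LED_MASK;   // same pattern: read, clear the bit, write back
        }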

  • Why are action based web frameworks predominant?

    - by deamon
    Most web frameworks are still using the traditional action-based MVC model: a controller receives the request, calls the model, and delegates rendering to a template. That is what Rails, Grails, Struts, Spring MVC, and others are doing. The other category, the component-based frameworks like Wicket, Tapestry, JSF, or ASP.NET Web Forms, has become more popular over recent years, but my perception is that the traditional action-based approach is far more popular. And even ASP .Net Web Forms has got a sibling named ASP .Net Web MVC. I think the kinds of applications built with both types of frameworks overlap a great deal, so the question is: why are action-based frameworks so predominant?

    Read the article

  • Why is x86 ugly? aka Why is x86 considered inferior when compared to others?

    - by claws
    Hello, recently I've been reading some SO archives and encountered statements against the x86 architecture.

        http://stackoverflow.com/questions/2667256/why-do-we-need-different-cpu-architecture-for-server-mini-mainframe-mixed-cor says "PC architecture is a mess, any OS developer would tell you that."
        http://stackoverflow.com/questions/82432/is-learning-assembly-language-worth-the-effort says "Realize that the x86 architecture is horrible at best."
        http://forums.anandtech.com/showthread.php?t=976577 says "Most colleges teach assembly on something like MIPS because it's much simpler to understand; x86 assembly is really ugly."

    And many more comments like:

        "Compared to most architectures, X86 sucks pretty badly."
        "It's definitely the conventional wisdom that X86 is inferior to MIPS, SPARC, and PowerPC."
        "x86 is ugly."

    I tried searching but didn't find any reasons. I don't find x86 bad, probably because this is the only architecture I'm familiar with. Can someone kindly give me reasons for considering x86 ugly/bad/inferior compared to other architectures?

    Read the article

  • Why does a higher isolation level mean better performance in SQL Server?

    - by Oleg Zhylin
    When measuring performance of my query I came up with a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a higher isolation level give better performance? This is a development machine and no concurrency whatsoever is happening. I would expect READUNCOMMITTED to be the fastest due to less locking overhead.

    Update: I did measure this with

        DBCC DROPCLEANBUFFERS
        DBCC FREEPROCCACHE

    issued, and Profiler confirms there are no cache hits happening.

    Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if this gives performance benefits.

    Read the article
