Search Results

Search found 39456 results on 1579 pages for 'why do you'.

Page 39/1579 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Why is EXC_BAD_ACCESS so unhelpful?

    - by Dustin
    First let me say I come from a background in Flash/AS3, which I realize is not as strict about most things as iPhone/Objective-C. I suspect my question actually applies to AS3 as well, but let me ask it as pertaining to Obj-C. Why is the error EXC_BAD_ACCESS, and others like it, so unhelpful? I realize that it normally means mismanagement of memory somewhere, but why can't it tell you more about the problem? For instance, why doesn't it say "EXC_BAD_ACCESS: you tried to pass pointer suchAndSuch on line 123; however, you're an idiot, because you released it on line 69, so it's not available anymore"? I realize I can use the debugger to get more clues about where my error occurred, but many times this is only marginally helpful. For instance, sometimes none of the messages in the stack/thread/whatever are even my code. Other times it is my code, but at the top of the stack will be a message that has 4+ parameters. OK, thanks, debugger, you narrowed it down to 4 possible pointers, but why can't you just tell me which one!? I'm guessing there's some fundamental explanation that I missed because of the background I came from, not needing to worry about memory and such. Although there is an error that can happen a lot in AS3 development that is equally mysterious and along the same lines: "Error #1009: Cannot access a property or method of a null object reference", which almost always means a variable you were expecting to be holding something is actually null. Why doesn't it tell me WHICH variable?!

    Read the article

  • Why use SSH tunneling for a MySQL server?

    - by ajsie
    I've got an Ubuntu server acting as the LAMP server for my PHP websites. The MySQL server is installed and listening on localhost only. I have read about how to tunnel through SSH to my MySQL server, but I haven't understood why this is better than opening the MySQL port directly to the internet. Either way, a hacker could brute-force the port for passwords: the MySQL port (3306) if opened to the public, or SSH (22) if using tunneling. So why is it better to use SSH tunneling for MySQL (and many other server applications)?
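
    For reference, a minimal sketch of what such a tunnel looks like from a client program (the hostnames, users and credentials below are placeholders, and it assumes the third-party sshtunnel and PyMySQL packages):

        # Forward a local port over SSH to the MySQL socket on the server,
        # then connect to MySQL through that forwarded port.
        from sshtunnel import SSHTunnelForwarder
        import pymysql

        with SSHTunnelForwarder(
            ("server.example.com", 22),            # hypothetical server
            ssh_username="deploy",
            ssh_pkey="~/.ssh/id_rsa",              # key auth: no password to brute-force
            remote_bind_address=("127.0.0.1", 3306),
        ) as tunnel:
            conn = pymysql.connect(
                host="127.0.0.1",
                port=tunnel.local_bind_port,       # the forwarded local port
                user="dbuser",
                password="dbpass",
            )

    Note that MySQL itself stays bound to localhost, so the only internet-facing service is SSH, which can disable password logins entirely in favor of keys.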

    Read the article

  • Why are they putting "processors" on hard drives?

    - by Celeritas
    What does it mean when they put a processor on a hard drive, how does it work, and what benefit does it have? I don't understand - the CPU is the processor and the hard drive transfers its contents to RAM. Do they have additional processors to preprocess the data somehow? Here are some examples: Western Digital WD Black WD1002FAEX 1TB ("Dual processor speed"); NETGEAR ReadyNAS 312 2-Bay Diskless Network Attached Storage ("Dual-core Intel 2.1GHz processor and 2GB on-board memory"). And routers now have processors too - why is that necessary? I guess it sort of makes sense - some logic needs to happen for the packets to be read in order to know which ports to send them out on, but why did old routers not need them? Example of a wireless router with a processor: "Dual-core processor"

    Read the article

  • Why is my ogg bigger than my m4a?

    - by acidzombie24
    I am using the following commands to encode an audio file to m4a & ogg formats: ffmpeg.exe -i 0123456789 -ab 192k out.m4a ffmpeg.exe -i 0123456789 -f wav - | oggenc2.exe - -r -q 6 -o out.ogg (0123456789 has no extension.) My m4a output is 14,608kB while my ogg output is 19,809kB. Why? AFAIK -q 6 is roughly 192kbps. So it should be about even. I could see one file being 1-3MB bigger than the other, but 5MB is pretty large. The m4a is almost 75% of the ogg! Why is this?
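
    A back-of-the-envelope check helps frame the question (the 10-minute duration below is hypothetical, not taken from the post): at a constant bitrate, size is simply bitrate times duration, whereas Vorbis -q 6 is a quality-targeted VBR mode whose ~192 kbps is only nominal, so the actual size tracks the audio content and can land well above that figure.

        # Rough size estimate for a hypothetical 10-minute file at a constant 192 kbps.
        duration_s = 10 * 60
        size_bytes = 192_000 * duration_s // 8     # bits/s * seconds / 8 = bytes
        print(size_bytes // 1024, "kB")            # ~14,062 kB at exactly 192 kbps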

    Read the article

  • Why is the good append syntax so ugly, asks Python newbie

    - by Cawas
    Now, following my series of "Python newbie questions" and based on another question: go to http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables and scroll down to "Default Parameter Values". There you can find the following:

        def bad_append(new_item, a_list=[]):
            a_list.append(new_item)
            return a_list

        def good_append(new_item, a_list=None):
            if a_list is None:
                a_list = []
            a_list.append(new_item)
            return a_list

    So, the question here is: why is the "good" syntax for a known issue as ugly as that, in a programming language that promotes "elegant syntax" and "easy-to-use"? Why not just something in the definition itself that marks the argument name as attached to a "localized" mutable object, like:

        def better_append(new_item, a_list=[].local):
            a_list.append(new_item)
            return a_list

    I'm sure there would be a better way to do this syntax, but I'm also almost positive there's a good reason why it hasn't been done. So, does anyone happen to know why?
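
    For anyone puzzled why the first version is considered "bad", a quick demonstration (not part of the original post): the default list is created once, at function definition time, and is then shared across calls.

        def bad_append(new_item, a_list=[]):
            a_list.append(new_item)
            return a_list

        print(bad_append(1))   # [1]
        print(bad_append(2))   # [1, 2]  <- the same list object as the first call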

    Read the article

  • Why do browsers have so many possible exploits?

    - by Beau Martínez
    When browsing I am occasionally given warnings about pages that host malware "that could damage my computer". I am seriously perplexed as to why, in 2010, browsers still have exploitable flaws and can be cracked. My question is "Why?". I'm assuming it's because of the quick development that occurred during the browser wars and was insufficiently tested, but I'm unsure. Surely WebKit would have patched all the issues in KHTML, or Gecko sorted out the flaws in Netscape's engine, and the IE coders sorted through their codebase to eliminate possible flaws? (Somewhat related: http://superuser.com/questions/117770/which-browser-is-the-most-secure-research-and-practically-based.)

    Read the article

  • Why are DELL PowerConnect and Juniper so rare? Why do enterprises stick with Cisco?

    - by Kedare
    Hello! I have a little question. I'm currently studying IT in France, and when looking for alternatives to the very [...] very expensive Cisco equipment, I've found Juniper and DELL PowerConnect pretty attractive on features and price, but I rarely see anything other than the classic Cisco/Linksys, HP ProCurve and Netgear. Why are those switches so rare to find? They look really great, but I've never seen any Juniper or PowerConnect in use. Why do enterprises stick with the expensive Cisco gear? I've tried to find out how to buy both: it's quite easy with PowerConnect, everything is on the DELL website, but it looks like it's very hard to find Juniper equipment in France :( Thank you!

    Read the article

  • Why use NoSQL over Materialized Views?

    - by JustinT
    There has been a lot of talk recently about NoSQL. The #1 reason I hear people give for using NoSQL is that they start to de-normalize their DBMS data so much, to increase performance, that they end up with just one table with all of their data within that single table. With materialized views, however, you can keep your data normalized, yet have it stored as a single-table view for the same reasons you'd use NoSQL. As such, why would someone use NoSQL over materialized views?

    Read the article

  • Dijkstra's algorithm: why does it work? (not how)

    - by BeeBand
    I understand what Dijkstra's algorithm is, but I don't understand why it works. When selecting the next vertex to examine, why does Dijkstra's algorithm select the one with the smallest weight? Why not just select a vertex arbitrarily, since the algorithm visits all vertices anyway?
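
    For context, a minimal sketch of the algorithm (the adjacency-list input format here is illustrative), with the property in question spelled out in a comment:

        import heapq

        def dijkstra(graph, source):
            """graph: {vertex: [(neighbor, weight), ...]}, weights non-negative."""
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)   # the smallest tentative distance
                if d > dist.get(u, float("inf")):
                    continue                 # stale queue entry, skip it
                # With non-negative weights, every other frontier vertex is at
                # least d away, so no future path can reach u more cheaply:
                # dist[u] is now final. An arbitrarily chosen vertex carries
                # no such guarantee - its distance could still be improved.
                for v, w in graph.get(u, []):
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (dist[v], v))
            return dist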

    Read the article

  • Why use Django on Google App Engine?

    - by Travis Bradshaw
    When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google. To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skillset in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project, and our existing experience is with TurboGears, not Django. It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack. Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and needed a platform to target for the exodus. I'd be extremely grateful for help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE is also valuable to me. Thanks in advance for your time!

    Read the article

  • Why doesn't git commit -a add new files?

    - by splintor
    I'm a bit new to git, and I fail to understand why git commit -a only stages changed and deleted files, but not new files. Can anyone explain why it is like this, and why there is no other commit flag to enable adding new files and committing in one command? BTW, hg commit -A adds both new and deleted files to the commit.

    Read the article

  • Why does my SQL Server use AWE memory? And why is this not visible in RAMMap?

    - by Marnix Klooster
    We have a Windows Server 2008 R2 (64-bit) 8GB server where, according to Sysinternals RAMMap, 2GB of memory is allocated using AWE. As far as I understand, this means that these pages stay in physical memory and are never paged out, which causes other apps to be pushed out of physical memory. In RAMMap, on the Physical Pages tab, the Process column is empty for all of the AWE pages. We run SQL Server on that box, but (through SQL Server Management Studio, under Server Properties - Memory, under Server memory options) it says it is configured not to use AWE. However, when stopping SQL Server, the AWE pages are suddenly gone, so it really is the culprit. So I have three questions: Why does RAMMap not know/show that a SQL Server process is responsible for that AWE memory? Why does SQL Server Management Studio say that AWE memory is not used? How do we configure SQL Server to really not use AWE memory?

    Read the article

  • Why Do You Use Delphi?

    - by lkessler
    Nick Bradbury (the author of HomeSite, TopStyle and FeedDemon) just posted a fascinating explanation of why he uses Delphi: http://nick.typepad.com/blog/2009/07/why-i-use-delphi.html I'd like to know if there are other reasons. Why do you use Delphi? (I'm making this community wiki from the outset. I'm interested in hearing your answers, not in points.)

    Read the article

  • Why does DNS work the way it does?

    - by sabof
    This is a Canonical Question about DNS (Domain Name Service). If my understanding of the DNS system is correct, the .com registry holds a table that maps domains (example.com) to DNS servers. What is the advantage? Why not map directly to an IP address? If the only record that needs to change when I am configuring a DNS server to point to a different IP address is located at the DNS server, why isn't the process instant? If the only reason for the delay is DNS caches, is it possible to bypass them, so I can see what is happening in real time?
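
    On the last point, a rough sketch (using the third-party dnspython package; example.com stands in for a real zone) of bypassing intermediate caches by asking the zone's authoritative server directly:

        import dns.resolver

        # Look up the zone's delegated nameservers, then one of their addresses.
        ns_records = dns.resolver.resolve("example.com", "NS")
        auth_host = str(ns_records[0].target)
        auth_ip = str(dns.resolver.resolve(auth_host, "A")[0])

        # Query that authoritative server directly instead of the default
        # (caching) resolver, so zone changes are visible immediately.
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [auth_ip]
        for rr in r.resolve("www.example.com", "A"):
            print(rr)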

    Read the article

  • Value objects in DDD - Why immutable?

    - by Hobbes
    I don't get why value objects in DDD should be immutable, nor do I see how this is easily done. (I'm focusing on C# and Entity Framework, if that matters.) For example, let's consider the classic Address value object. If you needed to change "123 Main St" to "123 Main Street", why should I need to construct a whole new object instead of saying myCustomer.Address.AddressLine1 = "123 Main Street"? (Even if Entity Framework supported structs, this would still be a problem, wouldn't it?) I understand (I think) the idea that value objects don't have an identity and are part of a domain object, but can someone explain why immutability is a Good Thing? EDIT: My final question here really should be "Can someone explain why immutability is a Good Thing as applied to Value Objects?" Sorry for the confusion! EDIT: To clarify, I am not asking about CLR value types (vs reference types). I'm asking about the higher-level DDD concept of Value Objects. For example, here is a hack-ish way to implement immutable value types for Entity Framework: http://rogeralsing.com/2009/05/21/entity-framework-4-immutable-value-objects. Basically, he just makes all setters private. Why go through the trouble of doing this?
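
    Though the question is asked in C# terms, the idea is language-independent; here is a minimal Python sketch (names are illustrative, not from the post) of the construct-a-new-value idiom the poster is asking about:

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)        # immutable: assigning to a field raises
        class Address:
            line1: str
            city: str

        home = Address("123 Main St", "Springfield")
        # home.line1 = "123 Main Street"   # would raise FrozenInstanceError

        # "Changing" a value object means building a new one, so anything
        # else holding a reference to the old value cannot be surprised.
        fixed = replace(home, line1="123 Main Street")

        # Value semantics: compared by contents, not identity.
        assert fixed == Address("123 Main Street", "Springfield")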

    Read the article

  • Why does .NET not allow cross-thread operations?

    - by RHaguiuda
    This question is not about what a cross-thread operation is and how to avoid it, but about why the internal mechanics of the .NET Framework do not allow a cross-thread operation. I can't understand why a SerialPort DataReceived event cannot update a simple text box on my form, and why, using delegates, this is possible.

    Read the article
