Search Results

Search found 4940 results on 198 pages for 'understanding'.


  • Understanding the nop byte(s)

    - by Cole Johnson
    OK, so I was reading through the AMD64 manuals and, knowing that nop is really an xchg eax, eax, I looked at xchg and found something interesting: it seems a byte can be encoded into the instruction to specify the registers (apologies, I'm on my iPod): picture. So what I am wondering is: how does the processor know whether there is a byte afterwards to work with, or does that extra register have to be rAX, causing it to actually still be the one byte 0x90?
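    One way to see the two encodings side by side is to disassemble a few raw bytes. The opcode itself tells the decoder whether a ModRM byte follows: the short xchg-with-rAX form (90+rd) encodes the register in the opcode, so 0x90 needs no extra byte, while the general form (opcode 87) carries a ModRM byte to name both registers. A minimal sketch, assuming binutils' objdump is available:

        printf '\x90\x91\x87\xcb' > /tmp/bytes.bin
        objdump -D -b binary -m i386 /tmp/bytes.bin
        # 90    : nop (the short xchg-with-eax form; register encoded in the opcode)
        # 91    : xchg of eax and ecx (90+rd; still no ModRM byte needed)
        # 87 cb : xchg of ebx and ecx (general form: opcode plus a ModRM byte)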

    Read the article

  • Understanding tcptraceroute versus http response

    - by kojiro
    I'm debugging a web server that has a very high wait time before responding. The server itself is quite fast and has no load, so I strongly suspect a network problem. Basically, I make a web request:

        wget -O/dev/null http://hostname/
        --2013-10-18 11:03:08--  http://hostname/
        Resolving hostname... 10.9.211.129
        Connecting to hostname|10.9.211.129|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: ‘/dev/null’
        2013-10-18 11:04:11 (88.0 KB/s) - ‘/dev/null’ saved [13641]

    So you see it took about a minute to give me the page, but it does give it to me with a 200 response. So I try a tcptraceroute to see what's up:

        $ sudo tcptraceroute hostname 80
        Password:
        Selected device en2, address 192.168.113.74, port 54699 for outgoing packets
        Tracing the path to hostname (10.9.211.129) on TCP port 80 (http), 30 hops max
         1  192.168.113.1  0.842 ms  2.216 ms  2.130 ms
         2  10.141.12.77  0.707 ms  0.767 ms  0.738 ms
         3  10.141.12.33  1.227 ms  1.012 ms  1.120 ms
         4  10.141.3.107  0.372 ms  0.305 ms  0.368 ms
         5  12.112.4.41  6.688 ms  6.514 ms  6.467 ms
         6  cr84.phlpa.ip.att.net (12.122.107.214)  19.892 ms  18.814 ms  15.804 ms
         7  cr2.phlpa.ip.att.net (12.122.107.117)  17.554 ms  15.693 ms  16.122 ms
         8  cr1.wswdc.ip.att.net (12.122.4.54)  15.838 ms  15.353 ms  15.511 ms
         9  cr83.wswdc.ip.att.net (12.123.10.110)  17.451 ms  15.183 ms  16.198 ms
        10  12.84.5.93  9.982 ms  9.817 ms  9.784 ms
        11  12.84.5.94  14.587 ms  14.301 ms  14.238 ms
        12  10.141.3.209  13.870 ms  13.845 ms  13.696 ms
        13  * * *
        …
        30  * * *

    I tried it again with 100 hops, just to be sure; the packets never get there. So how is it that the server does respond to requests via http, even after a minute? Shouldn't all requests just die? I'm not sure how to proceed debugging why this server is slow (as opposed to why it responds at all).
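    A dead traceroute past hop 12 does not by itself mean the path is down: hops often drop the TTL-expired probes while forwarding real traffic fine, and since the TCP connection plainly completes, the path works. A useful next step is to split the minute into phases; this is a sketch using curl's timing variables (hostname as in the question):

        curl -o /dev/null -s http://hostname/ -w \
            'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n'
        # if connect is fast but first byte takes ~60s, the wait is inside the
        # server (or something in front of it), not in the network path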

    Read the article

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware guests on either NetApp or 3Par storage. According to Red Hat's documentation we should choose the virtual-guest profile. What it does can be seen here: tuned.conf. We are changing the IO scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit I am not sure why they are increasing vm.dirty_ratio and kernel.sched_min_granularity_ns. As far as I have understood, increasing vm.dirty_ratio to 40% means that for a server with 20 GB RAM, 8 GB can be dirty at any given time unless vm.dirty_writeback_centisecs is hit first. And while flushing these 8 GB, all IO for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio would probably mean higher write performance at peaks, as we now have a larger cache, but then again when the cache fills, IO will be blocked for a considerably longer time (several seconds). The other question is why they are increasing sched_min_granularity_ns. If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning that running tasks get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?
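    For anyone wanting to verify on a test box what the profile actually changes before the rollout, a minimal sketch with the tuned tooling (profile and sysctl names as in RHEL's tuned package; the disk name sda is an assumption):

        tuned-adm list                        # profiles shipped with tuned
        tuned-adm profile virtual-guest       # apply the recommended profile
        sysctl vm.dirty_ratio kernel.sched_min_granularity_ns   # confirm the new values
        cat /sys/block/sda/queue/scheduler    # [noop] should now be selected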

    Read the article

  • Understanding how IE's SmartScreen works

    - by Kevin Donn
    Today I downloaded an update to our mail server on my dev machine using IE9 on Win7 Pro. I directed IE to save the file on our server's shared drive so I could install it later. When the download finished, IE showed a red banner at the bottom and said that ".exe is not commonly downloaded and could harm your computer." There were three buttons: "Delete", "Actions", and "View downloads". I selected "Actions" just because I had never seen this before. It showed a "SmartScreen Filter" dialog giving basically three choices: "Don't run this program (recommended)", "Delete program", and "Run anyway". I just canceled the dialog because I didn't want to run it in the first place; I just wanted to download it so I could run it later on the server. So when I did try to run it, it would blow up immediately saying, "Setup was unable to create the directory - Error 5: Access is denied." I tried unblocking the file, "Run as Administrator" even though I already was Administrator, turning off UAC, etc. Cutting to the chase, I finally downloaded the file again, ran WinMerge on the two, and it showed they were identical, except the new one ran fine. I went back to my dev machine, downloaded the file through Firefox, and then ran it on the server, again fine. But when I tried again through IE, SmartScreen again showed its red banner and somehow clobbered the file, even though it was stored on another machine and WinMerge can't tell the difference between it and a good file. I've looked around on the web for how SmartScreen works, but the results all give user-level descriptions of it. What I want to know is: what does it do to that file to make it unrunnable on another machine? Thanks

    Read the article

  • Understanding mount -o bind

    - by Ionut
    A few questions after the following commands:

        mount -o bind /new_disk/home/user/ /home/user/
        mount -o bind --no-mtab /new_disk/home/user/ /home/user/

    What is the difference between the two commands, other than "Mount without writing in /etc/mtab. This is necessary for example when /etc is on a read-only filesystem."? What is the difference between mount -o bind and mount --bind, if there is one? Let's suppose I don't know there is a partition mounted using -o bind --no-mtab: where can I find out whether any mount point uses bind? The only way I can detect this is grep user /proc/mounts, but in that line there is no info about bind. Thank you.
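    A sketch of two places that reveal bind mounts without relying on /etc/mtab (both come from util-linux and the kernel, so they work even when mount was run with --no-mtab):

        findmnt                      # binds show SOURCE like /dev/sda1[/new_disk/home/user]
        cat /proc/self/mountinfo     # the 4th field is the bound subtree; "/" means a normal mount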

    Read the article

  • Understanding !d command in sed with respect to saves

    - by richardh
    I have a directory of tab-delimited text files, and some have comments in the first few lines that I would like to delete. I know that the first good line starts with "Mark", so I can use /^Mark/,$!d to delete these comments. After this deletion I make several other replacements in the (new) first line, which holds the variable names. My question is: why do I have to save sed's output to a file to get my script to work? I understand that if a line is deleted, the output doesn't proceed downstream because there is no output. But if I don't delete (i.e., !d), then why do I have to save to a file? Thanks! Here is my shell script. (I'm a sed newbie, so any other feedback is also appreciated.)

        #!/bin/bash
        for file in *.txt; do
            mv $file $file.old1
            sed -e '/^Mark/,$!d' $file.old1 > $file.old2
            sed -e '1s/\([Ss]\)hareholder/\1hrhldr/g' \
                -e '1s/\([Ii]\)mmediate/\1mmdt/g' \
                -e '1s/\([Nn]\)umber/\1o/g' \
                -e '1s/\([Cc]\)ompany/\1o/g' \
                -e '1s/\([Ii]\)nformation/\1nfo/g' \
                -e '1s/\([Pp]\)ercentage/\1ct/g' \
                -e '1s/\([Dd]\)omestic/\1om/g' \
                -e '1s/\([Gg]\)lobal/\1lbl/g' \
                -e '1s/\([Cc]\)ountry/\1ntry/g' \
                -e '1s/\([Ss]\)ource/\1rc/g' \
                -e '1s/\([Oo]\)wnership/\1wnrshp/g' \
                -e '1s/\([Uu]\)ltimate/\1ltmt/g' \
                -e '1s/\([Ii]\)ncorporation/\1ncorp/g' \
                -e '1s/\([Tt]\)otal/\1ot/g' \
                -e '1s/\([Dd]\)irect/\1ir/g' \
                $file.old2 > $file
            rm -f $file.old*
        done
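    One observation on the temp file, offered as a sketch rather than a definitive fix: the second pass's 1s addresses count input lines, so the two programs cannot simply be merged into one sed invocation (line 1 there would be a comment that d is about to remove). A pipe, however, gives the second sed a fresh input whose line 1 is the surviving header, so $file.old2 is not needed:

        # replaces the two lines that create and then read $file.old2
        sed '/^Mark/,$!d' "$file.old1" |
            sed -e '1s/\([Ss]\)hareholder/\1hrhldr/g' \
                -e '1s/\([Ii]\)mmediate/\1mmdt/g' > "$file"   # ...plus the rest of the -e list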

    Read the article

  • Understanding why a 450W power supply is destroying my ATX motherboards but not mATX

    - by T. Webster
    Maybe I'm missing something really obvious here, but after going through 3 new motherboards in trying to build my new machine, I'm finally certain that the 450W power supply I bought has been destroying them. It seems like it should be simple: just connect the connectors where they fit and switch the thing on. But today, after hooking it to my ASUS Sabertooth 990FX TUF Series motherboard and switching it on, I could see and smell the smoke coming out of the motherboard. What I don't get is why I can hook this same 450W power supply up to my mATX motherboard and not run into the same problem. What am I missing here? Where on the box or in the manual is this explained? (I have 8x2 GB RAM, an AMD FX-4100 3.6 GHz CPU, one 512 MB ATI graphics card, and one WD 7200 RPM SATA hard drive.)

    Read the article

  • Understanding ESXi and Memory Usage

    - by John
    Hi, I am currently testing VMware ESXi on a test machine. My host machine has 4 GB of RAM. I have three guests, and each is assigned a memory limit of 1 GB (with only 512 MB reserved). The host summary screen shows a memory capacity of 4082.55 MB and a usage of 2828 MB with two guests running. This seems to make sense: a gig for each of the two running VMs plus an overhead for the host. 800 MB seems high for overhead, but that is still reasonable. But on the Resource Allocation screen I see a memory capacity of 2356 MB and an available capacity of 596 MB. Under the configuration tab's memory link I see a physical total of 4082.5 MB, System of 531.5 MB and VM of 3551.0 MB. I have only allocated my VMs a gig each, yet with two VMs running they are taking up almost two times the amount of RAM allocated. Why is this, and why does the Resource Allocation screen short-change me so much?

    Read the article

  • Understanding my site's DNS records

    - by DaveM
    Firstly, apologies for using the word 'pointage'; this is the word my French domain registrar uses, so I may have used the wrong term. OK, I would like to better understand what is going on in the 'pointage' record on my domain registrar's site. For my (currently empty) web site it reports the following details:

        Type : Host              : Destination
        A    : www.mydomain.org  : 62.210.176.146
        A    : mail.mydomain.org : 84.246.225.176
        Mx   : .mydomain.org     : mail.mydomain.org

    I think I understand the MX record: that simply relays anything on to the mail.mydomain.org location. However, why are the destinations for the www and mail hosts different? Even more confusing (for me) is the fact that if I ping either www.mydomain.org or mail.mydomain.org, the ping returns a different IP address. This IP address is consistent with that of my server (i.e. 92.39.247.92). So what exactly is going on? I'm sure I could find the information on the web; I've read a few things on the debianhelp site regarding DNS records, and they seem to suggest that the record should be a reverse lookup, but it certainly isn't the reverse of my server's IP. I don't know what I should be looking for, so links to docs and search terms for Google will be happily accepted (even though they go against the grain of SO answers to questions). Thanks in advance. David.

    PS: I should add that everything seems to work just fine, and I've just discovered this part of the management page of my registrar.

    Edit: addition of DNS records and ping results. From what I've read there should really only be a single 'A' record, so has something gone wrong? Should I change it (remove the extras and then just point www.facilitee.org and mail.facilitee.org at .facilitee.org)? Here is the DNS record:

        A  www.facilitee.org     -> 92.39.247.92
        A  .facilitee.org        -> 92.39.247.92
        A  mail.facilitee.org    -> 92.39.247.92
        A  webmail.facilitee.org -> 92.39.247.92
        MX .facilitee.org        -> mail.facilitee.org

    Ping results:

        ~$ ping www.facilitee.org
        PING www.facilitee.org (92.39.247.92) 56(84) bytes of data.
        64 bytes from vps4576-cloud.dns26.com (92.39.247.92):
        ~$ ping mail.facilitee.org
        PING mail.facilitee.org (92.39.247.92) 56(84) bytes of data.
        64 bytes from vps4576-cloud.dns26.com (92.39.247.92):

    So the DNS and the ping correspond, but the 'pointage' doesn't. How can I get a report of the pointage records other than from my registrar?
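    To answer the last part independently of the registrar's panel, the records can be pulled straight out of DNS; a sketch with dig (from the dnsutils/bind-utils package), where the authoritative server name is whatever the NS query reports:

        dig +noall +answer www.facilitee.org A
        dig +noall +answer facilitee.org MX
        dig +noall +answer facilitee.org NS      # find the authoritative servers
        # then ask one of them directly for the registrar-independent view:
        # dig +noall +answer facilitee.org ANY @<authoritative-ns>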

    Read the article

  • Understanding connection tracking in iptables

    - by Matt
    I'm after some clarification of the state/connection tracking in iptables. What is the difference between these rules?

        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    Is connection tracking turned on when a packet is first matched containing -m state --state BLA, or is connection tracking always on? Can/should connection state be used for fast matching like below? E.g. suppose this is some sort of router/firewall (no NAT).

        # Default DROP policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP

        # Drop invalid
        iptables -A FORWARD -m state --state INVALID -j DROP

        # Accept established,related connections
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow ssh through, track connection
        iptables -A FORWARD -p tcp --syn --dport 22 -m state --state NEW -j ACCEPT
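    One way to convince yourself that tracking runs regardless of which rule matches: the kernel maintains its connection table as soon as the conntrack machinery is loaded, and you can inspect it directly. A sketch (conntrack comes from the conntrack-tools package; the /proc path varies slightly between kernel versions):

        conntrack -L                    # list tracked connections with their states
        cat /proc/net/nf_conntrack      # the same table via procfs on newer kernels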

    Read the article

  • Understanding exim configuration files

    - by bobobobo
    So, I want to understand what's going on with this Exim configuration directory. In /etc/exim4, there's:

        exim4.conf.template
        update-exim4.conf.conf
        conf.d

    The conf.d has a mess of directories and files, and inside each are a bunch of if statements which I find really confusing. For example:

        maildir_home:
          debug_print = "T: maildir_home for $local_part@$domain"
          driver = appendfile
          .ifdef MAILDIR_HOME_MAILDIR_LOCATION
          directory = MAILDIR_HOME_MAILDIR_LOCATION
          .else
          directory = $home/Maildir
          .endif
          .ifdef MAILDIR_HOME_CREATE_DIRECTORY
          create_directory
          .endif
          .ifdef MAILDIR_HOME_CREATE_FILE

    My questions are: where do the CAPS VARIABLES get defined? How can I change them? And why are there so many if statements in these configuration files?
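    Those CAPS names are Exim macros, set by Debian's update-exim4.conf machinery rather than by Exim itself. A sketch of one usual way to define a macro and rebuild the runtime config (the 000_localmacros filename is a local convention, not a file shipped by default):

        echo 'MAILDIR_HOME_CREATE_DIRECTORY = true' \
            >> /etc/exim4/conf.d/main/000_localmacros    # .ifdef only checks "defined"
        update-exim4.conf       # stitches conf.d/ and the template into the live config
        service exim4 reload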

    Read the article

  • NAT: understanding interconnection

    - by PITCHY
    I have 2 routers, A and B, interconnected with a serial connection, with the IPs 10.0.0.1/30 for A and 10.0.0.2/30 for B. On router A, NAT is activated with the pool 200.0.0.1 - 200.0.0.15/28. When going out through this router, I get an IP from the pool, for example 200.0.0.10. Knowing my new IP (200.0.0.10) is not on the same network as my destination interface (10.0.0.2), how can this work?

    Read the article

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd at 0777, but I am not comfortable with it being this way. So, at the advice of a Server Fault specialist, I had my hosting provider install ACL on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so) because all my dirs and files are owned by "abc", the group is "abc", and the 1st octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP is set to user "nobody", so when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet were a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-), then the server would also let the browser write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded; what is the user then? How can I use ACL to avoid setting the permission to 6 (rw-) or even 7 (rwx)? [not sure what execute does or means] I'm just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call me to tell me "permission denied"). OK, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
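    A minimal sketch of the ACL approach, assuming an uploads/ directory and the names from the question (abc owning everything, PHP running as nobody); note that on a directory, execute (x) means "may enter/traverse it", not "may run it":

        setfacl -m u:nobody:rwx uploads/        # PHP may enter and create files here
        setfacl -d -m u:nobody:rw- uploads/     # default ACL: new files arrive nobody-writable
        setfacl -m u:nobody:r-- config.php      # read-only for PHP elsewhere
        getfacl uploads/                        # verify; ls -l shows a "+" when an ACL is set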

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
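    A sketch that supports the speculation: ulimit -u (RLIMIT_NPROC) is checked against everything already running as your user, not just the children of this shell, so a realistic value is your current task count plus headroom:

        ps -u "$USER" --no-headers | wc -l      # processes you already own
        ps -u "$USER" -L --no-headers | wc -l   # kernel tasks including threads, which
                                                # is what the limit is compared against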

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I want to run the instance built from this AMI, where does the instance run: in their elastic cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions); how would I do that? It seems I have a poor understanding of the relation between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got this error:

        Client.InvalidParameterValue: The requested instance type's architecture (i386)
        does not match the architecture in the manifest for aki-fc37bacc (x86_64)

    My local computer is 32-bit, but I do not want to run the instance on the local computer; I want it on Amazon's servers.
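    Instances always run in AWS's cloud regardless of where the CLI is invoked, and the error is about the instance type requested, not the desktop's CPU: the AMI/AKI is x86_64, so the launch needs a 64-bit instance type. A hedged sketch with the classic ec2-api-tools (the AMI ID and keypair name are placeholders):

        ec2-run-instances ami-xxxxxxxx -t m1.large --region eu-west-1 -k my-keypair
        # -t m1.large selects a 64-bit type to match the x86_64 manifest;
        # --region launches elsewhere, using the names from ec2-describe-regions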

    Read the article

  • Understanding Netbook Partitions & UNR Installation

    - by Wesley
    Hi all, I have a Samsung N120 netbook (with upgraded 2 GB RAM). I'm just looking at the Disk Management right now (in Windows XP) and I'm trying to understand what partition holds what. There is "Local Disk (C:)" which is 40 GB, "RECOVERY" (no drive letter) which is 6 GB, and then "TEMP_PART01 (D:)" which is 103.05 GB. XP is installed on Local Disk (C:) and I've only used this hard drive for all my files, etc. Recovery is recovery... probably not removable anyways. Now, what bugs me is the TEMP_PART01 (D:) partition, which contains quite a bit of random junk, such as EULA text documents, an "external installer", UI Wrapper Resource DLLs, a "VC_RED" Windows Installer Package and a few more files. I have no clue what any of it means, but I'm assuming that this was probably stuff that could have been on the Local Disk (C:), along with the WINDOWS, Program Files, and Docs and Settings folders. So, how should I go about this? Should I have kept all my data on D: and left all OS-related files/folders on C:? Now, I want to install Ubuntu Netbook Remix. Question is, will this install within Windows, if I want to dual-boot it? If not, would I partition D: into two small chunks, on one of which I would install UNR? There are basically two questions in here, but it'd be great to get answers for both! Thanks in advance.

    Read the article

  • Understanding CNAME in displaydns

    - by dublintech
    On Windows, I do ipconfig /displaydns. One record is:

        na4.salesforce.com
        ----------------------------------------
        Record Name . . . . . : na4.salesforce.com
        Record Type . . . . . : 5
        Time To Live  . . . . : 8
        Data Length . . . . . : 8
        Section . . . . . . . : Answer
        CNAME Record  . . . . : na4-was.salesforce.com

    I see no IP for it. How does Windows resolve the IP for this, then? Note: there is no other entry for na4-was.salesforce.com. Thanks,
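    A sketch of chasing the same chain by hand; record type 5 is a CNAME, so the resolver repeats the query for the target name (and with an 8-second TTL, the final A record may simply have expired from the cache already):

        nslookup na4.salesforce.com       # shows the CNAME plus the address it leads to
        nslookup na4-was.salesforce.com   # resolve the canonical name directly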

    Read the article

  • Understanding RAM usage on Linux

    - by stebbo
    I'm completely new to Linux and I'm just trying to understand where all my RAM is going. I've got a pretty fresh install of Xubuntu running as a VMware guest, and I've given it 1.5 GB RAM to play with. After only running two apps, starting up Tomcat servers and also running Firefox, I've got hardly anything left: 160 MB according to free -m. Looking at the output from top, I see Java appearing twice, each stealing about half a gig of resident memory. Both Tomcat instances use the same JDK; I would have thought I'd only see Java there once. What's the story? I had a screenshot but unfortunately couldn't post it, being under 10 rep.

    Update. The free -m output requested:

                     total       used       free     shared    buffers     cached
        Mem:          1419       1380         39          0          8        111
        -/+ buffers/cache:       1259        160
        Swap:          509         68        441

    Top (coming)
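    Two readings that may help, offered as a sketch: the "-/+ buffers/cache" row is the honest free figure (page cache is reclaimable), and each Tomcat is its own JVM process, so java legitimately appears once per instance in top:

        free -m                        # 160 MB free once buffers/cache are discounted
        ps -C java -o pid,rss,cmd      # one row, with its own RSS, per Tomcat's JVM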

    Read the article

  • Understanding ServiceKnownType in WCF

    - by SLC
    I am having a little trouble understanding ServiceKnownType in WCF. Taken from this blog, the following code does not work:

        [DataContract(Namespace = "http://mycompany.com/")]
        public class Shape {...}

        [DataContract(Namespace = "http://mycompany.com/")]
        public class Circle : Shape {...}

        [ServiceContract]
        public interface IMyServer
        {
            [OperationContract]
            bool AddShape(Shape shape);
        }

        IMyServer client = new ChannelFactory<IMyServer>(binding, endPoint).CreateChannel();
        client.AddShape(new Circle());

    The reason it doesn't work is that you are trying to add a Circle, but the service contract only allows a Shape. You are supposed to do something with known types, but I am a bit confused about how that works. Since that code is in the service, why doesn't it know automatically that a Circle is derived from Shape? Additionally, what does ServiceKnownType actually do? When ServiceKnownType is put below the DataContract, apparently that makes it work. I am guessing it says: hey, this particular object type called Shape can also be a Circle. I am having trouble understanding why it would do it this way around, because if you add a new type like Square, you are going to have to add that too. Wouldn't it make sense, if it cannot infer it, to put the KnownType onto the Square rather than the Shape? So the Square says: hey, I am a Shape, and you don't have to fiddle with the Shape class?

    Read the article

  • Understanding how rpmbuild works

    - by ereOn
    Hi, for my work I have to create documentation on "How to create an RPM package on Red Hat 5". I'm used to Debian and its derivatives (Ubuntu, and so on) and thus to Debian packages (aka .deb files). It seems that the RPM logic is quite different from what I already know, and I am having some issues understanding the "RPM logic". From what I read, it seems that one needs to be root to create an RPM package. While I understand why root could be required to install a package, I still don't understand why elevated privileges should be needed to just create one. If I try to create an RPM package as a user, changing the buildroot, it fails on the %install step because I don't have permission to write files into /usr/bin. Fair enough, but... why does it want to copy my files into /usr/bin at this step?! I just want to create the package, not install it! I'm sure I'm missing something here. Is there anyone who could give me at least a basic understanding of how rpmbuild works and why? Thank you very much!
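    For what it's worth, a sketch of a rootless setup using the standard rpm macros (mypackage.spec is a placeholder): %install is supposed to copy into a scratch build root under your home directory, not the live /usr/bin, which a spec achieves by installing with DESTDIR or $RPM_BUILD_ROOT.

        mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
        echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
        rpmbuild -ba ~/rpmbuild/SPECS/mypackage.spec
        # in the spec's %install section:  make install DESTDIR=$RPM_BUILD_ROOT
        # a build that writes to /usr/bin is a spec missing the build root prefix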

    Read the article

  • Understanding OOP Principles in passing around objects/values

    - by Hans
    I'm not quite grokking a couple of things in OOP, and I'm going to use a fictional understanding of SO to see if I can get help understanding them. So, on this page we have a question. You can comment on the question. There are also answers. You can comment on the answers.

        Question
          - comment
          - comment
          - comment
        Answer
          - comment
        Answer
          - comment
          - comment
          - comment
        Answer
          - comment
          - comment

    So, I'm imagining a very high-level understanding of this type of system (in PHP, not .Net, as I am not yet familiar with .Net) would be like:

        $question = new Question;
        $question->load($this_question_id); // from the URL probably
        echo $question->getTitle();

    To load the answers, I imagine it's something like this ("A"):

        $answers = new Answers;
        $answers->loadFromQuestion($question->getID());
        // or
        $answers->loadFromQuestion($this_question_id);
        while ($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    Or, would you do ("B"):

        $answers->setQuestion($question); // inject the whole obj, so we have access to all the data and public methods in $question
        $answers->loadFromQuestion();     // the ID would be found via $this->question->getID() instead of from the argument passed in
        while ($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    I guess my problem is, I don't know when, or if, I should be passing in an entire object, and when I should just be passing in a value. Passing in the entire object gives me a lot of flexibility, but it's more memory and subject to change, I'd guess (like a property or method rename). If "A" style is better, why not just use a function? OOP seems pointless here. Thanks, Hans

    Read the article

  • High memory usage for dummies

    - by zaf
    I've just restarted my Firefox web browser again because it started stuttering and slowing down. This happens every other day due to (my understanding) excessive memory usage. I've noticed it takes 40 MB when it starts and then, by the time I notice slowdown, it goes to 1 GB, and my machine has nothing more to offer unless I close other applications. I'm trying to understand the technical reasons behind why it's such a difficult problem to solve. Mozilla has a page about high memory usage: http://support.mozilla.com/en-US/kb/High+memory+usage But I'm looking for a slightly more in-depth and satisfying explanation. Not super technical, but enough to give the issue more respect and please the crowd here. Some questions I'm already pondering (they could be silly, so take it easy): When I close all tabs, why doesn't the memory usage go all the way down? Why are there no limits on extensions/themes/plugins memory usage? Why does the memory usage increase if it's left open for long periods of time? Why are memory leaks so difficult to find and fix? App- and language-agnostic answers also much appreciated.

    Read the article

  • Understanding AddHandler and passing delegates and events

    - by Achilles
    I am using AddHandler to wire a function to the event of a control that I dynamically create:

        Public Sub BuildControl(EventHandler As System.Delegate)
            Dim objMyButton As New Button
            AddHandler objMyButton.Click, EventHandler
        End Sub

    This code is generating a run-time exception stating:

        Unable to cast object of type 'MyEventHandlerDelegate' to type 'System.EventHandler'

    What am I not understanding about System.Delegate, given that AddHandler takes an argument of type System.Delegate? What type does EventHandler need to be to cast to a type that AddHandler can accept? Thanks for your help!

    Read the article

  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems, and I think I have the fix, but I'm not certain my understanding is correct. In simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate date
        rectime time
        system  varchar(20)
        count   integer
        accum1  integer
        accum2  integer

    There are a lot more accumulators than that, but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected into the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count. Now we also have a view over that table, specified as follows:

        create view blah_v (
            recdate, rectime, system, count, accum1, accum2
        ) as select distinct
            recdate, rectime, system, count,
            value(case when count > 0 then accum1 / count end, 0),
            value(case when count > 0 then accum2 / count end, 0)
        from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish: you're preaching to the choir). We've noticed that the time taken by:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is actually loading all the rows in and sorting them so as to remove duplicates. That's fair enough; it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway. We've validated the problem since: if we create another view without the distinct, it performs at the same speed as the underlying table. I just wanted to confirm my understanding that a select distinct cannot have duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.

    Read the article

  • How does an SMTP server work? Need an understanding

    - by Rajeev
    Hi, I am looking for an understanding of how an SMTP server works in an environment. For example, suppose I wish to run only an SMTP server on Windows 2008 for a specific application, which runs on its own web server, app server and DB server. Do I need to register the domain in order to send emails from my domain, if I wish to send emails to some users from that SMTP server?
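    For a feel of what an SMTP server actually does, the protocol is simple enough to drive by hand; a sketch of a session (mail.example.com and the addresses are placeholders), which also shows why the domain matters: receiving servers judge you by the domain in MAIL FROM and by whether DNS says your host may send for it:

        telnet mail.example.com 25
        # then type, one line at a time:
        #   HELO app01.example.com
        #   MAIL FROM:<app@example.com>
        #   RCPT TO:<user@example.org>
        #   DATA
        #   Subject: test
        #
        #   Hello from the app server.
        #   .
        #   QUIT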

    Read the article
