Search Results

Search found 129 results on 6 pages for 'dup'.

Page 3/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
    Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single dimension, zero-indexed) have their own commands and IL syntax. One of their stranger properties is that they have had a kind of built-in covariance since long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need to briefly recap array covariance.

    SZ-array covariance

    To demonstrate, I'll tweak the classes I introduced in my previous posts:

        public class IncrementableClass {
            public int Value;
            public virtual void Increment(int incrementBy) { Value += incrementBy; }
        }

        public class IncrementableClassx2 : IncrementableClass {
            public override void Increment(int incrementBy) {
                base.Increment(incrementBy);
                base.Increment(incrementBy);
            }
        }

    In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever an IncrementableClass[] or object[] is required. When an SZ-array is used in this fashion, a run-time type check is performed when you try to insert an object into the array, to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time:

        IncrementableClass[] array = new IncrementableClassx2[1];
        array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException

    These checks are enforced by the various stelem* and ldelem* IL instructions, in such a way as to ensure you can't insert an IncrementableClass into an IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction.

    ldelema

    This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, its type argument must match the run-time array type exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element, using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and it maintained array type-safety within managed code. However, along came generics, and with them the constrained callvirt instruction. So, what happens when we combine array covariance and constrained callvirt?

        .method public static void CallIncrementArrayValue() {
            // IncrementableClassx2[] arr = new IncrementableClassx2[1]
            ldc.i4.1
            newarr IncrementableClassx2
            // arr[0] = new IncrementableClassx2();
            dup
            ldc.i4.0
            newobj instance void IncrementableClassx2::.ctor()
            stelem.ref
            // IncrementArrayValue<IncrementableClass>(arr, 0)
            // here, we're treating an IncrementableClassx2[] as IncrementableClass[]
            dup
            ldc.i4.0
            call void IncrementArrayValue<class IncrementableClass>(!!0[], int32)
            // ...
            ret
        }

        .method public static void IncrementArrayValue<(IncrementableClass) T>(!!T[] arr, int32 index) {
            // arr[index].Increment(1)
            ldarg.0
            ldarg.1
            ldelema !!T
            ldc.i4.1
            constrained. !!T
            callvirt instance void IIncrementable::Increment(int32)
            ret
        }

    And the result:

        Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array.
           at IncrementArrayValue[T](T[] arr, Int32 index)
           at CallIncrementArrayValue()

    Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], so the ldelema instruction fails, as it's expecting an IncrementableClass[].

    On features and feature conflicts

    What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this only fails at runtime, whereas compile-time safety is what generics were designed for!

    The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so it lets us take the address of an array element using a type covariant to the actual run-time type of the array:

        .method public static void IncrementArrayValue<(IncrementableClass) T>(!!T[] arr, int32 index) {
            // arr[index].Increment(1)
            ldarg.0
            ldarg.1
            readonly.
            ldelema !!T
            ldc.i4.1
            constrained. !!T
            callvirt instance void IIncrementable::Increment(int32)
            ret
        }

    But what about type safety? In return for ignoring the type check, the resulting controlled-mutability pointer can only be used in the following situations:

    • As the object parameter to the ldfld, ldflda, stfld, call and constrained callvirt instructions
    • As the pointer parameter to ldobj or ldind*
    • As the source parameter to cpobj

    In other words, the only operations allowed are those that read from the pointer; stind* and similar instructions that write through the pointer are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although it covers an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that compile-time type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.
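
    For a C#-level view of the same scenario, here is a minimal sketch (it assumes, as in the author's earlier posts, that IncrementableClass implements an IIncrementable interface; the Program wrapper is added for illustration). With an interface-constrained type parameter, the C# compiler needs the element's address for the constrained call and should emit the readonly. prefix on the ldelema, so the covariant call below completes without the exception the hand-written IL above provokes:

        using System;

        public interface IIncrementable { void Increment(int incrementBy); }

        public class IncrementableClass : IIncrementable {
            public int Value;
            public virtual void Increment(int incrementBy) { Value += incrementBy; }
        }

        public class IncrementableClassx2 : IncrementableClass {
            public override void Increment(int incrementBy) {
                base.Increment(incrementBy);
                base.Increment(incrementBy);
            }
        }

        public static class Program {
            // arr[index].Increment(1) needs the element's address, so the
            // compiler emits readonly. ldelema !!T followed by
            // constrained. !!T callvirt, the pattern discussed above.
            public static void IncrementArrayValue<T>(T[] arr, int index)
                where T : IIncrementable {
                arr[index].Increment(1);
            }

            public static void Main() {
                // array covariance: an IncrementableClassx2[] used as IncrementableClass[]
                IncrementableClass[] arr = new IncrementableClassx2[1];
                arr[0] = new IncrementableClassx2();
                IncrementArrayValue(arr, 0); // T is inferred as IncrementableClass
                Console.WriteLine(arr[0].Value); // prints 2 (the x2 override increments twice)
            }
        }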

    Read the article

  • Changing email application in Preferred Applications to GMail?

    - by grm
    I'm trying to change the Preferred Application for email. I have installed the package desktop-webmail, but there is no new option under System - Preferences - Preferred Applications as you would expect; in fact, there is only one option there, Evolution. According to this post it should be possible to set a custom application, but no such option is available. Is it possible to set up GMail as the preferred email app, so that File - Send by email works in GNOME apps? This seems to be a dup of another post here; the thing is that this works fine in 10.10, but in 11.04 this method no longer works. My post is about 11.04, and the question is still valid.

    Read the article

  • Ubuntu One Windows application only accessing gpg files

    - by Boomer Kuwanger
    I'm on a Windows 7 (64-bit) machine right now, and I have the Ubuntu One Windows application. I'm synced to my online account, and the folder I am accessing is deja-dup\My-desktop. When I tick the 'Sync Locally' checkbox and explore the folder, I am only able to see three files, of the form:

        duplicity-full.#######.manifest.gpg
        duplicity-full.#######.vol1.difftar.gpg
        duplicity-full-signatures.#######.sigtar.gpg

    How do I access the content of these files? I put them on a Linux server and decrypted and extracted them, but something is wrong. Note: I cannot use apt-get on the Linux server I'm using. Is there a way to access these files using the Ubuntu One software for Windows? Many thanks, Boomerkuwanger

    Read the article

  • Program and user data from an old HDD to a new installation on an SSD

    - by hans wurst
    I have tried everything for days; now I'm asking. I have an Ubuntu system on my old HDD, which is connected via USB to this system, and an SSD is built into my notebook. At the moment I am running an Ubuntu system from a USB stick. I have tried to clone my disk (changing the UUID, etc.), to transfer the data via Deja Dup, and much more. The result was nothing, or strange things. My idea is to copy the important data from the old system to the new one (home and whatever), but I am not allowed to do this. Is there somebody here who knows a tool that can do this, or who has another idea?

    Read the article

  • Will a rel=canonical link pointing to a 301 redirect pass less pagerank than one without a 301?

    - by tobek
    On this official Google page about canonical links it says: Can rel="canonical" be a redirect? Yes, you can specify a URL that redirects as a canonical URL. Google will then process the redirect as usual and try to index it. There is no mention that this might dilute the impact of the canonical link. However, Google has made clear elsewhere that 301 redirects do dilute PageRank - roughly as much as a link dilutes PageRank. Is that relevant here? I'm assuming the answer is "no" but I wanted to confirm. Relevant but not duplicate: Does Rel=Canonical Pass PR from Links or Just Fix Dup Content.

    Read the article

  • Is it possible to use Back in Time with Ubuntu One without storing the backup files locally?

    - by leousa
    I use Deja Dup as my backup tool. I love its simplicity and the fact that I can store my backup directly in my Ubuntu One account. Now, I really like the flexibility of Back in Time, but I cannot find a clean way to store my backup in Ubuntu One. Some threads suggest using the Ubuntu One folder in your system, and that works, but it also keeps a local copy of the backup on my system, and I do not want that. Any workaround for this? Thanks in advance

    Read the article

  • Is it possible to boot without mounting /home?

    - by Exeleration-G
    I want to back up my /home partition on /dev/sda6 using partclone, a command-line utility. To do so, I first have to unmount the partition that I want to back up. Most of the time this is easy, but /home is used by so many processes that it can't be unmounted without first killing all those processes. So, what I'm looking for is a way to boot Ubuntu without mounting /home, so I can back up the unmounted /dev/sda6 partition. Is that possible? To be clear, it would be nice if this special boot could be 'one-time-only', so I'm not looking for ways to change /etc/fstab so that /dev/sda6 won't be mounted; that would require me to change /etc/fstab twice each time just to make a backup. I'm aware of the fact that there are other backup solutions available, such as deja-dup. I'd like to use partclone, though.

    Read the article

  • Should a single-purpose utility app use a class?

    - by jmoreno
    When writing a small utility app that does just one thing, should that one thing be encapsulated in a separate class, or should it just be part of whatever class/module is used to start the application? That is, Main would consist of two or three lines calling the constructor and then the DoIt method, nothing else, as in the sketch below. Or should Main be the DoIt method, with whatever functions it needs added to the main class? I'm asking because I want to get some alternative perspectives, but couldn't find a similar question. If my google-fu is bad and there's a dup, please close.
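
    For illustration, a minimal C# sketch of the "separate class" shape described above; the names (Deduper, DoIt) are hypothetical placeholders, not part of the original question:

        using System;

        // The single task lives in one small class...
        class Deduper {
            private readonly string[] _args;

            public Deduper(string[] args) { _args = args; }

            public void DoIt() {
                Console.WriteLine("processing " + _args.Length + " argument(s)");
                // ... the utility's one job goes here ...
            }
        }

        // ...so Main consists of two or three lines, nothing else.
        class Program {
            static void Main(string[] args) {
                Deduper deduper = new Deduper(args);
                deduper.DoIt();
            }
        }

    The alternative the question describes would move DoIt and its helpers directly into Program and have Main do the work itself.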

    Read the article

  • FreeBSD 8.1 unstable network connection

    - by frankcheong
    I have three FreeBSD 8.1 servers running on three different pieces of hardware, with three different network adapters (bce, bge and igb). The network connection is unstable: when I scp some 10MB files, the transfers do not always complete successfully. I checked with my network admin, and he claims the problem is caused by the network driver, which cannot support the load; he tried pinging with a huge packet size (around 15k), and my server drops packets consistently at regular intervals. I think this claim may not be valid: the three servers use three different network adapters, and thus different network drivers, so it seems quite unlikely that the same problem would be caused by all three. Since then I have tried to tune performance by playing around with the /etc/sysctl.conf figures, with no luck:

        kern.ipc.somaxconn=1024
        kern.ipc.shmall=3276800
        kern.ipc.shmmax=1638400000
        # Security
        net.inet.ip.redirect=0
        net.inet.ip.sourceroute=0
        net.inet.ip.accept_sourceroute=0
        net.inet.icmp.maskrepl=0
        net.inet.icmp.log_redirect=0
        net.inet.icmp.drop_redirect=1
        net.inet.tcp.drop_synfin=1
        # Security
        net.inet.udp.blackhole=1
        net.inet.tcp.blackhole=2
        # Required by pf
        net.inet.ip.forwarding=1
        # Network performance tuning
        kern.ipc.maxsockbuf=16777216
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.recvbuf_max=16777216
        # Settings specifically for 1 or even 10 Gbps networks
        net.local.stream.sendspace=262144
        net.local.stream.recvspace=262144
        net.inet.tcp.local_slowstart_flightsize=10
        net.inet.tcp.nolocaltimewait=1
        net.inet.tcp.mssdflt=1460
        net.inet.tcp.sendbuf_auto=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.sendspace=262144
        net.inet.tcp.recvspace=262144
        net.inet.udp.recvspace=262144
        kern.ipc.maxsockbuf=16777216
        kern.ipc.nmbclusters=32768
        net.inet.tcp.delayed_ack=1
        net.inet.tcp.delacktime=100
        net.inet.tcp.slowstart_flightsize=179
        net.inet.tcp.inflight.enable=1
        net.inet.tcp.inflight.min=6144
        # Reduce the cache lifetime of slow-start connections
        net.inet.tcp.hostcache.expire=1

    Our network admin also claims to see quite a lot of interface up/down events in the Cisco switch log, while I cannot find any up/down messages in dmesg. I have further checked netstat -s but don't have a concrete idea:

    tcp: 133695291 packets sent 39408539 data packets (3358837321 bytes) 61868 data packets (89472844 bytes) retransmitted 24 data packets unnecessarily retransmitted 0 resends initiated by MTU discovery 50756141 ack-only packets (2148 delayed) 0 URG only packets 0 window probe packets 4372385 window update packets 39781869 control packets 134898031 packets received 72339403 acks (for 3357601899 bytes) 190712 duplicate acks 0 acks for unsent data 59339201 packets (3647021974 bytes) received in-sequence 114 completely duplicate packets (135202 bytes) 27 old duplicate packets 0 packets with some dup.
data (0 bytes duped) 42090 out-of-order packets (60817889 bytes) 0 packets (0 bytes) of data after window 0 window probes 3953896 window update packets 64181 packets received after close 0 discarded for bad checksums 0 discarded for bad header offset fields 0 discarded because packet too short 45192 discarded due to memory problems 19945391 connection requests 1323420 connection accepts 0 bad connection attempts 0 listen queue overflows 0 ignored RSTs in the windows 21133581 connections established (including accepts) 21268724 connections closed (including 32737 drops) 207874 connections updated cached RTT on close 207874 connections updated cached RTT variance on close 132439 connections updated cached ssthresh on close 42392 embryonic connections dropped 72339338 segments updated rtt (of 69477829 attempts) 390871 retransmit timeouts 0 connections dropped by rexmit timeout 0 persist timeouts 0 connections dropped by persist timeout 0 Connections (fin_wait_2) dropped because of timeout 13990 keepalive timeouts 2 keepalive probes sent 13988 connections dropped by keepalive 173044 correct ACK header predictions 36947371 correct data packet header predictions 1323420 syncache entries added 0 retransmitted 0 dupsyn 0 dropped 1323420 completed 0 bucket overflow 0 cache overflow 0 reset 0 stale 0 aborted 0 badack 0 unreach 0 zone failures 1323420 cookies sent 0 cookies received 1864 SACK recovery episodes 18005 segment rexmits in SACK recovery episodes 26066896 byte rexmits in SACK recovery episodes 147327 SACK options (SACK blocks) received 87473 SACK options (SACK blocks) sent 0 SACK scoreboard overflow 0 packets with ECN CE bit set 0 packets with ECN ECT(0) bit set 0 packets with ECN ECT(1) bit set 0 successful ECN handshakes 0 times ECN reduced the congestion window udp: 5141258 datagrams received 0 with incomplete header 0 with bad data length field 0 with bad checksum 1 with no checksum 0 dropped due to no socket 129616 broadcast/multicast datagrams undelivered 0 dropped due to full socket buffers 0 not for hashed pcb 5011642 delivered 5016050 datagrams output 0 times multicast source filter matched sctp: 0 input packets 0 datagrams 0 packets that had data 0 input SACK chunks 0 input DATA chunks 0 duplicate DATA chunks 0 input HB chunks 0 HB-ACK chunks 0 input ECNE chunks 0 input AUTH chunks 0 chunks missing AUTH 0 invalid HMAC ids received 0 invalid secret ids received 0 auth failed 0 fast path receives all one chunk 0 fast path multi-part data 0 output packets 0 output SACKs 0 output DATA chunks 0 retransmitted DATA chunks 0 fast retransmitted DATA chunks 0 FR's that happened more than once to same chunk 0 intput HB chunks 0 output ECNE chunks 0 output AUTH chunks 0 ip_output error counter Packet drop statistics: 0 from middle box 0 from end host 0 with data 0 non-data, non-endhost 0 non-endhost, bandwidth rep only 0 not enough for chunk header 0 not enough data to confirm 0 where process_chunk_drop said break 0 failed to find TSN 0 attempt reverse TSN lookup 0 e-host confirms zero-rwnd 0 midbox confirms no space 0 data did not match TSN 0 TSN's marked for Fast Retran Timeouts: 0 iterator timers fired 0 T3 data time outs 0 window probe (T3) timers fired 0 INIT timers fired 0 sack timers fired 0 shutdown timers fired 0 heartbeat timers fired 0 a cookie timeout fired 0 an endpoint changed its cookiesecret 0 PMTU timers fired 0 shutdown ack timers fired 0 shutdown guard timers fired 0 stream reset timers fired 0 early FR timers fired 0 an asconf timer fired 0 auto close timer fired 0 asoc 
free timers expired 0 inp free timers expired 0 packet shorter than header 0 checksum error 0 no endpoint for port 0 bad v-tag 0 bad SID 0 no memory 0 number of multiple FR in a RTT window 0 RFC813 allowed sending 0 RFC813 does not allow sending 0 times max burst prohibited sending 0 look ahead tells us no memory in interface 0 numbers of window probes sent 0 times an output error to clamp down on next user send 0 times sctp_senderrors were caused from a user 0 number of in data drops due to chunk limit reached 0 number of in data drops due to rwnd limit reached 0 times a ECN reduced the cwnd 0 used express lookup via vtag 0 collision in express lookup 0 times the sender ran dry of user data on primary 0 same for above 0 sacks the slow way 0 window update only sacks sent 0 sends with sinfo_flags !=0 0 unordered sends 0 sends with EOF flag set 0 sends with ABORT flag set 0 times protocol drain called 0 times we did a protocol drain 0 times recv was called with peek 0 cached chunks used 0 cached stream oq's used 0 unread messages abandonded by close 0 send burst avoidance, already max burst inflight to net 0 send cwnd full avoidance, already max burst inflight to net 0 number of map array over-runs via fwd-tsn's ip: 137814085 total packets received 0 bad header checksums 0 with size smaller than minimum 0 with data size < data length 0 with ip length > max ip packet size 0 with header length < data size 0 with data length < header length 0 with bad options 0 with incorrect version number 1200 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 300 packets reassembled ok 137813009 packets for this host 530 packets for unknown/unsupported protocol 0 packets forwarded (0 packets fast forwarded) 61 packets not forwardable 0 packets received for unknown multicast group 0 redirects sent 137234598 packets sent from this host 0 packets sent with fabricated ip header 685307 output packets dropped due to no bufs, etc. 
52 output packets discarded due to no route 300 output datagrams fragmented 1200 fragments created 0 datagrams that can't be fragmented 0 tunneling packets that can't find gif 0 datagrams with bad address in header icmp: 0 calls to icmp_error 0 errors not generated in response to an icmp message Output histogram: echo reply: 305 0 messages with bad code fields 0 messages less than the minimum length 0 messages with bad checksum 0 messages with bad length 0 multicast echo requests ignored 0 multicast timestamp requests ignored Input histogram: destination unreachable: 530 echo: 305 305 message responses generated 0 invalid return addresses 0 no return routes ICMP address mask responses are disabled igmp: 0 messages received 0 messages received with too few bytes 0 messages received with wrong TTL 0 messages received with bad checksum 0 V1/V2 membership queries received 0 V3 membership queries received 0 membership queries received with invalid field(s) 0 general queries received 0 group queries received 0 group-source queries received 0 group-source queries dropped 0 membership reports received 0 membership reports received with invalid field(s) 0 membership reports received for groups to which we belong 0 V3 reports received without Router Alert 0 membership reports sent arp: 376748 ARP requests sent 3207 ARP replies sent 245245 ARP requests received 80845 ARP replies received 326090 ARP packets received 267712 total packets dropped due to no ARP entry 108876 ARP entrys timed out 0 Duplicate IPs seen ip6: 2226633 total packets received 0 with size smaller than minimum 0 with data size < data length 0 with bad options 0 with incorrect version number 0 fragments received 0 fragments dropped (dup or out of space) 0 fragments dropped after timeout 0 fragments that exceeded limit 0 packets reassembled ok 2226633 packets for this host 0 packets forwarded 0 packets not forwardable 0 redirects sent 2226633 packets sent from this host 0 packets sent with fabricated ip header 0 output packets dropped due to no bufs, etc. 
8 output packets discarded due to no route 0 output datagrams fragmented 0 fragments created 0 datagrams that can't be fragmented 0 packets that violated scope rules 0 multicast packets which we don't join Input histogram: UDP: 2226633 Mbuf statistics: 962679 one mbuf 1263954 one ext mbuf 0 two or more ext mbuf 0 packets whose headers are not continuous 0 tunneling packets that can't find gif 0 packets discarded because of too many headers 0 failures of source address selection Source addresses selection rule applied: icmp6: 0 calls to icmp6_error 0 errors not generated in response to an icmp6 message 0 errors not generated because of rate limitation 0 messages with bad code fields 0 messages < minimum length 0 bad checksums 0 messages with bad length Histogram of error messages to be generated: 0 no route 0 administratively prohibited 0 beyond scope 0 address unreachable 0 port unreachable 0 packet too big 0 time exceed transit 0 time exceed reassembly 0 erroneous header field 0 unrecognized next header 0 unrecognized option 0 redirect 0 unknown 0 message responses generated 0 messages with too many ND options 0 messages with bad ND options 0 bad neighbor solicitation messages 0 bad neighbor advertisement messages 0 bad router solicitation messages 0 bad router advertisement messages 0 bad redirect messages 0 path MTU changes rip6: 0 messages received 0 checksum calculations on inbound 0 messages with bad checksum 0 messages dropped due to no socket 0 multicast messages dropped due to no socket 0 messages dropped due to full socket buffers 0 delivered 0 datagrams output netstat -m 516/5124/5640 mbufs in use (current/cache/total) 512/1634/2146/32768 mbuf clusters in use (current/cache/total/max) 512/1536 mbuf+clusters out of packet secondary zone in use (current/cache) 0/1303/1303/12800 4k (page size) jumbo clusters in use (current/cache/total/max) 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max) 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max) 1153K/9761K/10914K bytes allocated to network (current/cache/total) 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters) 0/0/0 requests for jumbo clusters denied (4k/9k/16k) 0/8/6656 sfbufs in use (current/peak/max) 0 requests for sfbufs denied 0 requests for sfbufs delayed 0 requests for I/O initiated by sendfile 0 calls to protocol drain routines Anyone got an idea what might be the possible cause?

    Read the article

  • How to force reload all vendor/plugins in rails 2.3 (development mode)

    - by tsdbrown
    We have an application with an app/model that references another model stored in a plugin. When the app/model level is reloaded on the second and subsequent requests, and it relies on our model in vendor/plugins/... (which stays loaded), it fails ("can't dup NilClass"). We've tried setting config.reload_plugins = true in development.rb, but this doesn't seem to do it. Does anybody know a way to handle this?

    Read the article

  • Reading a string in TASM x86 assembly

    - by I_S_W
    Hi all, I am trying to read a string from the user in TASM assembly. I know I need a buffer to hold the input, the max length and the actual length, but I seem to have forgotten exactly how we declare a buffer. My attempt was something like this:

        Buffer  db 80        ; max length
                db ?         ; actual length
                db 80 dup(0) ; I think here is my problem, but I can't remember the right format

    Thanks in advance

    Read the article

  • Creating Haskell instance declarations

    - by btl
    Hello, complete noob to Haskell here with probably an even noobier question. I'm trying to get ghci output working and am stuck on instance declarations. How could I declare an instance for (Show (Stack -> Stack)), given:

        data Cmd = LD Int | ADD | MULT | DUP deriving Show
        type Prog = [Cmd]
        type Stack = [Int]
        type D = Stack -> Stack

    I've been trying to create a declaration like:

        instance Show D where
          show = Stack

    but all my attempts have resulted in illegal instance declarations. Any help and/or references much appreciated!

    Read the article

  • Replace all & that are not an HTML entity using C#

    - by Eddie
    Basically a dup of this question using PHP, but I need it for C#. I need to be able to replace any & that is not already part of an HTML entity (e.g. &amp;) before outputting to screen. I was thinking of a regex, but I'm not sure if .NET has something built in that will do this.
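
    For what it's worth, a sketch of one way to do this with .NET's built-in Regex class, using a negative lookahead (the pattern below treats &name;, &#nnn; and &#xhh; forms as entities; that pattern is an assumption you may want to tighten):

        using System;
        using System.Text.RegularExpressions;

        static class AmpersandEscaper {
            // Match '&' only when it does not already begin an entity
            // such as &amp;, &#38; or &#x26;.
            static readonly Regex BareAmpersand =
                new Regex(@"&(?!#\d+;|#x[0-9a-fA-F]+;|[a-zA-Z][a-zA-Z0-9]*;)");

            public static string Escape(string input) {
                return BareAmpersand.Replace(input, "&amp;");
            }
        }

        class Demo {
            static void Main() {
                // "Fish & Chips &amp; Tea" -> "Fish &amp; Chips &amp; Tea"
                Console.WriteLine(AmpersandEscaper.Escape("Fish & Chips &amp; Tea"));
            }
        }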

    Read the article

  • Object of type "X" cannot be converted to object of type "X"

    - by Benjol
    (Can't believe this hasn't already been asked, but I can't find a dup) In Visual Studio with lots of projects, when I first open the solution, I sometimes get the warning Object of type "X" cannot be converted to object of type "X". Generally rebuilding seems to make it go away, but does anyone know what this is caused by, and how to avoid it? UPDATE I read somewhere that deleting all your resx files and rebuilding can help. I unthinkingly tried this. Not a good idea...

    Read the article

  • Add console.profile statements to JavaScript/jQuery code on the fly.

    - by novogeek
    Hi folks, we have a thick-client app using jQuery heavily, and we want to profile the performance of the code using Firebug's console.profile API. The problem is, I don't want to change the code to write the profile statements. Take this example:

        var search = function(){
            this.init = function(){
                console.log('init');
            }
            this.ajax = function(){
                console.log('ajax');
                // make an ajax call using $.ajax and do some DOM manipulations here..
            }
            this.cache = function(){
                console.log('cache');
            }
        }
        var instance = new search();
        instance.ajax();

    I want to profile my instance.ajax method, but I don't want to add profile statements in the code, as that makes it difficult to maintain. I'm trying to override the methods using closures, like this: http://www.novogeek.com/post/2010/02/27/Overriding-jQueryJavaScript-functions-using-closures.aspx, but I am not quite sure how to achieve it. Any pointers on this? I think this would help many big projects profile their code easily, without a big change in the code. Here is the idea; just run the code below in the Firebug console to see what I'm trying to achieve:

        var search = function(){
            this.init = function(){
                console.log('init');
            }
            this.ajax = function(){
                console.log('ajax');
                // make an ajax call using $.ajax and do some DOM manipulations here..
            }
            this.cache = function(){
                console.log('cache');
            }
        }
        var instance = new search();
        $.each(instance, function(functionName, functionBody){
            (function(){
                var dup = functionBody;
                functionBody = function(){
                    console.log('modifying the old function: ', functionName);
                    console.profile(functionName);
                    dup.apply(this, arguments);
                    console.profileEnd(functionName);
                }
            })();
            console.log(functionName, '::', functionBody());
        });

    Now what I need is this: if I call instance.ajax(), I want the new ajax() method to be called, along with the console.profile statements. I hope I'm clear with the requirement. Please improve the above code. Regards, Krishna, http://www.novogeek.com

    Read the article

  • Including a C header which declares a variable called "new"?

    - by StackedCrooked
    I'm trying to use the OpenCA library in a C++ application. However, when including the file pki_x509_data_st.h, the following code fragment is encountered:

        typedef struct pki_x509_callbacks_st {
            /* ---------------- Memory Management -------------------- */
            void * (*new)  (void);
            void   (*free) (void *x);
            void * (*dup)  (void *x);

    This won't compile because of the "new" pointer declaration. How can I make it work?

    Read the article

  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using ImageMagick and FFmpeg. I use ImageMagick to expand a single photo into 30fps video (ImageMagick also handles things like putting some text captions on the frames along the way). When I let FFmpeg digest it into a video, it clips along nicely on the color parts of the video, but when it gets to a black-and-white section it reports

        frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703

    and drops every frame of video until it hits something with color. As you can imagine, this results in entire photos being removed from the slideshow. Here is my latest dump:

        ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4
        ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers
          built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
          configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3
          libavutil      52. 12.100 / 52. 12.100
          libavcodec     54. 79.101 / 54. 79.101
          libavformat    54. 49.100 / 54. 49.100
          libavdevice    54.  3.102 / 54.  3.102
          libavfilter     3. 26.101 /  3. 26.101
          libswscale      2.  1.103 /  2.  1.103
          libswresample   0. 17.102 /  0. 17.102
          libpostproc    52.  2.100 / 52.  2.100
        Input #0, image2, from 'teststream/%06d.jpg':
          Duration: 00:12:02.80, start: 0.000000, bitrate: N/A
          Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
        [libx264 @ 0x3450140] using SAR=1/1
        [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
        [libx264 @ 0x3450140] profile High, level 3.0
        [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
        Output #0, mp4, to 'newffmpeg.mp4':
          Metadata:
            encoder : Lavf54.49.100
          Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (mjpeg -> libx264)
        Press [q] to stop, [?] for help
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444p
        frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703
        video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425%
        [libx264 @ 0x3450140] frame I:9    Avg QP:20.10 size: 33933
        [libx264 @ 0x3450140] frame P:636  Avg QP:24.12 size: 6737
        [libx264 @ 0x3450140] frame B:1385 Avg QP:27.04 size: 514
        [libx264 @ 0x3450140] consecutive B-frames: 2.5% 15.2% 13.2% 69.2%
        [libx264 @ 0x3450140] mb I I16..4: 8.3% 80.3% 11.5%
        [libx264 @ 0x3450140] mb P I16..4: 1.5% 2.5% 0.2% P16..4: 41.7% 18.0% 10.3% 0.0% 0.0% skip:25.9%
        [libx264 @ 0x3450140] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 26.6% 0.6% 0.1% direct: 0.2% skip:72.3% L0:35.0% L1:60.3% BI: 4.7%
        [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1%
        [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1%
        [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19% 6% 46%
        [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17% 5% 9% 10% 7% 8% 6%
        [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11% 5% 9% 10% 6% 6% 4%
        [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12%
        [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7%
        [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1% 4.3% 0.2%
        [libx264 @ 0x3450140] ref B L0: 88.7% 8.3% 3.0%
        [libx264 @ 0x3450140] ref B L1: 95.0% 5.0%
        [libx264 @ 0x3450140] kb/s:626.88
        Received signal 2: terminating.

    One last note: if I remove the -r 30 from the input and output, it works flawlessly. I have no idea why the -r 30 is causing it to freak out.

    Read the article

  • 503 Service Unavailable - what does it really mean?

    - by pandiya chendur
    Possible dup: http://stackoverflow.com/questions/2529244/503-service-unavailable-what-really-it-means I am asking on behalf of the original question poster, because we both work in the same place... I developed a website, and it loads on every other system but certainly not on mine. When I used Firebug, my request showed 503 Service Unavailable. The Firebug response headers were:

        Server: squid/2.6.STABLE21
        Date: Sat, 27 Mar 2010 12:25:18 GMT
        Content-Type: text/html
        Content-Length: 1163
        Expires: Sat, 27 Mar 2010 12:25:18 GMT
        X-Squid-Error: ERR_DNS_FAIL 0
        X-Cache: MISS from xavy
        X-Cache-Lookup: MISS from xavy:3128
        Via: 1.0 xavy:3128 (squid/2.6.STABLE21)
        Proxy-Connection: close

    For reference, please visit the original question and look at the answers and comments, and help us out.

    Read the article

  • Recover OffScreen Window in Windows 8

    - by Jason McD
    Someone asked a very similar question about window recovery for Windows XP: Similar Recovery. I'm using Windows 8 (or 8.1 now) and I run dual monitors. On Monitor 2, I sometimes switch the input to something other than the Windows display (Mac or PS3). Suppose a program was displaying on Monitor 2 and does not have a 'move' option, where I could potentially press Alt-Space, M and use the arrow keys. Is there another way to get that program to display on Monitor 1? I tried right-clicking on the program, and there doesn't seem to be anything there. I tried the Windows key with the arrow keys. I tried right-clicking the taskbar and choosing cascade. I'm hoping this is a dup or a softball question, because this annoys the crap out of me. Thanks!!

    Read the article

  • Which Linux book for aspiring sysadmin?

    - by Ramy
    I have a co-worker who insists that he will never buy a book unless it is considered "THE" book. So, in this vein, I thought I'd ask what the ultimate Linux book is. I wouldn't quite call myself a complete beginner, since I can get around in Linux in general pretty well. But beyond that, I'm also looking for a book with an eye towards becoming a sysadmin someday. I saw a Junior Sys Admin position open up recently, but with the requisite 2-3 years of experience, I may have to wait a little while longer before I'm ready to apply for such a position. Having said all that, I'll summarize my question: what is the ultimate Linux book for someone who is OK with the basic tasks of getting around in Linux, but who also wants to aim towards full sysadmin status someday? A few examples of the books I'm considering:

        Linux-Administration-Beginners-Guide-Fifth
        Linux-System-Administration
        Linux-System-Administration

    EDIT: Before you close this question as a dup, I'd like to say that I'm looking for something that goes deeper than this: Book for linux newbies. I already have "Linux in a Nutshell".

    Read the article

  • How to give a Linux user permission to create backups, but not permission to delete them?

    - by ChocoDeveloper
    I want to set up automated backups that are kept safe from myself (in case a virus pwns me). The problem is that the "create" and "delete" permissions are the same thing: write permission. So what can I do about it? Is it possible to decouple the create and delete permissions? Another option could be to let the user "root" make the backups. The problem there is that my home directory is encrypted, and I don't want to back up everything. Any ideas? For the backups I'm using Deja Dup, which is installed by default in Fedora and Ubuntu.

    Read the article

  • Twitter gem - undefined method `stringify_keys'

    - by Piet
    Have you been getting the following errors when running the Twitter gem lately?

        /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `send': undefined method `stringify_keys' for # (NoMethodError)
        from /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `method_missing'
        from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:131:in `deep_update'
        from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:50:in `initialize'
        from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `new'
        from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `fetch'
        from test.rb:26

    It's because Twitter has been sending back plain-text errors that are treated as a string instead of JSON and can't be properly 'Mashed' by the Twitter gem. Also check http://github.com/jnunemaker/twitter/issues#issue/6. Without diving into the bowels of the Twitter gem or HTTParty, you could 'begin...rescue' this error and try again in 5 minutes. I fixed it by overriding the offending code to return nil, and checking for a nil response, as follows:

        module Twitter
          class Search
            def fetch(force=false)
              if @fetch.nil? || force
                query = @query.dup
                query[:q] = query[:q].join(' ')
                query[:format] = 'json' # This line is the hack and the whole reason we're monkey-patching at all.
                response = self.class.get('http://search.twitter.com/search', :query => query, :format => :json)
                # Our patch: response should be a Hash. If it isn't, return nil.
                return nil if response.class != Hash
                @fetch = Mash.new(response)
              end
              @fetch
            end
          end
        end

    (adapted from http://github.com/jnunemaker/twitter/issues#issue/9) If you have a better solution: speak up!

    Read the article

  • Why is the root partition on my disk full?

    - by Agmenor
    I installed Ubuntu 12.04 by doing a fresh install where there was previously Ubuntu 11.10. My computer now warns me that my disk is nearly full. After running apt-get purge and apt-get autoremove and emptying the Trash can, I still have this problem, as shown by this screenshot of GParted: The disk /dev/sda7 is indeed full. I ran the Disk Usage Analyzer (Baobab) and I am still not sure what is happening. One of my hypotheses is that when installing Ubuntu 12.04 I didn't configure my disks properly, and the disk /dev/sda6 is not mounted as /home. Is this indeed the reason? What should I do to verify this and then get things fixed? Here are a few additional details to answer the questions I received (thank you, everybody): My home directory is not encrypted. The Backup utility (Déjà Dup) is not set for automatic backups. (I do it myself, manually.) After I mount /dev/sda6, the command df -h gives:

        Filesystem      Size  Used  Avail  Use%  Mounted on
        /dev/sda7       244G  221G    12G   96%  /
        udev            3,9G  4,0K   3,9G    1%  /dev
        tmpfs           1,6G  904K   1,6G    1%  /run
        none            5,0M     0   5,0M    0%  /run/lock
        none            3,9G  164K   3,9G    1%  /run/shm
        /dev/sda6       653G  189G   433G   31%  /media/8ec2fa69-039b-4c52-ab1b-034d785132a1

    Thanks to izx's post, I realized /dev/sda6 was not even mounted before. It contains all the documents I used to have when I was running Ubuntu 11.10.

    Read the article

  • Why is the Nautilus quicklist not working?

    - by jasmines
    My bookmarks (Documents, Pictures, Download, Dropbox, Ubuntu One, Music, Public) are shown correctly, but none of them will open if I right-click on the Home icon and select them. The only entries that work are Home and Open A New Window. I've read similar questions (http://askubuntu.com/questions/184504/unity-home-quicklist-not-working-when-nautilus-is-closed and unity home quicklist not working) but my problem seems different... Anyway, I can't solve it with the suggested workarounds.

        $ ls ~
        Audiobooks  Dropbox  Modelli  Pubblici  Video  Backup  dvdrip-data  Musica
        Scaricati  VirtualBox VMs  deja-dup  grive  Pictures - GT-I9100  Scrivania
        virtual-drives  Documenti  Immagini  Podcasts  Ubuntu One  Vuze  Downloads

        $ cat /usr/share/applications/nautilus.desktop
        [Desktop Entry]
        Name=Files
        Comment=Access and organize files
        Exec=nautilus %U
        Icon=system-file-manager
        Terminal=false
        Type=Application
        StartupNotify=true
        OnlyShowIn=GNOME;Unity;
        Categories=GNOME;GTK;Utility;Core;
        MimeType=inode/directory;application/x-gnome-saved-search;
        X-GNOME-Bugzilla-Bugzilla=GNOME
        X-GNOME-Bugzilla-Product=nautilus
        X-GNOME-Bugzilla-Component=general
        X-GNOME-Bugzilla-Version=3.4.2
        Actions=Window;
        X-Ubuntu-Gettext-Domain=nautilus

        [Desktop Action Window]
        Name=Open a New Window
        Exec=nautilus
        OnlyShowIn=Unity;

    Read the article
