Search Results

Search found 453 results on 19 pages for 'threshold'.


  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider that without the sender imposing flow control, and with a slow link to a peer, TCP will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCPs' receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

    Slow Start and Congestion Avoidance

    From RFC 2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission." Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, and so on. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS per round-trip time rather than per ACK, so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single maximum segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance once cwnd > ssthresh. A rough sketch of this cwnd growth pattern follows below.
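    To make the exponential-versus-linear growth concrete, here is a minimal Python sketch of the cwnd behaviour described above. It illustrates the RFC 2581 rules only - it is not the Solaris TCP implementation - and the MSS, initial ssthresh and the simulated timeout at RTT 10 are arbitrary values chosen for the example:

    MSS = 1460              # assumed maximum segment size in bytes
    cwnd = 2 * MSS          # small initial congestion window
    ssthresh = 32 * MSS     # assumed initial slow start threshold

    def on_rtt(acked_segments, loss=False):
        """Update cwnd once per round trip, following the RFC 2581 rules."""
        global cwnd, ssthresh
        if loss:
            # retransmit timeout: ssthresh drops to half the data in flight,
            # cwnd collapses to one segment and slow start begins again
            ssthresh = max((acked_segments * MSS) // 2, 2 * MSS)
            cwnd = MSS
        elif cwnd < ssthresh:
            # slow start: one MSS per ACK, so cwnd roughly doubles per RTT
            cwnd += acked_segments * MSS
        else:
            # congestion avoidance: roughly one MSS per RTT
            cwnd += MSS

    for rtt in range(1, 16):
        segments_in_flight = cwnd // MSS
        on_rtt(segments_in_flight, loss=(rtt == 10))  # pretend a timeout fires on RTT 10
        phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
        print(rtt, cwnd // MSS, phase)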
    Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since duplicate ACKs often come from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd > ssthresh).

    #!/usr/sbin/dtrace -s

    #pragma D option quiet

    tcp:::connect-request
    / start[args[1]->cs_cid] == 0 /
    {
            start[args[1]->cs_cid] = 1;
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh /
    {
            @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh /
    {
            @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    As we can see, the script only works on connections initiated after it is started (it uses the start[] associative array, indexed by connection ID, to record that a connection is new: start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

    # dtrace -s tcp_slow_start.d
    ^C
    ALGORITHM             RADDR            RPORT  #SEG
    Slow start            10.153.125.222      20     6
    Slow start            138.3.237.7         80    14
    Slow start            10.153.125.222      21    18
    Congestion avoidance  10.153.125.222      20  1164

    We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control connection (port 21) relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than by the MSS. And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.
    # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
    dtrace: description 'tcp:::send ' matched 10 probes
    ^C

        80
               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |@@@@@@                                   5
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         0
                 512 |                                         0
                1024 |                                         0
                2048 |@@@@@@@@@                                8
                4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                8192 |                                         0
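    For readers unfamiliar with DTrace aggregations, quantize() buckets each value into power-of-two ranges and draws a histogram of the counts. A rough Python equivalent of that bucketing (with invented cwnd samples, purely for illustration) would be:

    from collections import Counter

    # invented cwnd samples, standing in for the values dtrace collected above
    samples = [0, 0, 0, 0, 0, 2048, 2500, 3000, 4200, 4800, 5500, 6100, 7000, 7500, 8000]

    buckets = Counter()
    for v in samples:
        b = 0
        while (1 << (b + 1)) <= v:      # find the largest power of two <= v
            b += 1
        buckets[(1 << b) if v > 0 else 0] += 1

    for lower in sorted(buckets):
        print(f"{lower:>8} |{'@' * buckets[lower]:<20}| {buckets[lower]}")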

    Read the article

  • What is the best way to generate income from mobile games?

    - by Thomas
    As the title states, what is the best way to get income from mobile games? (Taking into consideration that creating the games only costs a lot of time and that the games are relatively simple.) As I see it, there are multiple ways of getting money from mobile games:
    Selling them for a fixed price (seems like a high threshold for potential buyers)
    In-game purchases (I can imagine this only works for certain types of games; I don't see it working well for Monopoly unless you like really fancy hotels ;))
    In-game advertisements / sponsorships
    Which way will most likely bring the most profit?

    Read the article

  • Senior Developers vs. Junior

    - by huwyss
    I like the following quote, which I found on Coding Horror: [As Steve points out, this is one key difference between junior and senior developers:] In the old days, seeing too much code at once quite frankly exceeded my complexity threshold, and when I had to work with it I'd typically try to rewrite it or at least comment it heavily. Today, however, I just slog through it without complaining (much). When I have a specific goal in mind and a complicated piece of code to write, I spend my time making it happen rather than telling myself stories about it [in comments].

    Read the article

  • No Significant Fragmentation? Look Closer…

    If you rely on 'best-practice' percentage-based thresholds when creating an index maintenance plan for SQL Server that checks the fragmentation of your pages, you may miss occasional 'edge' conditions on larger tables that will cause severe degradation in performance. It is worth being aware of the patterns of data access in particular tables when judging the best threshold figure to use.
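    As a concrete illustration of the kind of percentage-based rule the article is questioning, here is a hedged sketch (Python with pyodbc; the connection string, thresholds and page-count filter are assumptions, not recommendations) of the classic 5%/30% reorganize/rebuild check against sys.dm_db_index_physical_stats. A blanket filter like the one below is exactly where edge cases on particular tables can hide:

    import pyodbc

    # hypothetical connection string - adjust for your environment
    conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
                          "DATABASE=mydb;Trusted_Connection=yes")
    cursor = conn.cursor()
    cursor.execute("""
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.page_count > 1000        -- blanket rule: ignore 'small' indexes
    """)

    for table_name, index_name, frag, pages in cursor.fetchall():
        if frag > 30:
            action = "REBUILD"
        elif frag > 5:
            action = "REORGANIZE"
        else:
            action = "leave alone"         # this is where the 'edge' conditions can hide
        print(f"{table_name}.{index_name}: {frag:.1f}% over {pages} pages -> {action}")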

    Read the article

  • Preventing Problems in SQL Server

    It is never a good idea to let your users be the ones to tell you about database server outages. It is far better to spot potential problems by being alerted to the most relevant conditions on your servers at the right thresholds. This will take time and patience, but the reward will be an alerting system which allows you to deal more effectively with issues before they involve system downtime.

    Read the article

  • Conversion Optimization Part 1

    The world of search engine optimization is such that whoever crosses its threshold can enhance the volume and quality of traffic to their web page or web site. Unlike other forms of search engine marketing that primarily deal with paid inclusions, search engine optimization delivers 100 percent organic (unpaid) search results.

    Read the article

  • Windows 7 XP Mode disable time sync

    - by Oskar Duveborn
    So I've tried the trick from Virtual PC 2007, adding the following section to the vmc configuration file:

    <components>
      <host_time_sync>
        <enabled type="boolean">false</enabled>
      </host_time_sync>
    </components>

    Later someone suggested VPC doesn't want the components level, so I added this instead:

    <host_time_sync>
      <enabled type="boolean">false</enabled>
      <frequency type="integer">15</frequency>
      <threshold type="integer">10</threshold>
    </host_time_sync>

    When I start up XP Mode (Microsoft Virtual PC) it completely ignores both of these configuration changes, and if I change the clock it's instantly reset to the host time again. I've also obviously disabled the Windows Time service, but as it's not joined to a domain or set up with a source it shouldn't be involved anyway. I need to test an application over a few midnight passes and thought the XP Mode machine would be perfect, so I didn't have to mess with my workstation clock... is there any way to get the VPC guest to not sync time with the host? This is easy in Hyper-V ;p

    Read the article

  • Improving abysmal 802.11n wireless network

    - by concept
    I am in desperate need of help to improve the abysmal performance of my 802.11n wireless network. At best I get 30Mb/s (this is an internet download) from a technology that boasts 300Mb/s; even worse is the LAN, where to date the best I have ever gotten is 1Mb/s. It is literally quicker to copy the file to a USB stick and walk it to the other computer.
    The infrastructure is this: an 802.11n-only AP broadcasting at both 2.4GHz and 5GHz; a Mac with an 802.11a/b/g/n card connected to the AP via 5GHz; a Linux box with an 802.11a/b/g/n card connected to the AP via 2.4GHz.
    I have conducted the following tests (results at end of post): internet-based speed test, wired and wireless; LAN file copy, wired and wireless.
    I have read:
    http://nutsaboutnets.com/troubleshooting-wi-fi-problems/
    http://www.smallnetbuilder.com/wireless/wireless-basics/30664-5-ways-to-fix-slow-80211n-speed
    http://www.wi-fiplanet.com/tutorials/7-tips-to-increase-wi-fi-performance.html
    Slow file transfer on network between two 802.11n laptops (connected directly together via access point)
    Wireless Network Performance Issues
    Slower than expected 802.11n wireless network speeds
    I have made the following optimizations: the AP broadcasts only 802.11n on both 2.4GHz and 5GHz frequencies; 2.4GHz is on the channel with least interference (I live in an apartment with lots of APs), which did make a 10Mb/sec improvement; our AP is the only one transmitting on the 5GHz frequency; security is WPA Personal with WPA2 AES encryption; bandwidth is 20MHz / 40MHz (I assume this to be channel bonding).
    I have tried the following with no improvement: dropped the Fragment Threshold to 512; dropped the Request To Send (RTS) Threshold to 512 and then to 1; even thought of buying a frequency spectrum analyzer, until I saw the cost of them!
    Speed test results:
    Linux Wired: DOWNLOAD 128.40Mb/s, UPLOAD 10.62Mb/s - http://www.speedtest.net/my-result/2948381853
    Mac Wired: DOWNLOAD 118.02Mb/s, UPLOAD 10.56Mb/s - http://www.speedtest.net/my-result/2948384406
    Linux Wireless: DOWNLOAD 23.99Mb/s, UPLOAD 10.31Mb/s - http://www.speedtest.net/my-result/2948394990
    Mac Wireless: DOWNLOAD 22.55Mb/s, UPLOAD 10.36Mb/s - http://www.speedtest.net/my-result/2948396489
    LAN NFS copy of a 53,345,087 byte (51MB) file:
    Linux to Mac NFS Wired: 65.6959 Mb/sec
    Linux to Mac NFS Wireless: .9443 Mb/sec
    All help is appreciated; even testing methods will be accepted.

    Read the article

  • Keyboard shortcut for moving a window to another screen

    - by wcoenen
    When working with two (or more) screens, a common problem is that launched applications appear on the "wrong" screen. I especially find this annoying when launching a text editor from the command line, because I have to leave the home row with my right hand in order to drag the window to the "right" screen before I can continue typing. Is it possible to define a keyboard shortcut which moves the current application to the other/next screen? Edit: I'm using Windows XP, but it's good to know that the feature already exists in Windows 7. Edit2: I went for the autohotkey script. This adaptation works for me:

    #q::
    WinGetPos, winx, winy,,, A
    WinGet, mm, MinMax, A
    WinRestore, A
    If (winx > 1270)
    {
        newx := winx-1270
        OutputDebug, Moving left from %winx% to %newx%
    }
    else
    {
        newx := winx+1270
        OutputDebug, Moving right from %winx% to %newx%
    }
    WinMove, A,, newx, winy
    if mm=1
        WinMaximize, A
    Return

    I did have to make use of the OutputDebug statements and dbgview to discover the proper threshold value 1270 for moving left or right. The exact threshold is especially important when moving maximized windows to the left.
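    For comparison, here is a hedged sketch of the same idea in Python via the Win32 API (ctypes), querying the primary screen width instead of hard-coding a threshold like 1270. It assumes two equal-width side-by-side monitors, does not handle maximized windows (the AutoHotkey script above restores and re-maximizes for that), and you would still need to bind it to a hotkey with a separate tool:

    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32

    def move_active_window_to_other_screen():
        screen_width = user32.GetSystemMetrics(0)    # SM_CXSCREEN: primary monitor width
        hwnd = user32.GetForegroundWindow()

        rect = wintypes.RECT()
        user32.GetWindowRect(hwnd, ctypes.byref(rect))
        width = rect.right - rect.left
        height = rect.bottom - rect.top

        # shift left if the window currently starts on the second screen, else right
        if rect.left >= screen_width:
            new_x = rect.left - screen_width
        else:
            new_x = rect.left + screen_width

        user32.MoveWindow(hwnd, new_x, rect.top, width, height, True)

    move_active_window_to_other_screen()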

    Read the article

  • Matlab code works with one version but not the other

    - by user1325655
    I have code that works in Matlab version R2010a but shows errors in Matlab R2008a. I am trying to implement a self-organizing fuzzy neural network with an extended Kalman filter. I have the code running, but it only works in Matlab version R2010a; it doesn't work with other versions. Any help? Code attached:

    function [ c, sigma , W_output ] = SOFNN( X, d, Kd )
    %SOFNN Self-Organizing Fuzzy Neural Networks
    %Input Parameters
    % X(r,n) - rth traning data from nth observation
    % d(n) - the desired output of the network (must be a row vector)
    % Kd(r) - predefined distance threshold for the rth input
    %Output Parameters
    % c(IndexInputVariable,IndexNeuron)
    % sigma(IndexInputVariable,IndexNeuron)
    % W_output is a vector
    %Setting up Parameters for SOFNN
    SigmaZero=4;
    delta=0.12;
    threshold=0.1354;
    k_sigma=1.12;
    %For more accurate results uncomment the following
    %format long;
    %Implementation of a SOFNN model
    [size_R,size_N]=size(X);
    %size_R - the number of input variables
    c=[];
    sigma=[];
    W_output=[];
    u=0; % the number of neurons in the structure
    Q=[];
    O=[];
    Psi=[];
    for n=1:size_N
        x=X(:,n);
        if u==0 % No neuron in the structure?
            c=x;
            sigma=SigmaZero*ones(size_R,1);
            u=1;
            Psi=GetMePsi(X,c,sigma);
            [Q,O] = UpdateStructure(X,Psi,d);
            pT_n=GetMeGreatPsi(x,Psi(n,:))';
        else
            [Q,O,pT_n] = UpdateStructureRecursively(X,Psi,Q,O,d,n);
        end;
        KeepSpinning=true;
        while KeepSpinning
            %Calculate the error and if-part criteria
            ae=abs(d(n)-pT_n*O); %approximation error
            [phi,~]=GetMePhi(x,c,sigma);
            [maxphi,maxindex]=max(phi); % maxindex refers to the neuron's index
            if ae>delta
                if maxphi<threshold
                    %enlarge width
                    [minsigma,minindex]=min(sigma(:,maxindex));
                    sigma(minindex,maxindex)=k_sigma*minsigma;
                    Psi=GetMePsi(X,c,sigma);
                    [Q,O] = UpdateStructure(X,Psi,d);
                    pT_n=GetMeGreatPsi(x,Psi(n,:))';
                else
                    %Add a new neuron and update structure
                    ctemp=[];
                    sigmatemp=[];
                    dist=0;
                    for r=1:size_R
                        dist=abs(x(r)-c(r,1));
                        distIndex=1;
                        for j=2:u
                            if abs(x(r)-c(r,j))<dist
                                distIndex=j;
                                dist=abs(x(r)-c(r,j));
                            end;
                        end;
                        if dist<=Kd(r)
                            ctemp=[ctemp; c(r,distIndex)];
                            sigmatemp=[sigmatemp ; sigma(r,distIndex)];
                        else
                            ctemp=[ctemp; x(r)];
                            sigmatemp=[sigmatemp ; dist];
                        end;
                    end;
                    c=[c ctemp];
                    sigma=[sigma sigmatemp];
                    Psi=GetMePsi(X,c,sigma);
                    [Q,O] = UpdateStructure(X,Psi,d);
                    KeepSpinning=false;
                    u=u+1;
                end;
            else
                if maxphi<threshold
                    %enlarge width
                    [minsigma,minindex]=min(sigma(:,maxindex));
                    sigma(minindex,maxindex)=k_sigma*minsigma;
                    Psi=GetMePsi(X,c,sigma);
                    [Q,O] = UpdateStructure(X,Psi,d);
                    pT_n=GetMeGreatPsi(x,Psi(n,:))';
                else
                    %Do nothing and exit the while
                    KeepSpinning=false;
                end;
            end;
        end;
    end;
    W_output=O;
    end

    function [Q_next, O_next,pT_n] = UpdateStructureRecursively(X,Psi,Q,O,d,n)
    %O=O(t-1) O_next=O(t)
    p_n=GetMeGreatPsi(X(:,n),Psi(n,:));
    pT_n=p_n';
    ee=abs(d(n)-pT_n*O); %|e(t)|
    temp=1+pT_n*Q*p_n;
    ae=abs(ee/temp);
    if ee>=ae
        L=Q*p_n*(temp)^(-1);
        Q_next=(eye(length(Q))-L*pT_n)*Q;
        O_next=O + L*ee;
    else
        Q_next=eye(length(Q))*Q;
        O_next=O;
    end;
    end

    function [ Q , O ] = UpdateStructure(X,Psi,d)
    GreatPsiBig = GetMeGreatPsi(X,Psi);
    %M=u*(r+1)
    %n - the number of observations
    [M,~]=size(GreatPsiBig);
    %Others Ways of getting Q=[P^T(t)*P(t)]^-1
    %**************************************************************************
    %opts.SYM = true;
    %Q = linsolve(GreatPsiBig*GreatPsiBig',eye(M),opts);
    %
    %Q = inv(GreatPsiBig*GreatPsiBig');
    %Q = pinv(GreatPsiBig*GreatPsiBig');
    %**************************************************************************
    Y=GreatPsiBig\eye(M);
    Q=GreatPsiBig'\Y;
    O=Q*GreatPsiBig*d';
    end

    %This function works too with x
    % (X=X and Psi is a Matrix) - Gets you the whole GreatPsi
    % (X=x and Psi is the row related to x) - Gets you just the column related with the observation
    function [GreatPsi] = GetMeGreatPsi(X,Psi)
    %Psi - In a row you go through the neurons and in a column you go through number of
    %observations **** Psi(#obs,IndexNeuron) ****
    GreatPsi=[];
    [N,U]=size(Psi);
    for n=1:N
        x=X(:,n);
        GreatPsiCol=[];
        for u=1:U
            GreatPsiCol=[ GreatPsiCol ; Psi(n,u)*[1; x] ];
        end;
        GreatPsi=[GreatPsi GreatPsiCol];
    end;
    end

    function [phi, SumPhi]=GetMePhi(x,c,sigma)
    [r,u]=size(c);
    %u - the number of neurons in the structure
    %r - the number of input variables
    phi=[];
    SumPhi=0;
    for j=1:u % moving through the neurons
        S=0;
        for i=1:r % moving through the input variables
            S = S + ((x(i) - c(i,j))^2) / (2*sigma(i,j)^2);
        end;
        phi = [phi exp(-S)];
        SumPhi = SumPhi + phi(j); %phi(u)=exp(-S)
    end;
    end

    %This function works too with x, it will give you the row related to x
    function [Psi] = GetMePsi(X,c,sigma)
    [~,u]=size(c);
    [~,size_N]=size(X);
    %u - the number of neurons in the structure
    %size_N - the number of observations
    Psi=[];
    for n=1:size_N
        [phi, SumPhi]=GetMePhi(X(:,n),c,sigma);
        PsiTemp=[];
        for j=1:u
            %PsiTemp is a row vector ex: [1 2 3]
            PsiTemp(j)=phi(j)/SumPhi;
        end;
        Psi=[Psi; PsiTemp];
        %Psi - In a row you go through the neurons and in a column you go through number of
        %observations **** Psi(#obs,IndexNeuron) ****
    end;
    end

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • CodePlex Daily Summary for Saturday, June 29, 2013

    CodePlex Daily Summary for Saturday, June 29, 2013Popular ReleasesAscend 3D: Ascend 2.0.1: Moved model loading into SceneNode.Load static method Updated AscendViewer to use latest Ascend buildUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????SQL Server Data Compare: DB Compare 0.1 Beta 1: Some bugs fixed. Do not forget to add reviews. :)Hogeschool Rotterdam Windows Phone Maps project: HRO Maps Sourcecode VS2010: Initiele versieUniversal Visualnovel Engine Tools: ns2uve: NS2UVE ONS???????? ????:.Net Framework 4.0 ????:UVE for WP8 1.2?? ??:update 2????????! ????1.?ONS???????????,???????nscript.dat?Icon.png??,???arc.nsa?default.ttf??。 2.??????,????bin??????exe?????dll???????????????。 3.??ns2uve.exe,??????,?????,????????? 4.?????????????.png??? ??:??????????nscript.dat??Icon.png?,??????。?????????????。 ????????????src???? CopyRight W-Otaku DEVAdjusting SharePoint Site Quota PowerShell: Adjusting.SharePoint.Site.Quota: Version 1.0 Features Display Database Size Display Quota Warning Threshold Display Quota Maximum Threshold Display Site Space Usage Change Quota Warning Threshold Change Quota Maximum ThresholdOutlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. 
Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsWPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...New ProjectsA sample web app for AppHarbor: Just a little project to test the amazing apphorbor offering!Android_Traffic_Tracker: Android traffic trackingEASTester: EASTester This application shows how encoding, decoding and submission of Exchange Server ActiveSync (EAS) calls might be done. 
Everynet_TFS_SVN: this is a everynet projectFluentRoute: Make the task of configure ASP.NET MVC Routes much more easier! This lib gives you the possibility of using Fluent Configurafion,style.Google Music for Jamcast: This project adds Google Music browse and playback capabilities to Jamcast, a DLNA media server for Windows.GussanoExtension: My summaryHL7 SDK - Open Source CDA R2 Implemenation for .NET and COM: A set of open source libraries for creating, parsing, storing and converting HL7 Clinical Documents in .NET and COM environment.Hogeschool Rotterdam Windows Phone Maps project: Dit is een project gemaakt voor het vak INFPRJ07DT voor de Hogeschool Rotterdam. Hue For Both (Build 2013): A simple MVVM project for controlling Philips Hue lights on Windows 8 and Windows Phone 8.Key2Screen: This little helper will show all keystrokes on screen. This will be needed during a Kata to show the audience the uses keyboard shortcuts or to record them.MercerGOLD: A tool for managing information on worldwide employee benefits, compensation and human resource programs.Microsoft CRM 2011 True Unique Autonumber Creator: The Microsoft CRM 2011 True Unique Autonumber Creator provides functionality for generating unique numbers for any entity. Work for On-Premises and Online/CloudNewsAlerts: this is news ALERT PROJECTNTmdb: A wrapper for the TMDb API written in C# .Net 4.5.PowerShellCron: Windows Service to Schedule and Run PowerShell Scripts with Full Logging to Database of script output (all streams, including Write-Host).QlikView Extension - WebPageViewer2: QlikView Extension to display a web page in QlikView.Restafari - The REST Client Base: A REST Client base for your .Net projects. It is compatible with: - .Net 4.5 - .Net 4.0 - Windows Phone 8 - Windows Store applicationsRevolution Of SnowWhite: PC??????SharePoint Silverlight CSV Importer: Convert .csv data into SharePoint list items. A slick Silverlight control to map and import a .csv files to a sharepoint list. Includes transforms and keys.Silverlight AWS S3 Uploader: Silverlight 5 app to upload files to AWS S3SIM Card Manager: A Windows tool to read SIM card information and contentSoCafeShop: SoCafeShopTeam Foundation Server 2012 Sample Work Items for MSF Agile, CMMI, & SCRUM: This project provides sample work items that can be used in your MSF Agile, MSF CMMI, or SCRUM 2.0 Process Templaces in Team Foundation Server 2012.TidyVaca - an SF inspired restyle of the Tidy responsive Skin by Adammer: A San Francisco inspired restyling of Adammer's Tidy responsive skin.Tiny Forms Controls: The goal of this project is to create a library of Windows Forms and Web Forms controls and components.TMYS: Deneme projesi önemli bir sey degil

    Read the article

  • Help with dynamic range compression function (audio)

    - by MusiGenesis
    I am writing a C# function for doing dynamic range compression (an audio effect that basically squashes transient peaks and amplifies everything else to produce an overall louder sound). I have written a function that does this (I think): public static void Compress(ref short[] input, double thresholdDb, double ratio) { double maxDb = thresholdDb - (thresholdDb / ratio); double maxGain = Math.Pow(10, -maxDb / 20.0); for (int i = 0; i < input.Length; i += 2) { // convert sample values to ABS gain and store original signs int signL = input[i] < 0 ? -1 : 1; double valL = (double)input[i] / 32768.0; if (valL < 0.0) { valL = -valL; } int signR = input[i + 1] < 0 ? -1 : 1; double valR = (double)input[i + 1] / 32768.0; if (valR < 0.0) { valR = -valR; } // calculate mono value and compress double val = (valL + valR) * 0.5; double posDb = -Math.Log10(val) * 20.0; if (posDb < thresholdDb) { posDb = thresholdDb - ((thresholdDb - posDb) / ratio); } // measure L and R sample values relative to mono value double multL = valL / val; double multR = valR / val; // convert compressed db value to gain and amplify val = Math.Pow(10, -posDb / 20.0); val = val / maxGain; // re-calculate L and R gain values relative to compressed/amplified // mono value valL = val * multL; valR = val * multR; double lim = 1.5; // determined by experimentation, with the goal // being that the lines below should never (or rarely) be hit if (valL > lim) { valL = lim; } if (valR > lim) { valR = lim; } double maxval = 32000.0 / lim; // convert gain values back to sample values input[i] = (short)(valL * maxval); input[i] *= (short)signL; input[i + 1] = (short)(valR * maxval); input[i + 1] *= (short)signR; } } and I am calling it with threshold values between 10.0 dB and 30.0 dB and ratios between 1.5 and 4.0. This function definitely produces a louder overall sound, but with an unacceptable level of distortion, even at low threshold values and low ratios. Can anybody see anything wrong with this function? Am I handling the stereo aspect correctly (the function assumes stereo input)? As I (dimly) understand things, I don't want to compress the two channels separately, so my code is attempting to compress a "virtual" mono sample value and then apply the same degree of compression to the L and R sample values separately. Not sure I'm doing it right, however. I think part of the problem may be the "hard knee" of my function, which kicks in the compression abruptly when the threshold is crossed. I think I may need to use a "soft knee" instead. Can anybody suggest a modification to my function to produce the soft knee curve?
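    The hard knee above switches from 1:1 to the full ratio the instant the threshold is crossed. Below is a minimal, hedged sketch of a soft-knee gain computer, written against levels in dBFS (negative numbers, 0 dB = full scale) with an added kneeDb width parameter that the original function does not have; since the original works with positive dB values (posDb = -Math.Log10(val) * 20.0), the signs would need flipping before dropping this in.

```csharp
using System;

static class SoftKneeSketch
{
    // Returns the compressed output level (dB) for a given input level (dB).
    // Below the knee the level passes through, inside the knee the ratio is
    // blended in quadratically, above the knee the full ratio applies.
    public static double CompressDb(double inputDb, double thresholdDb,
                                    double ratio, double kneeDb)
    {
        double over = inputDb - thresholdDb;

        if (2.0 * over < -kneeDb)
            return inputDb;                          // well below threshold: unchanged

        if (2.0 * Math.Abs(over) <= kneeDb)          // inside the knee: smooth transition
        {
            double t = over + kneeDb / 2.0;
            return inputDb + (1.0 / ratio - 1.0) * t * t / (2.0 * kneeDb);
        }

        return thresholdDb + over / ratio;           // above the knee: full ratio
    }
}
```

    The per-sample linear gain is then Math.Pow(10.0, (CompressDb(levelDb, thresholdDb, ratio, kneeDb) - levelDb) / 20.0), and make-up gain can still be applied afterwards, much as the original code uses maxGain.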

    Read the article

  • Test whether pixel is inside the blobs for ofxOpenCV

    - by mia
    I am building an application based on the concept of dodgeball and need to test whether the pixel of the ball is inside the captured blobs (which are the image of the player). I am stuck and have run out of ideas on how to implement it. I have managed to make a little progress and have the blobs, but I am not sure how to test against them. Please help; I am a newbie in a desperate condition. Thank you. This is some of my code. void testApp::setup(){ #ifdef _USE_LIVE_VIDEO vidGrabber.setVerbose(true); vidGrabber.initGrabber(widthS,heightS); #else vidPlayer.loadMovie("fingers.mov"); vidPlayer.play(); #endif widthS = 320; heightS = 240; colorImg.allocate(widthS,heightS); grayImage.allocate(widthS,heightS); grayBg.allocate(widthS,heightS); grayDiff.allocate(widthS,heightS); ////<---what I want bLearnBakground = true; threshold = 80; //////////circle////////////// counter = 0; radius = 0; circlePosX = 100; circlePosY=200; } void testApp::update(){ ofBackground(100,100,100); bool bNewFrame = false; #ifdef _USE_LIVE_VIDEO vidGrabber.grabFrame(); bNewFrame = vidGrabber.isFrameNew(); #else vidPlayer.idleMovie(); bNewFrame = vidPlayer.isFrameNew(); #endif if (bNewFrame){ if (bLearnBakground == true){ grayBg = grayImage; // the = sign copys the pixels from grayImage into grayBg (operator overloading) bLearnBakground = false; } #ifdef _USE_LIVE_VIDEO colorImg.setFromPixels(vidGrabber.getPixels(),widthS,heightS); #else colorImg.setFromPixels(vidPlayer.getPixels(),widthS,heightS); #endif grayImage = colorImg; grayDiff.absDiff(grayBg, grayImage); grayDiff.threshold(threshold); contourFinder.findContours(grayDiff, 20, (340*240)/3, 10, true); // find holes } ////////////circle//////////////////// counter = counter + 0.05f; if(radius>=50){ circlePosX = ofRandom(10,300); circlePosY = ofRandom(10,230); } radius = 5 + 3*(counter); } void testApp::draw(){ // draw the incoming, the grayscale, the bg and the thresholded difference ofSetColor(0xffffff); //white colour grayDiff.draw(10,10);// draw start from point (0,0); // we could draw the whole contour finder // or, instead we can draw each blob individually, // this is how to get access to them: for (int i = 0; i < contourFinder.nBlobs; i++){ contourFinder.blobs[i].draw(10,10); } ///////////////circle////////////////////////// //let's draw a circle: ofSetColor(0,0,255); char buffer[255]; float a = radius; sprintf(buffer,"radius = %i",a); ofDrawBitmapString(buffer, 120, 300); if(radius>=50) { ofSetColor(255,255,255); counter = 0; } else{ ofSetColor(255,0,0); } ofFill(); ofCircle(circlePosX,circlePosY,radius); }
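    The contour finder's blobs carry their boundary points (in openFrameworks these are typically available as a pts list on each blob), so asking whether the ball's pixel lies inside a blob is a point-in-polygon test. Here is a hedged sketch of the standard ray-casting (even-odd) test, written in C# like most of the other code on this page rather than in openFrameworks C++; Point2 and the contour list are stand-ins for whatever point type the real code uses.

```csharp
using System.Collections.Generic;

struct Point2
{
    public double X, Y;
    public Point2(double x, double y) { X = x; Y = y; }
}

static class BlobHitTest
{
    // Classic ray-casting test: count how many polygon edges a horizontal ray
    // from the test point crosses; an odd count means the point is inside.
    public static bool PointInBlob(Point2 p, IList<Point2> contour)
    {
        bool inside = false;
        for (int i = 0, j = contour.Count - 1; i < contour.Count; j = i++)
        {
            Point2 a = contour[i], b = contour[j];
            bool crosses = (a.Y > p.Y) != (b.Y > p.Y) &&
                           p.X < (b.X - a.X) * (p.Y - a.Y) / (b.Y - a.Y) + a.X;
            if (crosses) inside = !inside;
        }
        return inside;
    }
}
```

    A simpler shortcut in this particular setup is to read the thresholded grayDiff image at the ball's position: if that pixel is white, the ball overlaps the player's silhouette.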

    Read the article

  • How to determine edges in an image optimally?

    - by SorinA.
    I recently was put in front of the problem of cropping and resizing images. I needed to crop the 'main content' of an image for example if i had an image similar to this: the result should be an image with the msn content without the white margins(left& right). I search on the X axis for the first and last color change and on the Y axis the same thing. The problem is that traversing the image line by line takes a while..for an image that is 2000x1600px it takes up to 2 seconds to return the CropRect = x1,y1,x2,y2 data. I tried to make for each coordinate a traversal and stop on the first value found but it didn't work in all test cases..sometimes the returned data wasn't the expected one and the duration of the operations was similar.. Any idea how to cut down the traversal time and discovery of the rectangle round the 'main content'? public static CropRect EdgeDetection(Bitmap Image, float Threshold) { CropRect cropRectangle = new CropRect(); int lowestX = 0; int lowestY = 0; int largestX = 0; int largestY = 0; lowestX = Image.Width; lowestY = Image.Height; //find the lowest X bound; for (int y = 0; y < Image.Height - 1; ++y) { for (int x = 0; x < Image.Width - 1; ++x) { Color currentColor = Image.GetPixel(x, y); Color tempXcolor = Image.GetPixel(x + 1, y); Color tempYColor = Image.GetPixel(x, y + 1); if ((Math.Sqrt(((currentColor.R - tempXcolor.R) * (currentColor.R - tempXcolor.R)) + ((currentColor.G - tempXcolor.G) * (currentColor.G - tempXcolor.G)) + ((currentColor.B - tempXcolor.B) * (currentColor.B - tempXcolor.B))) > Threshold)) { if (lowestX > x) lowestX = x; if (largestX < x) largestX = x; } if ((Math.Sqrt(((currentColor.R - tempYColor.R) * (currentColor.R - tempYColor.R)) + ((currentColor.G - tempYColor.G) * (currentColor.G - tempYColor.G)) + ((currentColor.B - tempYColor.B) * (currentColor.B - tempYColor.B))) > Threshold)) { if (lowestY > y) lowestY = y; if (largestY < y) largestY = y; } } } if (lowestX < Image.Width / 4) cropRectangle.X = lowestX - 3 > 0 ? lowestX - 3 : 0; else cropRectangle.X = 0; if (lowestY < Image.Height / 4) cropRectangle.Y = lowestY - 3 > 0 ? lowestY - 3 : 0; else cropRectangle.Y = 0; cropRectangle.Width = largestX - lowestX + 8 > Image.Width ? Image.Width : largestX - lowestX + 8; cropRectangle.Height = largestY + 8 > Image.Height ? Image.Height - lowestY : largestY - lowestY + 8; return cropRectangle; } }
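    Most of that time goes into Bitmap.GetPixel, which is very slow when called once per pixel. A hedged sketch of the usual fix is below: lock the bitmap once with LockBits and read raw bytes instead, assuming a 32bpp ARGB bitmap (other pixel formats need different offset math).

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class FastPixels
{
    // Copies the bitmap's raw BGRA bytes into a managed array once, so the
    // edge scan can index into memory instead of calling GetPixel repeatedly.
    public static byte[] CopyPixelBytes(Bitmap image, out int stride)
    {
        Rectangle rect = new Rectangle(0, 0, image.Width, image.Height);
        BitmapData data = image.LockBits(rect, ImageLockMode.ReadOnly,
                                         PixelFormat.Format32bppArgb);
        try
        {
            stride = data.Stride;
            byte[] pixels = new byte[data.Stride * data.Height];
            Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
            return pixels;
        }
        finally
        {
            image.UnlockBits(data);
        }
    }

    // Reading a pixel then becomes plain array math (bytes are B, G, R, A):
    //   int offset = y * stride + x * 4;
    //   byte b = pixels[offset], g = pixels[offset + 1], r = pixels[offset + 2];
}
```

    Two further savings: compare the squared colour distance against Threshold * Threshold instead of calling Math.Sqrt for every pixel, and once reads are this cheap, scanning inwards from each border and stopping at the first edge hit becomes worth retrying.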

    Read the article

  • output image is not displayed

    - by gerry chocolatos
    I am doing image edge detection where I have to process each of the red, green, and blue elements to get an edge map and then combine them into one image to show the output, but the output image is not displayed. Would anyone please be kind enough to help me? Here is my code so far. //get the red element process_red = new int[width * height]; counter = 0; for(int i = 0; i < 256; i++) { for(int j = 0; j < 256; j++) { int clr = buff_red.getRGB(j, i); int red = (clr & 0x00ff0000) >> 16; red = (0xFF<<24)|(red<<16)|(red<<8)|red; process_red[counter] = red; counter++; } } //set threshold value for red element int threshold = 100; for (int x = 0; x < width; x++) { for (int y = 0; y < height; y++) { int bin = (buff_red.getRGB(x, y) & 0x000000ff); if (bin < threshold) bin = 0; else bin = 255; buff_red.setRGB(x,y, 0xff000000 | bin << 16 | bin << 8 | bin); } } I do the same thing for my green and blue elements, and then I combine the three this way: //combine the three elements process_combine = new int[width * height]; counter = 0; for(int i = 0; i < 256; i++) { for(int j = 0; j < 256; j++) { int clr_a = buff_red.getRGB(j, i); int ar = clr_a & 0x000000ff; int clr_b = buff_green.getRGB(j, i); int bg = clr_b & 0x000000ff; int clr_c = buff_blue.getRGB(j, i); int cb = clr_b & 0x000000ff; int alpha = 0xff000000; int combine = alpha|(ar<<16)|(bg<<8)|cb; process_combine[counter] = combine; counter++; } } buff_rgb = new BufferedImage(width,height, BufferedImage.TYPE_INT_ARGB); Graphics rgb; rgb = buff_rgb.getGraphics(); rgb.drawImage(output_rgb, 0, 0, null); rgb.dispose(); repaint(); To show the output from the combining process I use a draw method: g.drawImage(buff_rgb,800,100,this); but it still doesn't show the image. Can anyone please help me? Your help is really appreciated. Thanks.
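    Two things worth checking in the combining loop: the blue value is taken from clr_b (int cb = clr_b & 0x000000ff) where it presumably should come from clr_c, and process_combine is filled but never written into buff_rgb before the image is drawn. Below is a hedged sketch of the same packing step, written in C# like most of the other code on this page rather than Java; redMap, greenMap and blueMap are assumed to already hold the thresholded 0/255 value for each pixel.

```csharp
static class ChannelCombine
{
    // Packs three single-channel edge maps back into one opaque ARGB pixel each.
    public static int[] Combine(int[] redMap, int[] greenMap, int[] blueMap)
    {
        int alpha = unchecked((int)0xFF000000);   // opaque alpha
        int[] combined = new int[redMap.Length];
        for (int i = 0; i < combined.Length; i++)
        {
            int r = redMap[i] & 0xFF;
            int g = greenMap[i] & 0xFF;
            int b = blueMap[i] & 0xFF;            // read from the blue map here
            combined[i] = alpha | (r << 16) | (g << 8) | b;
        }
        return combined;
    }
}
```

    In the original Java, the combined values would still need to be copied into buff_rgb, for example with buff_rgb.setRGB(0, 0, width, height, process_combine, 0, width), before drawing it.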

    Read the article

  • How can I change mouse keymapping

    - by zuberuber
    I have Razer DeathAdder(left handed edition) and A4Tech wireless mouse. My problem is I don't know how to change wireless mouse keymapping(swaping left/right click). Can somebody guide me how to do such thing? List of my devices: ? Virtual core pointer id=2 [master pointer (3)] ? ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ? Logitech Unifying Device. Wireless PID:4004 id=8 [slave pointer (2)] ? ? Razer Razer DeathAdder id=11 [slave pointer (2)] ? ? A4TECH USB Device id=12 [slave pointer (2)] ? ? A4TECH USB Device id=13 [slave pointer (2)] ? Virtual core keyboard id=3 [master keyboard (2)] ? Virtual core XTEST keyboard id=5 [slave keyboard (3)] ? Power Button id=6 [slave keyboard (3)] ? Power Button id=7 [slave keyboard (3)] ? Logitech USB Keyboard id=9 [slave keyboard (3)] ? Logitech USB Keyboard id=10 [slave keyboard (3)] This is my Razer xinput: Device 'Razer Razer DeathAdder': Device Enabled (121): 1 Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000 Device Accel Profile (246): 0 Device Accel Constant Deceleration (247): 5.000000 Device Accel Adaptive Deceleration (248): 1.000000 Device Accel Velocity Scaling (249): 10.000000 Device Product ID (240): 5426, 22 Device Node (241): "/dev/input/event4" Evdev Axis Inversion (250): 0, 0 Evdev Axes Swap (252): 0 Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Vert Wheel" (274) Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243) Evdev Middle Button Emulation (255): 0 Evdev Middle Button Timeout (256): 50 Evdev Third Button Emulation (257): 0 Evdev Third Button Emulation Timeout (258): 1000 Evdev Third Button Emulation Button (259): 3 Evdev Third Button Emulation Threshold (260): 20 Evdev Wheel Emulation (261): 0 Evdev Wheel Emulation Axes (262): 0, 0, 4, 5 Evdev Wheel Emulation Inertia (263): 10 Evdev Wheel Emulation Timeout (264): 200 Evdev Wheel Emulation Button (265): 4 Evdev Drag Lock Buttons (266): 0 And this is my wireless mouse xinput: Device 'A4TECH USB Device': Device Enabled (121): 1 Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000 Device Accel Profile (246): 0 Device Accel Constant Deceleration (247): 1.000000 Device Accel Adaptive Deceleration (248): 1.000000 Device Accel Velocity Scaling (249): 10.000000 Device Product ID (240): 2522, 1359 Device Node (241): "/dev/input/event16" Evdev Axis Inversion (250): 0, 0 Evdev Axes Swap (252): 0 Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Horiz Wheel" (245), "Rel Vert Wheel" (274) Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), 
"Button Unknown" (243) Evdev Middle Button Emulation (255): 0 Evdev Middle Button Timeout (256): 50 Evdev Third Button Emulation (257): 0 Evdev Third Button Emulation Timeout (258): 1000 Evdev Third Button Emulation Button (259): 3 Evdev Third Button Emulation Threshold (260): 20 Evdev Wheel Emulation (261): 0 Evdev Wheel Emulation Axes (262): 0, 0, 4, 5 Evdev Wheel Emulation Inertia (263): 10 Evdev Wheel Emulation Timeout (264): 200 Evdev Wheel Emulation Button (265): 4 Evdev Drag Lock Buttons (266): 0

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier we discussed the common solution for reducing CXPACKET wait time. Today I am going to talk about a few other suggestions that can help reduce CXPACKET waits. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD – I totally agree. I covered them in detail in yesterday's blog post. Today we are going to discuss a few other ways CXPACKET can be reduced. Potential Reasons: If data is heavily skewed, there is a chance the query optimizer will not estimate the amount of data correctly, leading it to assign fewer threads to the query. This can easily lead to an uneven workload across threads and may create CXPACKET waits. If, while retrieving the data, one of the threads faces an IO, memory, or CPU bottleneck and has to wait for those resources to execute its tasks, that may create CXPACKET waits as well. The data being retrieved may sit on IO subsystems of different speeds. (This is not common and hardly possible, but there is a chance.) Higher fragmentation in some areas of the table can lead to less data per page. This may lead to CXPACKET waits. As I said, the reasons mentioned here are not the major cause of CXPACKET waits, but any of these scenarios can create the wait time. Best Practices to Reduce CXPACKET wait: Refer to the earlier article regarding MAXDOP and Cost Threshold. De-fragmentation of indexes can help, as more data can be obtained per page (assuming close to 100 fill-factor). If data is spread across multiple files that sit on multiple physical drives of similar speed, the CXPACKET wait may be reduced. Keep statistics updated, as this gives the query optimizer a better estimate when assigning threads and dividing the data among the available threads. Updating statistics can significantly improve the query optimizer's ability to produce a proper execution plan, which may affect the parallelism process in a positive way. Bad Practice: In one recent consultancy project, when I was called in I noticed that an 'experienced' DBA had seen high CXPACKET waits and, to reduce them, had increased the number of worker threads. The reality was that increasing worker threads had led to many other issues. With more threads, more memory was used, leading to memory pressure. With more threads, the CPU scheduler also faced more 'context switching', further degrading performance. When I explained all this to the 'experienced' DBA, he suggested that we should now reduce the number of threads. Not really! A lower number of threads may create heavy stalling for parallel queries. I suggest NOT touching the worker thread setting when dealing with CXPACKET waits. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats here is generic and varies from system to system. You are recommended to test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure it’s performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight to the system that you never had before.   ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.   USE ReportServerNative SELECT runningjobs.computername,             runningjobs.requestname,              runningjobs.startdate,             users.username,             Datediff(s,runningjobs.startdate, Getdate()) / 60 AS    'Active Minutes' FROM runningjobs INNER JOIN users ON runningjobs.userid = users.userid ORDER BY runningjobs.startdate               SSRS CATALOG:  We have all asked “What was the last thing that changed”, or better yet, “Who in the world did that!”.  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by who.           USE ReportServerNative SELECT DISTINCT catalog.PATH,                            catalog.name,                            users.username AS [Created By],                             catalog.creationdate,                            users_1.username AS [Modified By],                            catalog.modifieddate FROM catalog         INNER JOIN users ON catalog.createdbyid = users.userid  INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid WHERE ( catalog.name <> '' )               SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.         USE ReportServerNative SELECT catalog.name AS report,        executionlogstorage.username AS [User],        executionlogstorage.timestart,        executionlogstorage.timeend,         Datediff(mi,e.timestart,e.timeend) AS ‘Time In Minutes',        catalog.modifieddate AS [Report Last Modified],        users.username FROM   catalog  (nolock)        INNER JOIN executionlogstorage e (nolock)          ON catalog.itemid = executionlogstorage.reportid        INNER JOIN users (nolock)          ON catalog.modifiedbyid = users.userid WHERE  executionlogstorage.timestart >= Dateadd(s, -1, '03/31/2012')        AND executionlogstorage.timeend <= Dateadd(DAY, 1, '04/02/2012')      LONG RUNNING REPORTS:  This query will show the longest running reports over a given time period.  Note that the “>5” in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest running reports first.         
USE ReportServerNative SELECT executionlogstorage.instancename,        catalog.PATH,        catalog.name,        executionlogstorage.username,        executionlogstorage.timestart,        executionlogstorage.timeend,        Datediff(mi, e.timestart, e.timeend) AS 'Minutes',        executionlogstorage.timedataretrieval,        executionlogstorage.timeprocessing,        executionlogstorage.timerendering,        executionlogstorage.[RowCount],        users_1.username        AS createdby,        CONVERT(VARCHAR(10), catalog.creationdate, 101)        AS 'Creation Date',        users.username        AS modifiedby,        CONVERT(VARCHAR(10), catalog.modifieddate, 101)        AS 'Modified Date' FROM   executionlogstorage e         INNER JOIN catalog          ON executionlogstorage.reportid = catalog.itemid        INNER JOIN users          ON catalog.modifiedbyid = users.userid        INNER JOIN users AS users_1          ON catalog.createdbyid = users_1.userid WHERE  ( e.timestart > '03/31/2012' )        AND ( e.timestart <= '04/02/2012' )        AND  Datediff(mi, e.timestart, e.timeend) > 5        AND catalog.name <> '' ORDER  BY 'Minutes' DESC        I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources.  Work smarter, not harder!

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, all the growth with virtual machines, hosted machines, hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum to having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins finally can set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • Cocoa equivalent of the Carbon method getPtrSize

    - by Michael Minerva
    I need to translate the a carbon method into cocoa into and I am having trouble finding any documentation about what the carbon method getPtrSize really does. From the code I am translating it seems that it returns the byte representation of an image but that doesn't really match up with the name. Could someone give me a good explanation of this method or link me to some documentation that describes it. The code I am translating is in a common lisp implementation called MCL that has a bridge to carbon (I am translating into CCL which is a common lisp implementation with a Cocoa bridge). Here is the MCL code (#_before a method call means that it is a carbon method): (defmethod COPY-CONTENT-INTO ((Source inflatable-icon) (Destination inflatable-icon)) ;; check for size compatibility to avoid disaster (unless (and (= (rows Source) (rows Destination)) (= (columns Source) (columns Destination)) (= (#_getPtrSize (image Source)) (#_getPtrSize (image Destination)))) (error "cannot copy content of source into destination inflatable icon: incompatible sizes")) ;; given that they are the same size only copy content (setf (is-upright Destination) (is-upright Source)) (setf (height Destination) (height Source)) (setf (dz Destination) (dz Source)) (setf (surfaces Destination) (surfaces Source)) (setf (distance Destination) (distance Source)) ;; arrays (noise-map Source) ;; accessor makes array if needed (noise-map Destination) ;; ;; accessor makes array if needed (dotimes (Row (rows Source)) (dotimes (Column (columns Source)) (setf (aref (noise-map Destination) Row Column) (aref (noise-map Source) Row Column)) (setf (aref (altitudes Destination) Row Column) (aref (altitudes Source) Row Column)))) (setf (connectors Destination) (mapcar #'copy-instance (connectors Source))) (setf (visible-alpha-threshold Destination) (visible-alpha-threshold Source)) ;; copy Image: slow byte copy (dotimes (I (#_getPtrSize (image Source))) (%put-byte (image Destination) (%get-byte (image Source) i) i)) ;; flat texture optimization: do not copy texture-id -> destination should get its own texture id from OpenGL (setf (is-flat Destination) (is-flat Source)) ;; do not compile flat textures: the display list overhead slows things down by about 2x (setf (auto-compile Destination) (not (is-flat Source))) ;; to make change visible we have to reset the compiled flag (setf (is-compiled Destination) nil))

    Read the article

  • Python PyQt Timer Firmata

    - by George Cullins
    Hello. I am pretty new to Python, and I am working with Firmata to play around with an Arduino. Here is what I want to happen: set the Arduino up with an LED as a digital out; put a potentiometer on analog 0; set a PyQt timer up to update the potentiometer position in the application; set a threshold in PyQt to turn the LED on (the analog in has 10-bit resolution, values 0-1023, so say 800 as the threshold). I am using this firmata library: Link Here is the code that I am having trouble with: import sys from PyQt4 import QtCore, QtGui from firmata import * # Arduino setup self.a = Arduino('COM3') self.a.pin_mode(13, firmata.OUTPUT) # Create timer self.appTimer = QtCore.QTimer(self) self.appTimer.start(100) self.appTimer.event(self.updateAppTimer()) def updateAppTimer(self): self.analogPosition = self.a.analog_read(self, 0) self.ui.lblPositionValue.setNum() I am getting the error message: Traceback (most recent call last): File "D:\Programming\Eclipse\IO Demo\src\control.py", line 138, in myapp = MainWindow() File "D:\Programming\Eclipse\IO Demo\src\control.py", line 56, in init self.appTimer.event(self.updateAppTimer()) File "D:\Programming\Eclipse\IO Demo\src\control.py", line 60, in updateAppTimer self.analogPosition = self.a.analog_read(self, 0) TypeError: analog_read() takes exactly 2 arguments (3 given) If I take 'self' out, I get the same error message except that only 1 argument is given. What is Python doing implicitly that I am not aware of?

    Read the article

  • open flash chart rails x-axis issue

    - by Jimmy
    Hey guys, I am using open flash chart 2 in my rails application. Everything is looking smooth except for the range on my x axis. I am creating a line to represent cell phone plan cost over a specific amount of usage and I'm generate 8 values, 1-5 are below the allowed usage while 6-8 are demonstrations of the cost for usage over the limit. The problem I'm encountering is how to set the range of the X axis in ruby on rails to something specific to the data. Right now the values being displayed are the indexes of the array that I'm giving. When I try to hand a hash to the values the chart doesn't even load at all. So basically I need help getting a way to set the data for my line properly so that it displays correctly, right now it is treating every value as if it represents the x value of the index of the array. Here is a screen shot which may be a better description than what I am saying: http://i163.photobucket.com/albums/t286/Xeno56/Screenshot.png Note that those values are correct just the range on the x-axis is incorrect, it should be something like 100, 200, 300, 400, 500, 600, 700 Code: y = YAxis.new y.set_range(0,100, 20) x_legend = XLegend.new("Usage") x_legend.set_style('{font-size: 20px; color: #778877}') y_legend = YLegend.new("Cost") y_legend.set_style('{font-size: 20px; color: #770077}') chart =OpenFlashChart.new chart.set_x_legend(x_legend) chart.set_y_legend(y_legend) chart.y_axis = y line = Line.new line.text = plan.name line.width = 2 line.color = '#006633' line.dot_size = 2 line.values = generate_data(plan) chart.add_element(line) def generate_data(plan) values = [] #generate below threshold numbers 5.times do |x| usage = plan.usage / 5 * x cost = plan.cost * 10 values << cost end #generate above threshold numbers 3.times do |x| usage = plan.usage + ((plan.usage / 5) * x) cost = plan.cost + (usage * plan.overage) values << cost end return values end

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >