Search Results

Search found 3701 results on 149 pages for 'cost threshold for parall'.


  • Keyboard shortcut for moving a window to another screen

    - by wcoenen
    When working with two (or more) screens, a common problem is that launched applications appear on the "wrong" screen. I especially find this annoying when launching a text editor from the command line, because I have to leave the home row with my right hand in order to drag the window to the "right" screen before I can continue typing. Is it possible to define a keyboard shortcut which moves the current application to the other/next screen? Edit: I'm using Windows XP, but it's good to know that the feature already exists in Windows 7. Edit2: I went for the AutoHotkey script. This adaptation works for me:

    #q::
    WinGetPos, winx, winy,,, A
    WinGet, mm, MinMax, A
    WinRestore, A
    If (winx > 1270)
    {
        newx := winx-1270
        OutputDebug, Moving left from %winx% to %newx%
    }
    else
    {
        newx := winx+1270
        OutputDebug, Moving right from %winx% to %newx%
    }
    WinMove, A,, newx, winy
    if mm=1
        WinMaximize, A
    Return

    I did have to make use of the OutputDebug statements and DbgView to discover the proper threshold value 1270 for moving left or right. The exact threshold is especially important when moving maximized windows to the left.

    Read the article

  • Matlab code works with one version but not the other

    - by user1325655
    I have code that works in MATLAB R2010a but shows errors in MATLAB R2008a. I am trying to implement a self-organizing fuzzy neural network with extended Kalman filter. I have the code running, but it only works in R2010a; it doesn't work with other versions. Any help? Code attached:

    function [ c, sigma , W_output ] = SOFNN( X, d, Kd )
    %SOFNN Self-Organizing Fuzzy Neural Networks
    %Input Parameters
    % X(r,n) - rth training data from nth observation
    % d(n)   - the desired output of the network (must be a row vector)
    % Kd(r)  - predefined distance threshold for the rth input
    %Output Parameters
    % c(IndexInputVariable,IndexNeuron)
    % sigma(IndexInputVariable,IndexNeuron)
    % W_output is a vector

    %Setting up Parameters for SOFNN
    SigmaZero=4;
    delta=0.12;
    threshold=0.1354;
    k_sigma=1.12;
    %For more accurate results uncomment the following
    %format long;

    %Implementation of a SOFNN model
    [size_R,size_N]=size(X); %size_R - the number of input variables
    c=[];
    sigma=[];
    W_output=[];
    u=0; % the number of neurons in the structure
    Q=[];
    O=[];
    Psi=[];
    for n=1:size_N
        x=X(:,n);
        if u==0 % No neuron in the structure?
            c=x;
            sigma=SigmaZero*ones(size_R,1);
            u=1;
            Psi=GetMePsi(X,c,sigma);
            [Q,O] = UpdateStructure(X,Psi,d);
            pT_n=GetMeGreatPsi(x,Psi(n,:))';
        else
            [Q,O,pT_n] = UpdateStructureRecursively(X,Psi,Q,O,d,n);
        end;
        KeepSpinning=true;
        while KeepSpinning
            %Calculate the error and if-part criteria
            ae=abs(d(n)-pT_n*O); %approximation error
            % note: the '~' output placeholder below requires MATLAB R2009b or
            % later - this syntax errors on R2008a, which may be the problem
            [phi,~]=GetMePhi(x,c,sigma);
            [maxphi,maxindex]=max(phi); % maxindex refers to the neuron's index
            if ae>delta
                if maxphi<threshold
                    %enlarge width
                    [minsigma,minindex]=min(sigma(:,maxindex));
                    sigma(minindex,maxindex)=k_sigma*minsigma;
                    Psi=GetMePsi(X,c,sigma);
                    [Q,O] = UpdateStructure(X,Psi,d);
                    pT_n=GetMeGreatPsi(x,Psi(n,:))';
                else
                    %Add a new neuron and update structure
                    ctemp=[];
                    sigmatemp=[];
                    dist=0;
                    for r=1:size_R
                        dist=abs(x(r)-c(r,1));
                        distIndex=1;
                        for j=2:u
                            if abs(x(r)-c(r,j))<dist
                                distIndex=j;
                                dist=abs(x(r)-c(r,j));
                            end;
                        end;
                        if dist<=Kd(r)
                            ctemp=[ctemp; c(r,distIndex)];
                            sigmatemp=[sigmatemp ; sigma(r,distIndex)];
                        else
                            ctemp=[ctemp; x(r)];
                            sigmatemp=[sigmatemp ; dist];
                        end;
                    end;
                    c=[c ctemp];
                    sigma=[sigma sigmatemp];
                    Psi=GetMePsi(X,c,sigma);
                    [Q,O] = UpdateStructure(X,Psi,d);
                    KeepSpinning=false;
                    u=u+1;
                end;
            else
                if maxphi<threshold
                    %enlarge width
                    [minsigma,minindex]=min(sigma(:,maxindex));
                    sigma(minindex,maxindex)=k_sigma*minsigma;
                    Psi=GetMePsi(X,c,sigma);
                    [Q,O] = UpdateStructure(X,Psi,d);
                    pT_n=GetMeGreatPsi(x,Psi(n,:))';
                else
                    %Do nothing and exit the while
                    KeepSpinning=false;
                end;
            end;
        end;
    end;
    W_output=O;
    end

    function [Q_next, O_next,pT_n] = UpdateStructureRecursively(X,Psi,Q,O,d,n)
    %O=O(t-1)  O_next=O(t)
    p_n=GetMeGreatPsi(X(:,n),Psi(n,:));
    pT_n=p_n';
    ee=abs(d(n)-pT_n*O); %|e(t)|
    temp=1+pT_n*Q*p_n;
    ae=abs(ee/temp);
    if ee>=ae
        L=Q*p_n*(temp)^(-1);
        Q_next=(eye(length(Q))-L*pT_n)*Q;
        O_next=O + L*ee;
    else
        Q_next=eye(length(Q))*Q;
        O_next=O;
    end;
    end

    function [ Q , O ] = UpdateStructure(X,Psi,d)
    GreatPsiBig = GetMeGreatPsi(X,Psi);
    %M=u*(r+1)
    %n - the number of observations
    [M,~]=size(GreatPsiBig);
    %Other ways of getting Q=[P^T(t)*P(t)]^-1
    %**************************************************************************
    %opts.SYM = true;
    %Q = linsolve(GreatPsiBig*GreatPsiBig',eye(M),opts);
    %
    %Q = inv(GreatPsiBig*GreatPsiBig');
    %Q = pinv(GreatPsiBig*GreatPsiBig');
    %**************************************************************************
    Y=GreatPsiBig\eye(M);
    Q=GreatPsiBig'\Y;
    O=Q*GreatPsiBig*d';
    end

    %This function works too with x
    % (X=X and Psi is a Matrix)              - Gets you the whole GreatPsi
    % (X=x and Psi is the row related to x)  - Gets you just the column related with the observation
    function [GreatPsi] = GetMeGreatPsi(X,Psi)
    %Psi - In a row you go through the neurons and in a column you go through the
    %number of observations **** Psi(#obs,IndexNeuron) ****
    GreatPsi=[];
    [N,U]=size(Psi);
    for n=1:N
        x=X(:,n);
        GreatPsiCol=[];
        for u=1:U
            GreatPsiCol=[ GreatPsiCol ; Psi(n,u)*[1; x] ];
        end;
        GreatPsi=[GreatPsi GreatPsiCol];
    end;
    end

    function [phi, SumPhi]=GetMePhi(x,c,sigma)
    [r,u]=size(c);
    %u - the number of neurons in the structure
    %r - the number of input variables
    phi=[];
    SumPhi=0;
    for j=1:u % moving through the neurons
        S=0;
        for i=1:r % moving through the input variables
            S = S + ((x(i) - c(i,j))^2) / (2*sigma(i,j)^2);
        end;
        phi = [phi exp(-S)];
        SumPhi = SumPhi + phi(j); %phi(u)=exp(-S)
    end;
    end

    %This function works too with x, it will give you the row related to x
    function [Psi] = GetMePsi(X,c,sigma)
    [~,u]=size(c);
    [~,size_N]=size(X);
    %u      - the number of neurons in the structure
    %size_N - the number of observations
    Psi=[];
    for n=1:size_N
        [phi, SumPhi]=GetMePhi(X(:,n),c,sigma);
        PsiTemp=[];
        for j=1:u
            %PsiTemp is a row vector ex: [1 2 3]
            PsiTemp(j)=phi(j)/SumPhi;
        end;
        Psi=[Psi; PsiTemp];
        %Psi - In a row you go through the neurons and in a column you go through
        %the number of observations **** Psi(#obs,IndexNeuron) ****
    end;
    end

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign-up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required. For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools. The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
    - MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads. Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately.

    Web Sites: Web Sockets Support

    With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
    The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio. For example, below I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio. Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call each.

    New support for using tag expressions to enable advanced customer segmentation

    With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    - Push content to multiple segments in a single message—e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    - Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

    var notification = new WindowsNotification(messagePayload);
    hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: To "all my group except me" - group:id && !user:id
    - Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    - Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    - Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services.
    I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. Below is a screen-shot of the Git repository hosted within Team Foundation Services. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don't have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. This will create a dialog like below. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie"). When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today's Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic. Select "New Relic" within the dialog above, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version" – which does not cost anything and can be used forever. Once we've signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We'll simply re-publish the web site to Windows Azure and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed-up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
    The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region)
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    - Error information including call stack details for exceptions that have occurred at runtime
    - SQL Server profiling information – including which queries executed against your database and what their performance was
    - And a whole bunch more…

    The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to produce these reports. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven't tried New Relic out yet with Windows Azure I recommend you do so – I think you'll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today's release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (a minimal code sketch of doing this programmatically appears below, after the billing section). Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today's Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert first sign-up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the "add alert" button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
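    As a programmatic companion to the partitioned-queue support described above, here is a minimal sketch (not from the original post; the queue name and connection string are placeholders) using the NamespaceManager and QueueDescription types from the Service Bus SDK:

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class PartitionedQueueSample
    {
        static void Main()
        {
            // Placeholder connection string for your Service Bus namespace.
            var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;...";
            var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

            // EnablePartitioning spreads the queue across multiple message brokers
            // and messaging stores - the throughput and availability gain described above.
            var description = new QueueDescription("ordersqueue") { EnablePartitioning = true };

            if (!namespaceManager.QueueExists(description.Path))
            {
                namespaceManager.CreateQueue(description);
            }
            Console.WriteLine("Partitioned queue ready: " + description.Path);
        }
    }

    Sending and receiving against a partitioned queue works exactly as it does for a regular queue; the partitioning is transparent to QueueClient code.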
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… these questions keep coming up and all try to fundamentally explain what "cloud" means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the primary foundations of cloud computing: Elasticity and Availability.

    Elasticity

    The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…). This also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge; when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you in making your applications elastic; neither will Virtual Machines. The smallest elasticity unit of an IaaS provider and a Virtual Machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough. The application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.

    Availability

    The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of that, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact it is so complex that many companies establish yearly tests to validate failover procedures. To a certain degree, certain IaaS providers can assist with complex disaster recovery planning and setting up data centers that can achieve successful failover. However the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing on the other hand removes most of the disaster recovery requirements by hiding many of the underlying complexities.

    Cloud Providers

    A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise.
    For example Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which allows you to scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions, or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customers' shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.

    Not for Everyone

    Note however that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional host provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that build their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.

    Consistent Customer Experience, Predictable Cost

    With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to the first, while keeping your operating costs directly proportional to the number of customers you have. Making your applications elastic and highly available across all their tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective.

    Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focused on cloud computing technologies and on helping customers understand and adopt them. For more information contact herve at hroggero @ bluesyntax.net.

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview

    Needless to say, some ATG applications are more complex than others. Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model. The really complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple of different development teams, multiple business teams, and a highly complex business model (and processes to go along with it). While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for the complex applications. Why? It's all about time and money. If you are unable to manage your complex applications in an efficient manner, the cost of managing them will increase dramatically, as will the time to get things done (time to market). On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are. This article is intended to discuss a number of key areas to think about when designing complex applications on ATG. Some of this can get fairly technical, so it may help to get some background first. You can get enough of the required background information from this post. After reading that, come back here and follow along.

    Application Design

    Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries. To get started, let's assume that we are talking about an application like that - one that has these properties:

    - Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies
    - The organization is fairly loosely-coupled - single brand, but different businesses across different countries
    - There is some common functionality across all sites in all countries
    - There is some common functionality across different sites within the same country
    - Sites within a single country may have some unique functionality - relative to other sites in the same country
    - Complex product catalog (mostly in terms of bundles, eligibility, and compatibility)

    At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work...

    Code / configuration - assemble into modules

    When it comes to defining your modules for a complex application, there are a number of goals:

    - Divide functionality between the modules in a way that maps to your business
    - Group common functionality 'further down in the stack of modules'
    - Provide a good balance between shared resources and autonomy for countries / sites

    Now I'll describe a high level approach to how you could accomplish those goals... Let's start from the bottom and work our way up. At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff. You want to make sure that you are leveraging all the modules that make sense in order to get the most value from ATG as possible - and less stuff you'll have to write yourself.
    On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module, described as follows:

    - Sits directly on top of ATG modules
    - Used by all applications across all countries and sites - this is the foundation for everyone
    - Contains everything that is common across all countries / all sites
    - Once established and settled, will change less frequently than other 'higher' modules
    - Encapsulates as many enterprise-wide integrations as possible
    - Will provide means of code sharing, therefore less development / testing - faster time to market
    - Contains a 'reference' web application (described below)

    The next layer up could be multiple modules for each country (you could replace this with region if that makes more sense). We'll define those modules as follows:

    - Sits on top of the corporate foundation module
    - Contains what is unique to all sites in a given country
    - Responsible for managing any resource bundles for this country (to handle multiple languages)
    - Overrides / replaces corporate integration points with any country-specific ones

    Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site as follows:

    - Sits on top of the module for the country it resides in
    - Contains what is unique for a given site within a given country
    - Will mostly contain configuration, but could also define some unique functionality as well
    - Contains one or more web applications

    The graphic below should help to indicate how these modules fit together (a hypothetical module manifest illustrating this layering appears at the end of this article).

    Web applications

    As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules. Web applications are also contained within ATG modules, however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging. One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site. Here's a description of the 'reference' web application:

    - Contains minimal / sample reference styling as this will mostly be addressed at the site level web app
    - Focuses on functionality - ensures that core functionality is revealed via this web application
    - Each individual site can use this as a starting point
    - There may be multiple types of web apps (i.e. B2C, B2B, etc)

    There are some techniques to share web application assets - i.e. multiple web applications, defined in the web.xml - and it's worth investigating, but that is out of scope here.

    Reference infrastructure

    In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites. It's more likely that different countries (or regions) could have their own solution for infrastructure. In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment. Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as its baseline cost and the incremental cost to scale up with volume. Having some consistency in terms of infrastructure will save time and money as new countries / sites come online. Here are some properties of the reference infrastructure:

    - Standardized approach to setup of hardware - type and number of servers
    - Defines application server, operating system, database, etc... - including vendor and specific versions
    - Consistent naming conventions - provides a consistent base of terminology and understanding across environments
    - Defines which ATG services run on which servers (Production, Staging, BCC / Preview)
    - Each site can change as required to meet scale requirements

    Governance / organization

    It should be no surprise that the complex application we're talking about is backed by an equally complex organization. One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization. Here are some ideas and goals to work towards:

    - Establish a committee to make enterprise-wide decisions that affect all sites. Representation should be evenly distributed, there should be a clear communication procedure, and the focus should be on high level business goals.
    - Evaluate feature / function gaps and how they relate to the ATG release schedule / roadmap. Determine when to upgrade, and ensure the value will be realized.
    - Determine how to manage the various levels of modules: who is responsible for maintaining the corporate / country / site layers, and what the procedure is for controlling what goes in the corporate foundation module.
    - Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc. Only use tested / proven versions - this is something that should be centralized so that every country / site does not have to worry about compatibility between versions.
    - Create an innovation team to quickly develop new features and perform proofs of concept. All teams can benefit from their findings.

    Summary

    At this point, it should be clear why the topics above (design, governance, organization, etc) are critical to being able to efficiently manage a complex application. To summarize, it's all about competitive advantage... You will need to reduce costs and improve time to market with the goal of providing a better experience for your end customers. You can reduce cost by reducing development time, time allocated to testing (you don't have to test the corporate foundation module over and over again - do it once), and optimizing operations. With an efficient design, you can improve your time to market and your business will be more flexible and agile. Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity) and this will be rewarded - you're now a leader. In addition to the above, you'll realize soft benefits as well. Your staff will be operating in a culture based on sharing. You'll want to reward efforts to improve and enhance the foundation as this will benefit everyone. This culture will inspire innovation, which can only lend itself to your competitive advantage.
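    To ground the module layering described above, here is a hedged sketch of what a site-level module's META-INF/MANIFEST.MF might declare. The module names and paths are invented for illustration; ATG-Required, ATG-Config-Path, and ATG-Class-Path are the standard ATG manifest attributes for expressing module dependencies, configuration layering, and code:

    ATG-Required: MyCorp.Foundation MyCorp.US
    ATG-Config-Path: config/
    ATG-Class-Path: lib/classes.jar

    Because ATG resolves configuration along the module chain, a property set in the hypothetical MyCorp.Foundation module can be overridden in MyCorp.US and again in the site module - which is exactly the corporate / country / site layering this article advocates.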

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes

    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    There are key characteristics of memory…

    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of Transfer – the number of bits read out of or written into memory at a time
    - Access Method – sequential, direct, random or associative

    From a user's perspective the two most important characteristics of memory are…

    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes…

    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs…

    - Decreasing cost per bit
    - Increasing capacity
    - Increasing access time
    - Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways…

    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designed for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures…

    - Cache Addresses
    - Cache Size
    - Mapping Function
    - Replacement Algorithm
    - Write Policy
    - Line Size
    - Number of Caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor or between the MMU and main memory.
    The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used (a small code sketch of the direct technique appears after this section)…

    - Direct – simplest technique, maps each block of main memory into only one possible cache line
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

    For detailed explanations of each approach – read the text book (page 148 – 154).

    Replacement Algorithm

    For associative and set associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches…

    - LRU (Least recently used)
    - FIFO (First in first out)
    - LFU (Least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider: if no writes to that block have happened in the cache, it can simply be discarded; if a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to the main memory. There are several approaches to achieve this, including…

    - Write Through – all writes to the cache are done to the main memory as well at the point of the change
    - Write Back – when a block is replaced, all dirty bits are written back to main memory

    The problem is complicated when we have multiple caches; there are techniques to accommodate for this but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play…

    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after they are fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.

    The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
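    As a concrete illustration of the direct mapping function and the two-level average access time discussed above, here is a minimal sketch (not from the textbook; the cache geometry, timings and hit ratio are hypothetical values chosen for illustration):

    using System;

    class DirectMappedCacheDemo
    {
        // Hypothetical geometry: 64-byte lines, 1024 lines (a 64 KB cache).
        const uint BlockSizeBytes = 64;
        const uint NumLines = 1024;

        static void Main()
        {
            uint address = 0x00123ABC;

            uint offset = address % BlockSizeBytes;      // byte within the block
            uint blockNumber = address / BlockSizeBytes; // main memory block number
            uint line = blockNumber % NumLines;          // direct mapping: only one possible line
            uint tag = blockNumber / NumLines;           // distinguishes blocks sharing that line

            Console.WriteLine($"0x{address:X8} -> tag={tag}, line={line}, offset={offset}");

            // Two-level average access time: with hit ratio H, cache access time T1
            // and main memory access time T2, Tavg = H*T1 + (1-H)*(T1+T2).
            double H = 0.95, T1 = 2.0, T2 = 60.0; // illustrative values in ns
            double Tavg = H * T1 + (1 - H) * (T1 + T2);
            Console.WriteLine($"Average access time = {Tavg} ns");
        }
    }

    With these illustrative numbers, a 95% hit ratio keeps the average access time at 5.0 ns even though main memory is thirty times slower than the cache, which is the point of conditions 1 to 4 above.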
Pentium 4 and ARM cache organizations The processor core consists of four major components: Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources
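
A closing note on the hierarchy: with L1, L2 and L3 caches in play, the "decreasing frequency of access" idea can be quantified with the standard average-access-time formula. A minimal sketch, assuming illustrative timings and a simple miss-penalty model rather than figures for any real processor:

using System;

class MemoryHierarchy
{
    // Average access time for a two-level memory: on a hit we pay the fast
    // level only; on a miss we pay the fast level plus the slower one.
    // The same formula nests for L2 -> L3 -> main memory.
    static double AverageNs(double hitRatio, double fastNs, double slowNs)
        => hitRatio * fastNs + (1.0 - hitRatio) * (fastNs + slowNs);

    static void Main()
    {
        // Illustrative timings only: 1 ns L1, 10 ns L2, 60 ns main memory.
        double l2AndBelow = AverageNs(0.90, 10, 60);
        foreach (double h1 in new[] { 0.80, 0.95, 0.99 })
            Console.WriteLine($"L1 hit ratio {h1:P0}: {AverageNs(h1, 1, l2AndBelow):F2} ns average");
    }
}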

    Read the article

  • OWB 11gR2 &ndash; OLAP and Simba

    - by David Allan
Oracle Warehouse Builder was the first ETL product to provide a single integrated and complete environment for managing enterprise data warehouse solutions that also incorporate multi-dimensional schemas. The OWB 11gR2 release provides Oracle OLAP 11g deployment for multi-dimensional models (in addition to support for prior releases of OLAP). This means users can easily utilize Simba's MDX Provider for Oracle OLAP (see here for details and cost), which allows you to use the powerful and popular ad hoc query and analysis capabilities of Microsoft Excel PivotTables® and PivotCharts® with your Oracle OLAP business intelligence data. The extensions to the dimensional modeling capabilities have been built on established relational concepts, with the option to seamlessly move from a relational deployment model to a multi-dimensional model at the click of a button. This now means that ETL designers can logically model a complete data warehouse solution using one single tool and control the physical implementation of a logical model at deployment time. As a result, data warehouse projects that need to provide a multi-dimensional model as part of the overall solution can be designed and implemented faster and more efficiently. Wizards for dimensions and cubes let you quickly build dimensional models and realize them either relationally or as an Oracle database OLAP implementation; both 10g and 11g formats are supported based on a configuration option. The wizard provides a good first-cut definition, and the objects can be further refined in the editor. Both wizards let you choose the implementation; to deploy to OLAP in the database, select MOLAP: multidimensional storage. You will then be asked what levels and attributes are to be defined; by default the wizard creates a level-based hierarchy, and parent-child hierarchies can be defined in the editor. Once the dimension or cube has been designed, there are special mapping operators that make it easy to load data into the objects; below we load a constant value for the total level and the other levels from a source table. Again, when the cube is defined using the wizard, we can edit the cube and define a number of analytic calculations by using the 'generate calculated measures' option on the measures panel. This lets you very easily add a lot of rich analytic measures to your cube. For example, one of the measures is the percentage difference from a year ago, which we can see in detail below. You can also add your own custom calculations to leverage the capabilities of the Oracle OLAP option, either by selecting existing template types such as moving averages or by defining true custom expressions. The 11g OLAP option now supports percentage-based summarization (the amount of data to precompute and store); this is available from the option 'cost based aggregation' in the cube's configuration. Ensure all measure-dimensions level based aggregation is switched off (on the cube-dimension panel) - previously level based aggregation was the only option. The 11g generated code now uses the new unified API as you see below; to generate the code, OWB needs a valid connection to a real schema. This was not needed before 11gR2 and is a new requirement, since the OLAP API which OWB uses is not an offline one. Once all of the objects are deployed and the maps executed, then we get to the fun stuff! How can we analyze the data? 
One option which is powerful and at many users' fingertips is using Microsoft Excel PivotTables® and PivotCharts®, which can be used with your Oracle OLAP business intelligence data by utilizing Simba's MDX Provider for Oracle OLAP (see the Simba site for details of cost). I'll leave the exotic reporting illustrations to the experts (see Bud's demonstration here), but with Simba's MDX Provider for Oracle OLAP it's very simple to access the analytics stored in the database (all built and loaded via the OWB 11gR2 release) and get the regular features of Excel at your fingertips, such as the conditional formatting features. That's a very quick run-through of OWB 11gR2 with respect to Oracle 11g OLAP integration and the reporting using Simba's MDX Provider for Oracle OLAP. Not a deep dive in any way, but a quick overview to illustrate the design capabilities and integrations possible.

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers. Simplicity To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff. Productivity The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates. Lower TCO Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. 
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Backing up my Windows Home Server to the Cloud&hellip;

    - by eddraper
Ok, here’s my scenario: Windows Home Server with a little over 3TB of storage. This includes many years of our home network’s PC backups, music, videos, etcetera. I’d like to get a backup off-site, and the existing APIs and apps such as CloudBerry Labs WHS Backup service are making it easy. Now, all it comes down to is the vendor and the cost of the actual storage. So, I thought I’d take a lazy Saturday morning, do some research on this, and get the ball rolling. What I discovered stunned me… First off, the pricing for just about everything was loaded with complexity. I learned that it wasn’t just about storage… it was about network usage, requests, sites, replication, and on and on. I really don’t see this as rocket science. I have a disk image. I want to put it in the cloud. I’m not going to be using it but once daily for incremental backups. Sounds like a common scenario. Yes, if “things get real” and my server goes down, I will need to bring down a lot of data and utilize a fair amount of vendor infrastructure. However, this may never happen. Offsite storage is an insurance policy. The complexity of the cost structures, perhaps by design, creates an environment where it’s incredibly hard to model bottom-line costs and compare vendor all-up pricing. As it is a “lazy Saturday morning,” I was not in the mood for such antics and decided to shirk the endeavor entirely. Thus, I simply fired up calc.exe and built a simple arithmetic model based on price per GB. I shuddered at the results. Certainly something was wrong… did I misplace a decimal point? Then I discovered CloudBerry’s own calculator. Nope, I hadn’t misplaced those decimals after all. Check it out (pricing based on 3174 GB):

Amazon S3: $398.00 per month / $4761 per year
Azure: $396.75 per month / $4761 per year
Google: $380.88 per month / $4570.56 per year

Conclusion: Rampant crack smoking at vendors. Seriously. Out. Of. Their. Minds. Now, to Amazon’s credit, vision, and outright common sense, they had one offering which directly addresses my scenario:

Amazon Glacier: $31.74 per month / $380.88 per year

hmmm… It’s on the table. Let’s see what it would cost to just buy some drives and an enclosure and cart them over to a friend’s house.

2 x 2TB drives from NewEgg.com: $199.99
Enclosure: $39.99
Total: $239.98

Carting data back and forth to a friend’s house within walking distance: pain. Leaving the drive unplugged at the friend’s house: $0 for electricity. Possible data loss: no way I can come and go every day. I think I’ll think on this a bit more…
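
For what it's worth, the lazy-Saturday arithmetic is easy to reproduce in code. Here is a sketch using per-GB monthly rates backed out of the calculator figures above; the rates are my own assumptions derived from those numbers, not vendor quotes.

using System;

class CloudCostModel
{
    static void Main()
    {
        const double gb = 3174.0;
        // Monthly $/GB rates implied by the figures above (assumptions, not quotes).
        var vendors = new (string Name, double PerGb)[]
        {
            ("Amazon S3", 0.1254), ("Azure", 0.1250),
            ("Google", 0.1200), ("Amazon Glacier", 0.0100),
        };
        foreach (var v in vendors)
        {
            double monthly = gb * v.PerGb;
            Console.WriteLine($"{v.Name,-15} ${monthly,8:F2}/mo  ${monthly * 12,9:F2}/yr");
        }
    }
}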

    Read the article

  • Lies, damned lies, and statistics Part 2

    - by Maria Colgan
There was huge interest in our OOW session last year on Managing Optimizer Statistics. It seems statistics, and the maintenance of them, continue to baffle people. In order to help dispel the mysteries surrounding statistics management, we have created a two-part white paper series on Optimizer statistics. Part one of this series was released in November last year and describes in detail, with worked examples, the different concepts of Optimizer statistics. Today we have published part two of the series, which focuses on the best practices for gathering statistics and examines specific use cases including the fears that surround histograms and statistics management of volatile tables like Global Temporary Tables. Here is a quick look at the Introduction and the start of the paper. You can find the full paper here. Happy Reading! Introduction The Oracle Optimizer examines all of the possible plans for a SQL statement and picks the one with the lowest cost, where cost represents the estimated resource usage for a given plan. In order for the Optimizer to accurately determine the cost for an execution plan, it must have information about all of the objects (tables and indexes) accessed in the SQL statement as well as information about the system on which the SQL statement will be run. This necessary information is commonly referred to as Optimizer statistics. Understanding and managing Optimizer statistics is key to optimal SQL execution. Knowing when and how to gather statistics in a timely manner is critical to maintaining acceptable performance. This whitepaper is the second of a two-part series on Optimizer statistics. The first part of this series, Understanding Optimizer Statistics, focuses on the concepts of statistics and will be referenced several times in this paper as a source of additional information. This paper will discuss in detail when and how to gather statistics for the most common scenarios seen in an Oracle Database. The topics are · How to gather statistics · When to gather statistics · Improving the efficiency of gathering statistics · When not to gather statistics · Gathering other types of statistics How to gather statistics The preferred method for gathering statistics in Oracle is to use the supplied automatic statistics-gathering job. Automatic statistics gathering job The job collects statistics for all database objects which are missing statistics or have stale statistics, by running an Oracle AutoTask task during a predefined maintenance window. Oracle internally prioritizes the database objects that require statistics, so that those objects which most need updated statistics are processed first. The automatic statistics-gathering job uses the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure, which uses the same default parameter values as the other DBMS_STATS.GATHER_*_STATS procedures. The defaults are sufficient in most cases. However, it is occasionally necessary to change the default value of one of the statistics gathering parameters, which can be accomplished by using the DBMS_STATS.SET_*_PREF procedures. 
Parameter values should be changed at the smallest scope possible, ideally on a per-object basis. You can find the full paper here. Happy Reading! +Maria Colgan

    Read the article

  • CodePlex Daily Summary for Saturday, June 29, 2013

    CodePlex Daily Summary for Saturday, June 29, 2013Popular ReleasesAscend 3D: Ascend 2.0.1: Moved model loading into SceneNode.Load static method Updated AscendViewer to use latest Ascend buildUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????SQL Server Data Compare: DB Compare 0.1 Beta 1: Some bugs fixed. Do not forget to add reviews. :)Hogeschool Rotterdam Windows Phone Maps project: HRO Maps Sourcecode VS2010: Initiele versieUniversal Visualnovel Engine Tools: ns2uve: NS2UVE ONS???????? ????:.Net Framework 4.0 ????:UVE for WP8 1.2?? ??:update 2????????! ????1.?ONS???????????,???????nscript.dat?Icon.png??,???arc.nsa?default.ttf??。 2.??????,????bin??????exe?????dll???????????????。 3.??ns2uve.exe,??????,?????,????????? 4.?????????????.png??? ??:??????????nscript.dat??Icon.png?,??????。?????????????。 ????????????src???? CopyRight W-Otaku DEVAdjusting SharePoint Site Quota PowerShell: Adjusting.SharePoint.Site.Quota: Version 1.0 Features Display Database Size Display Quota Warning Threshold Display Quota Maximum Threshold Display Site Space Usage Change Quota Warning Threshold Change Quota Maximum ThresholdOutlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. 
Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsWPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...New ProjectsA sample web app for AppHarbor: Just a little project to test the amazing apphorbor offering!Android_Traffic_Tracker: Android traffic trackingEASTester: EASTester This application shows how encoding, decoding and submission of Exchange Server ActiveSync (EAS) calls might be done. 
Everynet_TFS_SVN: this is a everynet projectFluentRoute: Make the task of configure ASP.NET MVC Routes much more easier! This lib gives you the possibility of using Fluent Configurafion,style.Google Music for Jamcast: This project adds Google Music browse and playback capabilities to Jamcast, a DLNA media server for Windows.GussanoExtension: My summaryHL7 SDK - Open Source CDA R2 Implemenation for .NET and COM: A set of open source libraries for creating, parsing, storing and converting HL7 Clinical Documents in .NET and COM environment.Hogeschool Rotterdam Windows Phone Maps project: Dit is een project gemaakt voor het vak INFPRJ07DT voor de Hogeschool Rotterdam. Hue For Both (Build 2013): A simple MVVM project for controlling Philips Hue lights on Windows 8 and Windows Phone 8.Key2Screen: This little helper will show all keystrokes on screen. This will be needed during a Kata to show the audience the uses keyboard shortcuts or to record them.MercerGOLD: A tool for managing information on worldwide employee benefits, compensation and human resource programs.Microsoft CRM 2011 True Unique Autonumber Creator: The Microsoft CRM 2011 True Unique Autonumber Creator provides functionality for generating unique numbers for any entity. Work for On-Premises and Online/CloudNewsAlerts: this is news ALERT PROJECTNTmdb: A wrapper for the TMDb API written in C# .Net 4.5.PowerShellCron: Windows Service to Schedule and Run PowerShell Scripts with Full Logging to Database of script output (all streams, including Write-Host).QlikView Extension - WebPageViewer2: QlikView Extension to display a web page in QlikView.Restafari - The REST Client Base: A REST Client base for your .Net projects. It is compatible with: - .Net 4.5 - .Net 4.0 - Windows Phone 8 - Windows Store applicationsRevolution Of SnowWhite: PC??????SharePoint Silverlight CSV Importer: Convert .csv data into SharePoint list items. A slick Silverlight control to map and import a .csv files to a sharepoint list. Includes transforms and keys.Silverlight AWS S3 Uploader: Silverlight 5 app to upload files to AWS S3SIM Card Manager: A Windows tool to read SIM card information and contentSoCafeShop: SoCafeShopTeam Foundation Server 2012 Sample Work Items for MSF Agile, CMMI, & SCRUM: This project provides sample work items that can be used in your MSF Agile, MSF CMMI, or SCRUM 2.0 Process Templaces in Team Foundation Server 2012.TidyVaca - an SF inspired restyle of the Tidy responsive Skin by Adammer: A San Francisco inspired restyling of Adammer's Tidy responsive skin.Tiny Forms Controls: The goal of this project is to create a library of Windows Forms and Web Forms controls and components.TMYS: Deneme projesi önemli bir sey degil

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands. Also in the Open Source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability - via failure detection and failover - and scalability through automated data sharding. Oracle Database, Middleware and Technology The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications while reducing costs and risk. Applications According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies—one for B2B and one for B2C—surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals. In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle’s JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle’s PeopleSoft, Oracle's Siebel CRM, Oracle’s JD Edwards EnterpriseOne offering high-performing analytics at a lower cost. Industries For the Communications Industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. 
This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization – including increased service agility and improved network resource sharing. And for the Utilities Industry, Oracle is releasing solutions with new business features and enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases for its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • Introducing the First Global Web Experience Management Content Management System

    - by kellsey.ruppel
By Calvin Scharffs, VP of Marketing and Product Development, Lingotek Globalizing online content is more important than ever. The total spending power of online consumers around the world is nearly $50 trillion, a recent Common Sense Advisory report found. Three years ago, enterprises would have had to translate content into 37 languages to reach 98 percent of Internet users. This year, it takes 48 languages to reach the same share of users. For companies seeking to increase global market share, “translate frequently and fast” is the name of the game. Today’s content is dynamic and ever-changing, covering the gamut from social media sites to company forums to press releases. With high-quality translation and localization, enterprises can tailor content to consumers around the world. Speed and Efficiency in Translation When it comes to the “frequently and fast” part of the equation, enterprises run into problems. Professional service providers deliver translated content in files, which company workers then have to manually insert into their CMS. When companies update or edit source documents, they have to hunt down all the translated content and change each document individually. Lingotek and Oracle have solved the problem by making the Lingotek Collaborative Translation Platform fully integrated and interoperable with Oracle WebCenter Sites Web Experience Management. Lingotek combines best-in-class machine translation solutions, real-time community/crowd translation and professional translation to enable companies to publish globalized content in an efficient and cost-effective manner. WebCenter Sites Web Experience Management simplifies the creation and management of different types of content across multiple channels, including social media. Globalization Without Interrupting the Workflow The combination of the Lingotek platform with WebCenter Sites ensures that the process of authoring, publishing, targeting, optimizing and personalizing global Web content is automated, saving companies the time and effort of manually entering content. Users can seamlessly integrate translation into their WebCenter Sites workflows, optimizing their translation and localization across web, social and mobile channels in multiple languages. The original structure and formatting of all translated content is maintained, saving workers the time and effort involved with inserting the translated text and reformatting. In addition, Lingotek’s continuous publication model addresses the dynamic nature of content, automatically updating the status of translated documents within the WebCenter Sites workflow whenever users edit or update source documents. This enables users to sync translations in real time. The translation, localization, updating and publishing of Web Experience Management content happens in a single, uninterrupted workflow. The net result of Lingotek Inside for Oracle WebCenter Sites Web Experience Management is a system that more than meets the need for frequent and fast global translation. Workflows are accelerated. The globalization of content becomes faster and more streamlined. Enterprises save time, cost and effort in translation project management, and can address the needs of each of their global markets in a timely and cost-effective manner. About Lingotek Lingotek is an Oracle Gold Partner and is going to be one of the first Oracle Validated Integrator (OVI) partners with WebCenter Sites. Lingotek is also an OVI partner with Oracle WebCenter Content.  
Watch a video about how Lingotek Inside for Oracle WebCenter Sites works! Oracle WebCenter will be hosting a webinar, “Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter," tomorrow, September 13th. To attend the webinar, please register now! For more information about Lingotek for Oracle WebCenter, please visit http://www.lingotek.com/oracle.

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. Afterall, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost.  Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain. Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart. Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart. Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times. Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. 
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
Guest post today provided by Oracle partner, USDM. Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper). Life science customers now have the ability to take advantage of all of the benefits of Oracle’s WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999 and in 2011, certified that WebCenter was a 21CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution. With the Part 11 certification, Oracle’s WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting. Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers. Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications. USDM provides: • Expertise in Life Science ECM Business Processes • Prebuilt Life Science Configuration in WebCenter • Validation Accelerator Packs for WebCenter USDM is very proud to support Oracle’s expanding commitment to Life Sciences. For more information please contact: [email protected] Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. 
Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronic goods, daily consumables, or luxury brand retailer. It is very likely you will be faced with the following questions: What are the essential business capabilities that you must have in place? What are the essential business activities underpinning each of the business capabilities, identified in Step 1? What are the set of steps that you need to perform to execute each of the business activities, identified in Step 2? Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short-run and in the long-run. In the short-term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long-term, as you grow in operations organically or through M&A, partnerships and franchiser business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations. "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here) Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so, a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following: Increased Time and Cost due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch, and incurring otherwise avoidable mistakes and errors by ignoring the experience of others Sub-optimal Process Efficiency due to a narrow, isolated view of processes, thereby ignoring process inter-dependencies, i.e. optimizing the parts but not the whole, and resulting in a lack of transparency and inter-departmental finger-pointing Embracing ARTS standards as a blueprint for establishing or managing or streamlining your retail operations can benefit you in the following ways: Improved Time-to-Market from parity with industry best-practice processes e.g. ARTS, thus avoiding “reinventing the wheel” for common retail processes and focusing more on customizing processes for differentiations, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external i.e. 
partner systems Lower Operating Costs by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of a narrow, silo-ed view, and  procuring IT systems in compliance with ARTS thus avoiding IT budget marginalization While parity with industry standards such as ARTS business process model by itself does not create a differentiation, it does however provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS. Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection this data can be sent back to the server. Multiple photographs can be taken of each item, but they will be held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back so that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters. Firstly, the web service architecture: we currently have two operations: GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty, and we would also have to guarantee a delivery order from the app. For example, we need to make sure that containing rooms are added before the item. Although this seems much more RESTful, our perception is that these extra calls are expensive connections (security checks, database calls, etc). Does the type of web API (operations style versus resource style) determine chunky vs chatty? Since this is mobile (3G), are we better off handling lots of smaller messages, or a few large ones? Secondly, the iOS side: what is the current advice on how to manage data synchronization within the iOS (5) app itself? We need multiple queues and we need to guarantee delivery order in each queue (and technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and, when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates. We don't want any of this in the app itself. Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent, but the original entity tree has been removed and replaced. Obviously updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice Xcode libraries out there to do such things. I welcome any advice you might have.
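
Since the backend here is WCF/.NET, one possible shape for the client-side queue — FIFO delivery, parents (rooms) before children (items), and server-echoed ids recorded for later updates — can be sketched in C#. All type and member names below are hypothetical illustrations, not an established API:

using System;
using System.Collections.Generic;

// Hypothetical sketch: a FIFO sync queue that preserves delivery order
// and reconciles server-assigned ids when the server echoes them back.
class SyncQueue
{
    private readonly Queue<CostRecord> _pending = new Queue<CostRecord>();
    private readonly Dictionary<Guid, int> _serverIds = new Dictionary<Guid, int>();

    public void Enqueue(CostRecord record) => _pending.Enqueue(record);

    // 'send' performs the round trip and returns the echoed server id.
    public void Flush(Func<CostRecord, int> send)
    {
        while (_pending.Count > 0)
        {
            var record = _pending.Peek();
            // A parent (room) must already have a server id before its items go up;
            // with FIFO enqueueing of rooms before items this never fires.
            if (record.ParentLocalId.HasValue &&
                !_serverIds.ContainsKey(record.ParentLocalId.Value))
                throw new InvalidOperationException("parent not yet synced");
            _serverIds[record.LocalId] = send(record);
            _pending.Dequeue(); // only removed after a successful round trip
        }
    }
}

class CostRecord
{
    public Guid LocalId = Guid.NewGuid(); // client-side key until the server assigns one
    public Guid? ParentLocalId;
    public decimal Amount;
}

Keeping the photo queue separate (as described above) and only marking records synced after the echo arrives means a dropped connection leaves the local entity tree intact instead of deleted-and-half-replaced.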

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the result are: • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes. • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations. • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results. • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process. • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis–related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility. • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value. The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize that these problems persist and 86 percent of companies are likely to make a significant investment during the next five years to address these issues. 
While they should invest, it is critical that they direct investments correctly to address the key issues this research identified: • Improving data integrity • Optimizing processes • Integrating the extended financial close process By addressing these issues and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations. To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92 To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • A* algorithm very slow

    - by Amaranth
I am programming an RTS game (I use XNA with C#). The pathfinding is working fine, except that when it has a lot of nodes to search, there is a lag period of one or two seconds. It happens mainly when there is no path to the target destination, since in that situation there are more nodes to explore. I have the same problem when the path is shorter but I have selected more than 3 units (they can't take the same path, since the selected units can be in different parts of the map).

private List<NodeInfo> FindPath(Unit u, NodeInfo start, NodeInfo end)
{
    Map map = GameInfo.GetInstance().GameMap;
    _nearestToTarget = start;
    start.MoveCost = 0;
    // getTileByPos simply gets the tile in a 2D array with the X and Y indexes
    Vector2 endPosition = map.getTileByPos(end.X, end.Y).Position;
    start.EstimatedRemainingCost = (int)(endPosition - map.getTileByPos(start.X, start.Y).Position).Length();
    start.Parent = null;

    List<NodeInfo> openedNodes = new List<NodeInfo>();
    List<NodeInfo> closedNodes = new List<NodeInfo>();
    Point[] movements = GetMovements(u.UnitType);
    openedNodes.Add(start);

    while (!closedNodes.Contains(end) && openedNodes.Count > 0)
    {
        // Loop through the nodes to find the lowest cost
        NodeInfo currentNode = FindLowestCostOpenedNode(openedNodes);
        openedNodes.Remove(currentNode);
        closedNodes.Add(currentNode);

        Vector2 previousMouvement;
        if (currentNode.Parent == null)
        {
            previousMouvement = ConvertRotationToDirectionVector(u.Rotation);
        }
        else
        {
            previousMouvement = map.getTileByPos(currentNode.X, currentNode.Y).Position
                - map.getTileByPos(currentNode.Parent.X, currentNode.Parent.Y).Position;
            previousMouvement.Normalize();
        }

        // For each neighbor
        foreach (Point movement in movements)
        {
            Point exploredGridPos = new Point(currentNode.X + movement.X, currentNode.Y + movement.Y);
            // Check if it is a valid move and not already in the closed nodes list
            if (ValidNavigableNode(u.UnitType, new Point(currentNode.X, currentNode.Y), exploredGridPos)
                && !closedNodes.Contains(_gridMap[exploredGridPos.Y, exploredGridPos.X]))
            {
                NodeInfo exploredNode = _gridMap[exploredGridPos.Y, exploredGridPos.X];
                Tile.TileType exploredTerrain = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).TerrainType;

                if (openedNodes.Contains(exploredNode))
                {
                    int newCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                    if (newCost < exploredNode.MoveCost)
                    {
                        exploredNode.Parent = currentNode;
                        exploredNode.MoveCost = newCost;
                        // Track the nearest tile to the target (in case no path to the target is found);
                        // only compares this node to the current nearest
                        FindNearest(exploredNode);
                    }
                }
                else
                {
                    exploredNode.Parent = currentNode;
                    exploredNode.MoveCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                    Vector2 exploredNodeWorldPos = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).Position;
                    exploredNode.EstimatedRemainingCost = (int)(endPosition - exploredNodeWorldPos).Length();
                    // Track the nearest tile to the target (in case no path to the target is found);
                    // only compares this node to the current nearest
                    FindNearest(exploredNode);
                    openedNodes.Add(exploredNode);
                }
            }
        }
    }
    return closedNodes;
}

After that, I simply check if the end node is contained in the returned nodes. If so, I add the end node and each parent until I reach the start. If not, I add the nearestToTarget and each parent until I reach the start. I added a condition before calling FindPath so that only one unit can call a find path each frame (60 frames per second), but it makes no difference. 
    I thought maybe I could solve this by letting the pathfinding run in the background while the game continues to run normally, even if the search takes a few frames (it is currently sequential, since it is called in the unit's update() whenever there is a target location but no path), but I don't really know how. I also thought about keeping my opened nodes list sorted by cost so I don't have to loop over it, but I don't know whether that would help performance. Would there be other solutions? P.S. In the code, when I get the move cost, I check whether the unit has to turn to perform the move, and the terrain type; nothing hard to do.
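    Keeping the open list in a binary min-heap keyed on MoveCost + EstimatedRemainingCost is the usual fix for this: extracting the cheapest node becomes O(log n) instead of a full scan. Below is a minimal sketch of such a heap, written against a generic cost delegate so it could wrap the question's NodeInfo; the MinHeap class and its names are illustrative, not part of XNA or the asker's code.

    using System;
    using System.Collections.Generic;

    // Minimal binary min-heap for an A* open list (a sketch, not library code).
    public class MinHeap<T>
    {
        private readonly List<T> _items = new List<T>();
        private readonly Func<T, int> _cost; // e.g. n => n.MoveCost + n.EstimatedRemainingCost

        public MinHeap(Func<T, int> cost) { _cost = cost; }

        public int Count { get { return _items.Count; } }

        public void Push(T item)
        {
            _items.Add(item);
            int i = _items.Count - 1;
            while (i > 0) // sift the new item up until its parent is cheaper
            {
                int parent = (i - 1) / 2;
                if (_cost(_items[parent]) <= _cost(_items[i])) break;
                T tmp = _items[parent]; _items[parent] = _items[i]; _items[i] = tmp;
                i = parent;
            }
        }

        public T Pop()
        {
            T top = _items[0];
            int last = _items.Count - 1;
            _items[0] = _items[last];
            _items.RemoveAt(last);
            int i = 0;
            while (true) // sift the moved item down toward the cheaper child
            {
                int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
                if (left < _items.Count && _cost(_items[left]) < _cost(_items[smallest])) smallest = left;
                if (right < _items.Count && _cost(_items[right]) < _cost(_items[smallest])) smallest = right;
                if (smallest == i) break;
                T tmp = _items[smallest]; _items[smallest] = _items[i]; _items[i] = tmp;
                i = smallest;
            }
            return top;
        }
    }

    With this, FindLowestCostOpenedNode plus openedNodes.Remove collapses into a single Pop, and a HashSet<NodeInfo> kept alongside the heap replaces the O(n) Contains checks on both lists. When a node's cost improves, the simplest approach is to push it again and skip stale entries as they are popped.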

    Read the article

  • Help with dynamic range compression function (audio)

    - by MusiGenesis
    I am writing a C# function for doing dynamic range compression (an audio effect that basically squashes transient peaks and amplifies everything else to produce an overall louder sound). I have written a function that does this (I think):

    public static void Compress(ref short[] input, double thresholdDb, double ratio)
    {
        double maxDb = thresholdDb - (thresholdDb / ratio);
        double maxGain = Math.Pow(10, -maxDb / 20.0);
        for (int i = 0; i < input.Length; i += 2)
        {
            // convert sample values to ABS gain and store original signs
            int signL = input[i] < 0 ? -1 : 1;
            double valL = (double)input[i] / 32768.0;
            if (valL < 0.0) { valL = -valL; }
            int signR = input[i + 1] < 0 ? -1 : 1;
            double valR = (double)input[i + 1] / 32768.0;
            if (valR < 0.0) { valR = -valR; }

            // calculate mono value and compress
            // (note: val == 0, i.e. digital silence, would make the Log10 and the
            // divisions below blow up)
            double val = (valL + valR) * 0.5;
            double posDb = -Math.Log10(val) * 20.0;
            if (posDb < thresholdDb)
            {
                posDb = thresholdDb - ((thresholdDb - posDb) / ratio);
            }

            // measure L and R sample values relative to mono value
            double multL = valL / val;
            double multR = valR / val;

            // convert compressed db value to gain and amplify
            val = Math.Pow(10, -posDb / 20.0);
            val = val / maxGain;

            // re-calculate L and R gain values relative to compressed/amplified mono value
            valL = val * multL;
            valR = val * multR;

            double lim = 1.5; // determined by experimentation, with the goal being
                              // that the lines below should never (or rarely) be hit
            if (valL > lim) { valL = lim; }
            if (valR > lim) { valR = lim; }

            double maxval = 32000.0 / lim;

            // convert gain values back to sample values
            input[i] = (short)(valL * maxval);
            input[i] *= (short)signL;
            input[i + 1] = (short)(valR * maxval);
            input[i + 1] *= (short)signR;
        }
    }

    I am calling it with threshold values between 10.0 db and 30.0 db and ratios between 1.5 and 4.0. This function definitely produces a louder overall sound, but with an unacceptable level of distortion, even at low threshold values and low ratios. Can anybody see anything wrong with this function? Am I handling the stereo aspect correctly (the function assumes stereo input)? As I (dimly) understand things, I don't want to compress the two channels separately, so my code attempts to compress a "virtual" mono sample value and then apply the same degree of compression to the L and R sample values separately. I'm not sure I'm doing it right, however. I think part of the problem may be the "hard knee" of my function, which kicks in the compression abruptly when the threshold is crossed. I suspect I need a "soft knee" that eases into the compression gradually around the threshold instead. Can anybody suggest a modification to my function to produce the soft knee curve?
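    For reference, a common soft-knee static curve looks like the sketch below. It is written under assumptions, not as the poster's code: kneeDb is an assumed knee-width parameter, and the helper uses the usual dBFS convention, in which the question's posDb equals -levelDb. It blends the unity and full-ratio line segments quadratically across a window of kneeDb decibels centred on the threshold.

    // Soft-knee static curve. levelDb is the input level in dBFS (0 = full
    // scale, negative = quieter); thresholdDb is negative too (e.g. -20);
    // ratio > 1; kneeDb is the knee width in dB (kneeDb = 0 gives a hard knee).
    static double SoftKneeLevelDb(double levelDb, double thresholdDb, double ratio, double kneeDb)
    {
        double over = levelDb - thresholdDb;             // how far above the threshold we are
        if (2.0 * over < -kneeDb)
            return levelDb;                              // below the knee: unchanged
        if (kneeDb <= 0.0 || 2.0 * over > kneeDb)
            return thresholdDb + over / ratio;           // above the knee: full ratio
        double t = over + kneeDb / 2.0;                  // 0..kneeDb across the knee
        return levelDb + (1.0 / ratio - 1.0) * t * t / (2.0 * kneeDb);
    }

    The gain to apply to both channels is then Math.Pow(10, (SoftKneeLevelDb(levelDb, thresholdDb, ratio, kneeDb) - levelDb) / 20.0). The attack/release smoothing that real compressors run on top of this static curve is a separate step, and is often where remaining distortion comes from.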

    Read the article

  • Test whether a pixel is inside the blobs for ofxOpenCV

    - by mia
    I am making an application based on the concept of dodgeball, and I need to test whether the pixel of the ball is inside the captured blobs (which are the image of the player). I am stuck and have run out of ideas on how to implement it. I have managed to make a little progress, which gives me the blobs, but I am not sure how to test against them. Please help. I am a newbie in a desperate condition. Thank you. This is some of my code:

    void testApp::setup(){
        #ifdef _USE_LIVE_VIDEO
            vidGrabber.setVerbose(true);
            vidGrabber.initGrabber(widthS, heightS); // note: widthS/heightS are only assigned below
        #else
            vidPlayer.loadMovie("fingers.mov");
            vidPlayer.play();
        #endif
        widthS = 320;
        heightS = 240;
        colorImg.allocate(widthS, heightS);
        grayImage.allocate(widthS, heightS);
        grayBg.allocate(widthS, heightS);
        grayDiff.allocate(widthS, heightS); ////<---what I want
        bLearnBakground = true;
        threshold = 80;
        //////////circle//////////////
        counter = 0;
        radius = 0;
        circlePosX = 100;
        circlePosY = 200;
    }

    void testApp::update(){
        ofBackground(100,100,100);
        bool bNewFrame = false;
        #ifdef _USE_LIVE_VIDEO
            vidGrabber.grabFrame();
            bNewFrame = vidGrabber.isFrameNew();
        #else
            vidPlayer.idleMovie();
            bNewFrame = vidPlayer.isFrameNew();
        #endif
        if (bNewFrame){
            if (bLearnBakground == true){
                grayBg = grayImage; // the = sign copies the pixels from grayImage into grayBg (operator overloading)
                bLearnBakground = false;
            }
            #ifdef _USE_LIVE_VIDEO
                colorImg.setFromPixels(vidGrabber.getPixels(), widthS, heightS);
            #else
                colorImg.setFromPixels(vidPlayer.getPixels(), widthS, heightS);
            #endif
            grayImage = colorImg;
            grayDiff.absDiff(grayBg, grayImage);
            grayDiff.threshold(threshold);
            contourFinder.findContours(grayDiff, 20, (340*240)/3, 10, true); // find holes
        }
        ////////////circle////////////////////
        counter = counter + 0.05f;
        if (radius >= 50){
            circlePosX = ofRandom(10, 300);
            circlePosY = ofRandom(10, 230);
        }
        radius = 5 + 3*(counter);
    }

    void testApp::draw(){
        // draw the incoming, the grayscale, the bg and the thresholded difference
        ofSetColor(0xffffff); // white colour
        grayDiff.draw(10, 10); // draw starting from point (10,10)
        // we could draw the whole contour finder
        // or, instead we can draw each blob individually,
        // this is how to get access to them:
        for (int i = 0; i < contourFinder.nBlobs; i++){
            contourFinder.blobs[i].draw(10, 10);
        }
        ///////////////circle//////////////////////////
        //let's draw a circle:
        ofSetColor(0, 0, 255);
        char buffer[255];
        float a = radius;
        sprintf(buffer, "radius = %i", a); // note: %i with a float argument is undefined; %f is likely intended
        ofDrawBitmapString(buffer, 120, 300);
        if (radius >= 50) {
            ofSetColor(255, 255, 255);
            counter = 0;
        }
        else {
            ofSetColor(255, 0, 0);
        }
        ofFill();
        ofCircle(circlePosX, circlePosY, radius);
    }
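    For the actual hit test, a standard approach is a ray-casting point-in-polygon check of the ball's position against each blob's contour points (ofxOpenCV blobs carry their contour as a list of points). The sketch below is a generic version, shown in C# with names of my own choosing; the logic ports to the openFrameworks app line for line.

    // Ray-casting point-in-polygon test: counts how many polygon edges a
    // horizontal ray from (px, py) crosses; an odd count means "inside".
    // xs/ys hold the contour's vertex coordinates in order.
    static bool PointInPolygon(float px, float py, float[] xs, float[] ys)
    {
        bool inside = false;
        int n = xs.Length;
        for (int i = 0, j = n - 1; i < n; j = i++)
        {
            bool crosses = (ys[i] > py) != (ys[j] > py);
            if (crosses &&
                px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i])
            {
                inside = !inside;
            }
        }
        return inside;
    }

    Testing the ball's centre (or a few points around its rim) against each blob gives the dodge/hit decision; checking the blob's bounding rectangle first is a cheap early-out.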

    Read the article

  • How to determine edges in an image optimally?

    - by SorinA.
    I was recently faced with the problem of cropping and resizing images. I needed to crop the 'main content' of an image; for example, given an image similar to this one, the result should be the msn content without the white margins (left & right). I search on the X axis for the first and last color change, and do the same on the Y axis. The problem is that traversing the image line by line takes a while: for an image that is 2000x1600px it takes up to 2 seconds to return the CropRect = x1,y1,x2,y2 data. I tried to make a separate traversal for each coordinate and stop on the first value found, but it didn't work in all test cases; sometimes the returned data wasn't the expected one, and the duration of the operations was similar. Any idea how to cut down the traversal time and the discovery of the rectangle around the 'main content'?

    public static CropRect EdgeDetection(Bitmap Image, float Threshold)
    {
        CropRect cropRectangle = new CropRect();
        int largestX = 0;
        int largestY = 0;
        int lowestX = Image.Width;
        int lowestY = Image.Height;

        //find the lowest X bound
        for (int y = 0; y < Image.Height - 1; ++y)
        {
            for (int x = 0; x < Image.Width - 1; ++x)
            {
                Color currentColor = Image.GetPixel(x, y);
                Color tempXcolor = Image.GetPixel(x + 1, y);
                Color tempYColor = Image.GetPixel(x, y + 1);
                if (Math.Sqrt(((currentColor.R - tempXcolor.R) * (currentColor.R - tempXcolor.R)) +
                              ((currentColor.G - tempXcolor.G) * (currentColor.G - tempXcolor.G)) +
                              ((currentColor.B - tempXcolor.B) * (currentColor.B - tempXcolor.B))) > Threshold)
                {
                    if (lowestX > x) lowestX = x;
                    if (largestX < x) largestX = x;
                }
                if (Math.Sqrt(((currentColor.R - tempYColor.R) * (currentColor.R - tempYColor.R)) +
                              ((currentColor.G - tempYColor.G) * (currentColor.G - tempYColor.G)) +
                              ((currentColor.B - tempYColor.B) * (currentColor.B - tempYColor.B))) > Threshold)
                {
                    if (lowestY > y) lowestY = y;
                    if (largestY < y) largestY = y;
                }
            }
        }

        if (lowestX < Image.Width / 4)
            cropRectangle.X = lowestX - 3 > 0 ? lowestX - 3 : 0;
        else
            cropRectangle.X = 0;

        if (lowestY < Image.Height / 4)
            cropRectangle.Y = lowestY - 3 > 0 ? lowestY - 3 : 0;
        else
            cropRectangle.Y = 0;

        cropRectangle.Width = largestX - lowestX + 8 > Image.Width ? Image.Width : largestX - lowestX + 8;
        cropRectangle.Height = largestY + 8 > Image.Height ? Image.Height - lowestY : largestY - lowestY + 8;
        return cropRectangle;
    }
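    Most of those two seconds go to Bitmap.GetPixel, which pays a large per-call cost; locking the bitmap once and scanning raw bytes usually brings this kind of pass down to tens of milliseconds. Here is a minimal sketch of the LockBits pattern, assuming the image is read as 24bpp RGB; the question's neighbour-difference test then becomes plain array arithmetic.

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    static byte[] ReadPixels(Bitmap bmp, out int stride)
    {
        // Lock the whole bitmap once and copy its bytes out, instead of
        // paying the per-call cost of GetPixel for every pixel.
        Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        stride = data.Stride;
        byte[] pixels = new byte[stride * bmp.Height];
        Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
        bmp.UnlockBits(data);
        return pixels;
    }

    // Pixel (x, y) lives at offset y * stride + x * 3 as B, G, R bytes, so the
    // difference test against the pixel to the right becomes, e.g.:
    //   int o = y * stride + x * 3, oRight = o + 3;
    //   int db = pixels[o]     - pixels[oRight];       // B channel
    //   int dg = pixels[o + 1] - pixels[oRight + 1];   // G channel
    //   int dr = pixels[o + 2] - pixels[oRight + 2];   // R channel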

    Read the article

  • output image is not displayed

    - by gerry chocolatos
    So, I'm doing image detection, where I have to process the red, green, and blue elements separately to get an edge map for each, and then combine them into one image to show as the output. But the output image doesn't show. Would anyone please be kind enough to help me? Here is my code so far.

    //get the red element
    process_red = new int[width * height];
    counter = 0;
    for (int i = 0; i < 256; i++) {
        for (int j = 0; j < 256; j++) {
            int clr = buff_red.getRGB(j, i);
            int red = (clr & 0x00ff0000) >> 16;
            red = (0xFF<<24)|(red<<16)|(red<<8)|red;
            process_red[counter] = red;
            counter++;
        }
    }

    //set threshold value for red element
    int threshold = 100;
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            int bin = (buff_red.getRGB(x, y) & 0x000000ff);
            if (bin < threshold) bin = 0;
            else bin = 255;
            buff_red.setRGB(x, y, 0xff000000 | bin << 16 | bin << 8 | bin);
        }
    }

    I do the same for the green and blue elements. Then I wanted to combine the three by doing it this way:

    //combine the three elements
    process_combine = new int[width * height];
    counter = 0;
    for (int i = 0; i < 256; i++) {
        for (int j = 0; j < 256; j++) {
            int clr_a = buff_red.getRGB(j, i);
            int ar = clr_a & 0x000000ff;
            int clr_b = buff_green.getRGB(j, i);
            int bg = clr_b & 0x000000ff;
            int clr_c = buff_blue.getRGB(j, i);
            int cb = clr_b & 0x000000ff; // note: clr_c is read but never used; clr_b here is likely a typo
            int alpha = 0xff000000;
            int combine = alpha|(ar<<16)|(bg<<8)|cb;
            process_combine[counter] = combine;
            counter++;
        }
    }
    buff_rgb = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    Graphics rgb = buff_rgb.getGraphics();
    rgb.drawImage(output_rgb, 0, 0, null); // note: output_rgb is not defined in the snippet,
                                           // and process_combine is never written into buff_rgb
    rgb.dispose();
    repaint();

    To show the output, which is from the combining process, I use a draw method: g.drawImage(buff_rgb, 800, 100, this); but still it doesn't show the image. Can anyone please help me? Your help is really appreciated. Thanks.
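    The combining step itself is just bit-packing the three thresholded channel maps back into one ARGB integer per pixel. A language-neutral sketch of that pattern, shown here in C# with illustrative buffer names rather than the poster's:

    // Packs three per-channel edge maps (one 0-255 value per pixel) into ARGB ints.
    static int[] CombineChannels(byte[] redMap, byte[] greenMap, byte[] blueMap)
    {
        int[] combined = new int[redMap.Length];
        for (int i = 0; i < combined.Length; i++)
        {
            combined[i] = unchecked((int)0xFF000000)   // opaque alpha
                          | (redMap[i] << 16)
                          | (greenMap[i] << 8)
                          | blueMap[i];
        }
        return combined;
    }

    The step the question's code is missing is writing the combined array into the image that actually gets drawn; in Java that is what BufferedImage.setRGB(0, 0, width, height, process_combine, 0, width) is for.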

    Read the article

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this, and so far the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules, or even many best practices, when it comes to these settings, so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go as high as the user wants, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like the number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has 10 input nodes, 1 hidden layer with 5 nodes, and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver, so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations. Currently, when I try to train it with data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000-iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit, I would greatly appreciate it. Thank you!
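    One property that usually matters more than topology here is input scaling: raw features ranging from 1 to 100,000 drive sigmoid/tanh neurons into saturation, which is a classic cause of training that stalls against its iteration limit. Below is a minimal min-max normalization sketch; it is generic code, not tied to any particular NN library, and the names are illustrative.

    // Min-max normalizes each input column to [0, 1] so features with very
    // different magnitudes (e.g. head-count vs. project cost) contribute on
    // a comparable scale. data[row][col]: one row per training sample.
    static double[][] Normalize(double[][] data, out double[] min, out double[] max)
    {
        int cols = data[0].Length;
        min = new double[cols];
        max = new double[cols];
        for (int c = 0; c < cols; c++)
        {
            min[c] = double.MaxValue;
            max[c] = double.MinValue;
            foreach (double[] row in data)
            {
                if (row[c] < min[c]) min[c] = row[c];
                if (row[c] > max[c]) max[c] = row[c];
            }
        }
        double[][] result = new double[data.Length][];
        for (int r = 0; r < data.Length; r++)
        {
            result[r] = new double[cols];
            for (int c = 0; c < cols; c++)
            {
                double range = max[c] - min[c];
                // Guard against a constant column to avoid dividing by zero.
                result[r][c] = range == 0 ? 0 : (data[r][c] - min[c]) / range;
            }
        }
        return result;
    }

    The same min/max values must be kept and applied to unseen inputs at prediction time, and the two outputs (time and cost) usually need the inverse mapping applied to the network's raw output.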

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >