Search Results

Search found 1727 results on 70 pages for 'powerful'.


  • Windows 7 64-bit Google Chrome Makes Desktop Unresponsive

    - by Meengla
    For the past 1-2 weeks my desktop has sometimes become unresponsive. The problem seems to happen as I load a page in Google Chrome; it could be any page, though today it happened when I tried to load a page with Flash content in it. The problem is not consistent and there is no pattern. I can do a virus scan using some software (no antivirus installed--never needed one), but I strongly suspect it is some software I installed or some Windows Update. This is a very powerful Dell Optiplex 990 with 16GB RAM, so memory shouldn't be an issue. When the problem happens the cursor turns into the spinning wait cursor even over the taskbar, and Ctrl+Alt+Del takes a long time to respond. Eventually I get a message that 'this program is not responding' with End/Close and Cancel buttons, but repeated clicks on End or Cancel do nothing. Then the Ctrl+Alt+Del menu comes up. The rest of the applications keep running fine, though I can't get to them because the cursor stays as the wait cursor. What is happening? How can I find out exactly what caused the problem? Thanks.

    Read the article

  • Linux CUPS on a Raspberry Pi: offloading processing to a server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing the Pi chokes when acting as a complete CUPS-based print server. It seems a little underpowered for some of the ghostscript processing and other filtering that occurs -- particularly on larger or complex documents the processing time can be 5 or more minutes. My question is: can the processing largely be done elsewhere, with the prepared end product of the processing chain fed to the Pi for output on the connected printer? So in this scenario any arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and on how it applies to what I describe. I have tested this with pdftops creating a PS file and then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.
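
    A minimal sketch of that split, assuming the fast machine can reach the Pi over SSH, the Pi's CUPS queue is called pi_printer, and the printer accepts the rendered PostScript as-is (the hostnames and queue name here are made up, not from the question):

        # On the powerful machine: render the document to printer-ready PostScript
        pdftops document.pdf /tmp/document.ps

        # Ship the rendered file to the Pi
        scp /tmp/document.ps pi@raspberrypi.local:/tmp/document.ps

        # On the Pi: hand the file to CUPS in raw mode, bypassing its filter chain
        ssh pi@raspberrypi.local 'lp -d pi_printer -o raw /tmp/document.ps'

    Raw mode only works if what is fed in is already in the printer's native language, so for a non-PostScript printer the heavy machine would also need to run the printer-specific driver/filter step before shipping the file.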

    Read the article

  • How do I change file protections under XP on a disk that came from Windows Server?

    - by cdkMoose
    I had a Windows Server 2003 machine running at home, along with my desktop, which I use for development. The server went belly up, but since my desktop is reasonably powerful, I figured I would move the disk from the file server (the disk itself was OK) into my XP machine to keep all of the files. The disk comes up fine and shows all of the files. I have been getting access denied errors when trying to work with some of the files. When I display attributes in Explorer, none of them are marked Read-Only. When I view properties on the directories, the Read-Only checkbox is not checked, but has a green background (which I thought meant mixed usage for files in the directory). When I click on the checkbox to clear it and click Apply, the disk does some work and all looks well. However, I continue to get the Access Denied errors, the files still don't show any Read-Only attribute, and the directory properties show the green background again on the Read-Only checkbox. I did check the box which says to apply the change to the folder and all files/subfolders under it. I am assuming that the issue relates to user IDs/permissions carried over from the Server install. So, why does it let me think I can change the attribute when I can't, and how can I correct this problem so that the disk correctly recognizes the IDs from XP?

    Read the article

  • PC dies when running at 100% CPU

    - by user155631
    I recently wrote some Java code to generate images of the Mandelbrot set (fractal). I made use of the new Fork/Join facility in Java 7 to run separate threads on all four cores (2 real, 2 virtual) simultaneously, using a large number of iterations for greater accuracy. The problem is, the process runs fine for about a minute, and then it's as if someone has pulled the plug and the PC just dies. I thought it must be the CPU overheating, so I ran Real Temp to monitor the temperature. It's an Intel i3 processor. I can see the temperature creeping up to 70 degrees, and then it seems to level off there and run for about another 30 seconds before dying. According to Real Temp, there's still a gap of 35 degrees between the actual temperature and TJ Max. I also tried disabling "CPU TM function" in the BIOS, but the problem still occurs. A colleague suggested that it might be a power supply problem, so I borrowed a more powerful PSU (I can't remember what wattage it was, but it's higher than mine, which is 500W). The exact same thing still happens though. Is anyone able to suggest what the problem might be, or what I can try next?

    Read the article

  • TortoiseSVN client slows Explorer to a crawl in Windows XP running in Parallels

    - by Cory Larson
    I thought I'd make my first SuperUser question relatively simple, though it's the kind of question that may not get many responses, as I'm not directly involved with the issue. A colleague does his development in Windows XP running in Parallels on his Mac. We've just migrated our VSS repository to SVN, and we've gone with TortoiseSVN as our client of choice with the AnkhSVN plugin for Visual Studio. On his XP instance, after installing TortoiseSVN, browsing through folders using Explorer is extremely slow; it takes about 15-30 seconds before the contents of the next folder display. It's slowest when opening My Computer. Once he reaches a folder that contains the working content of an SVN project, Explorer behaves quickly again as expected. It seems that TortoiseSVN may be spending a lot of time searching subfolders so it can do its icon-overlay thing, but that's just a guess. I've used TortoiseSVN for years on both XP and Vista on far less powerful machines without any issues with Explorer, so I'm attributing the slowness to it being run in a VM, though that may not be the actual issue. So has anyone encountered similar performance issues, and/or know of a fix? Keep in mind that any requests to make changes to his configuration will need to be communicated, and thus my response time might be slow. Thanks everyone!

    Read the article

  • What might be the reason for this slowness (see details in message body)?

    - by Ivan
    I've got a really weird situation that I'm struggling to solve: a performance problem that looks very much like the code is sitting in an empty wait loop (though it probably isn't). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the services (except the OpenVPN server used to provide secure access to clients) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server, and one for the accounting/CRM server (Firebird plus a proprietary app server working with Delphi-made clients). CPU load (as "top" reports) is almost always near zero. Host system RAM is close to 100% used but not overloaded (very little swap gets used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused. But the accounting/CRM client (a Win32 Delphi application run on WinXP machines) works painfully slowly against this server (and works much better against an in-LAN Windows server). I just can't imagine what could make it slow when there are so many CPU, RAM, HDD and bandwidth resources available on the clients and on the server, even at their busiest moments. When I say bandwidth is underused, I don't just mean that the clients and the server are connected to the Internet through bigger pipes than are actually used (which still leaves a chance of a bottleneck somewhere on the route between them); I've also tested the bandwidth between the clients and the server by copying files among them.
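
    For what it's worth, round-trip latency and raw throughput can be checked separately with something like the following, since a chatty client/server protocol can crawl over a high-latency link even when bandwidth looks fine (assuming iperf is available on both ends; the hostname is a placeholder):

        # Round-trip latency from a client machine to the server
        ping -c 20 server.example.com

        # Raw TCP throughput: start "iperf -s" on the server, then from a client:
        iperf -c server.example.com -t 30

    If latency is high while throughput is fine, the per-request round trips of the Firebird/Delphi traffic would be the thing to look at rather than any server resource.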

    Read the article

  • Free Hosting control panel

    - by John Maxim
    Hello all, I'm in the middle of researching the best hosting control panels. The server I run is Ubuntu and I have some experience with ISPConfig 2 & 3. Since I haven't explored any of the others available, what are the recommended ones for an Ubuntu server? I ask because ISPConfig seems to require disabling and modifying quite a few things on an Ubuntu server, which changes the way the server actually runs. It's quite good, though. Are there any other recommended ones? Something more organic, which doesn't require so much breaking and changing? I'm not asking for the simplest one; I don't mind going the extra mile to install a powerful one, but something that sticks with most of Ubuntu's conventions would be ideal for me. And of course, if something meets the "Ubuntu conventions" requirement and is also simple to install, that'd be a bonus. Thanks in advance.

    Read the article

  • Last step in HDD Recovery (fixing windows)

    - by Atom Computing
    My dad's hard drive became corrupted as a result of many bad sectors. Anyway, I made a clone of the drive and have now repaired it completely (recreating the MBR and MFT) and run a series of ChkDsks on it. I can now see all the files and folders on it and it is all intact. I currently have it as a slave in my computer (where I was doing all the repairs). When I put it back into its own computer, it comes up with "A disk read error occurred: Press Ctrl + Alt + Del to Restart". I don't know why this is happening, but I think it might have something to do with file permissions. I have tried a start-up repair from the Vista boot CD and it found no problems. When trying to apply file permissions (and creating permissions for the SYSTEM group, as it didn't have any for the SYSTEM group), it couldn't apply them for some of the files in the System32 folder. I have tried applying them as admin and with the most powerful privileges I can get, all to no avail. When it is in my PC I can boot from it (I added it into my bootloader) and it boots up fine, except that when it logs in it comes up with the error "Rundll32.exe - Windows cannot access the specified device, path or file. You may not have the appropriate permissions to access the item". This message keeps coming back and nothing loads at all. Any help would be greatly appreciated, as I have got so far with the data recovery and want to avoid a reformat at all costs due to the vast number of programs installed, and I don't have much time on my hands! Thanks

    Read the article

  • Rails application keeps timing out when attempting to connect to PostgreSQL DB

    - by Corillian
    I'm hosting a PostgreSQL database on a small Windows Azure Ubuntu 13.04 VM with a default postgresql.conf. I have a Rails application running on a medium Windows Azure Ubuntu 13.04 VM. When accessing the PostgreSQL database, the Rails application is constantly timing out. In its database.yml I have the connection pool size set to 120 and the timeout set to 15 seconds. Despite this, my Rails logs are full of the following error message: ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds (waited 5.0023203 seconds). The max pool size is currently 120; consider increasing it. My postgresql.conf has a max connection limit of 120; making it any larger prevents the server from restarting successfully. I've also made sure that ssl is off in postgresql.conf per this article, but beyond that I have no idea what's going on. My PostgreSQL logs don't contain any info indicating something is going wrong. My website is getting ~1k hits per day, so perhaps a small VM instance just isn't powerful enough? I appreciate any assistance! [Edit 1] The PostgreSQL database is in a separate cloud service within the same affinity group. For example: db small VM: mydatabase.cloudapp.net (Affinity Group US East); forums medium VM: myforums.cloudapp.net (Affinity Group US East). On the database server I have opened port 5432. The connection to the database server from the forums server uses its hostname. Is it possible that the DNS resolution is what's taking so long?
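
    The DNS hunch is easy to check in isolation from the forums VM; a rough sketch, assuming dig and the psql client are installed there (the role and database names below are placeholders, the hostname is the one from the question):

        # Time just the name lookup
        dig mydatabase.cloudapp.net | grep "Query time"

        # Time a full connect plus a trivial query
        time psql -h mydatabase.cloudapp.net -p 5432 -U myuser -d mydb -c "SELECT 1;"

    If the lookup is fast but the connect is slow, the delay is in establishing the PostgreSQL connection itself rather than in DNS.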

    Read the article

  • Is it possible to get ESC to behave as an actual escape key?

    - by leftaroundabout
    So I have finally switched, not so much because I'm yet convinced Emacs in itself is the better editor, but because it certainly does have more powerful extensions. I am still using vim-mode though; perhaps that's part of my problem... but I really don't intend to abandon the modal approach, so I'll probably stay with it. I'm getting along quite well, but one thing I find really unnerving is the behaviour of the Esc key (which I have in the shift-lock position). I'm used to relying on it a lot as more or less a "panic key", which may not be nice, but I find it lets me generally care quite a bit less about the keystrokes themselves, and thus work faster. What I'd like this key to do is just get me out of any minibuffer or special editing mode into a well-defined normal state. Perhaps most importantly, I would like it to not do anything unrelated, such as: simulating Meta (what do I have an Alt key for?); closing windows I'm not even in at the time; getting interpreted as the final key in some key sequence; and so on. Is it possible to turn all that off and make Esc an actual escape key? Vim-mode does make it behave kind of how I'd like in some situations, but when other plugins are involved this often breaks. Alternatively, are there different options that might suit my kind of workflow?

    Read the article

  • Siege - running a stress test benchmark

    - by morgoth84
    I need to run a benchmark test against an HTTPS server using Siege, to see how it behaves under massive load. I'm initiating the tests from another machine which is quite powerful and is connected to the same physical switch as the server. But when I initiate a test, I can't get it to make more than 170 requests per second. With this load the server's CPU usage is at 15-20% and the average response time for a request is approx. 0.03 seconds. The load on the client machine is approx. 10%. So I gradually increase the number of users in Siege (the number of worker threads) and the request rate increases linearly up to 170 reqs/sec, but it never gets above that. No matter how many more worker threads I start, the load on the server never goes above 20% (and the client's load also doesn't increase any more). How can I overcome this? I've googled a bit and found out that after a request is completed, the socket associated with an ephemeral port remains in the TIME_WAIT state for some time, during which it can't be reused. I tried to overcome this by doing these things:

        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle

    Oh, and the client machine is Linux (Red Hat, I think, but I'm not sure). Any help would be appreciated.
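
    For reference, a typical time-boxed run looks roughly like the following (the URL and numbers are placeholders; -b disables the per-request delay, -c sets the number of concurrent users, -t the duration), along with a quick way to watch how many client sockets pile up in TIME_WAIT during the test:

        # Drive the load from the client machine
        siege -b -c 250 -t 60S https://server.example.com/some/page

        # In another terminal on the client, count sockets stuck in TIME_WAIT
        netstat -tan | grep -c TIME_WAIT

    If that count sits near the size of the ephemeral port range, the client really is running out of ports; if not, the 170 req/sec ceiling is probably elsewhere (for example in Siege's own connection handling or the server's keep-alive settings).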

    Read the article

  • AMD processors with graphics card bundled [closed]

    - by shybovycha
    Sorry for posting this question here - I just don't know which StackExchange site I should be writing to. I've heard AMD has created processors with a video card bundled. These processors should work as fast as the usual processor plus discrete video card combination, but AMD's should use less power and give off less heat. Some googling around gave me results like "AMD processors of the A-series", which were mentioned as being built using the technology I described above. But on the other hand, AMD drivers are released infrequently and are of not-very-good quality. I am a game developer and web developer, so I need a powerful processor, a good graphics card and a lot of RAM on board (to make it possible to create a sample Grails application, for example, or to create some 3D models in Maya/Cinema4D). Still, I want my battery to be long-lived, so power usage is a bit critical for me. So, my questions are: is there any processor technology like the one I've described, and which series use it (if it exists)? And which processor would fit a laptop best for the purposes mentioned above: an AMD one, or an i5/i7 with an nVidia graphics card?

    Read the article

  • How to avoid duplicates when copying files that have been renamed at the destination

    - by Benoitt
    I have to get pictures from a folder – with subfolders which are updated automatically – keeping their extensions. These files have to be copied into a folder where a PHP-based website will edit them (renaming them and creating an XML file) so they can be downloaded and integrated into an XML feed. Because of the script's rename step, when I perform the copy again all the files are duplicated, since the script has already renamed the original ones. I've tried a few things with rsync, but I'm looking for something more powerful, because I can't copy files based on an external "history".

        #!/bin/bash
        find '/home/name/picture' -name '*.jpg' | while read FILE ; do
            rsync --backup --backup-dir=incremental --suffix=.old "$FILE" /var/www/media ;
        done
        wget --spider 'http://myscript.php' ;
        #exit 0

    PS: As a little addition, I'd like to replace '.' with a space just after the *.jpeg copy. My PHP script has some problems handling files with commas because of the extension. I'm thinking about a find command – like the one above – combined with a sed function? Is that a good idea?
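
    One low-tech way to keep that external history yourself is to log every file you have already pushed and skip it on later runs, so renames on the website side no longer cause re-copies. A rough sketch under that assumption (paths reused from the question; the log location is made up):

        #!/bin/bash
        SENT_LOG=/var/lib/picture-sync/sent.log
        mkdir -p "$(dirname "$SENT_LOG")"
        touch "$SENT_LOG"

        find /home/name/picture -name '*.jpg' | while read -r FILE ; do
            # Skip anything already handed to the website once,
            # even if the PHP script has renamed its copy since
            if ! grep -qxF "$FILE" "$SENT_LOG" ; then
                rsync "$FILE" /var/www/media/ && echo "$FILE" >> "$SENT_LOG"
            fi
        done

        wget --spider 'http://myscript.php'

    The log records only source paths, so it keeps working no matter what the PHP script renames the copies to; the trade-off is that a genuinely changed source file won't be re-sent unless its entry is removed from the log.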

    Read the article

  • Building a small server farm

    - by RayQuang
    Hi, I am planning to set up a tech startup company that will provide web application solutions. Eventually we hope to diversify into different areas, such as possibly social media or other services. For now we plan on running a high-demand website (from 1,000 to 10,000 users in the first year) running the application. This includes a MySQL database backend, email, and development servers. My question, then, is what type of server arrangement will work best; that is to say, should I have a small cluster of ultra-high-power machines (e.g. top-of-the-range Xeons with 12GB RAM), or would it be better to have more, less powerful servers behind a load balancer? Should I go for 1-2U rack-mounted servers, or would tower servers be better for maintainability? Finally, I would also like to know what kind of Internet connection and router I would need. I currently have 10Mbit down and barely 1Mbit up, but soon our area will have a fiber optic connection with international speeds of up to 25Mbit/sec. Thanks in advance, RayQuang. UPDATE: sorry, I forgot to mention it: the platform I will be using is PHP with the APC code cache, probably running on Debian.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • An Introduction to jQuery Templates

    - by Stephen Walther
    The goal of this blog entry is to provide you with enough information to start working with jQuery Templates. jQuery Templates enable you to display and manipulate data in the browser. For example, you can use jQuery Templates to format and display a set of database records that you have retrieved with an Ajax call. jQuery Templates supports a number of powerful features such as template tags, template composition, and wrapped templates. I’ll concentrate on the features that I think that you will find most useful. In order to focus on the jQuery Templates feature itself, this blog entry is server technology agnostic. All the samples use HTML pages instead of ASP.NET pages. In a future blog entry, I’ll focus on using jQuery Templates with ASP.NET Web Forms and ASP.NET MVC (You can do some pretty powerful things when jQuery Templates are used on the client and ASP.NET is used on the server). Introduction to jQuery Templates The jQuery Templates plugin was developed by the Microsoft ASP.NET team in collaboration with the open-source jQuery team. While working at Microsoft, I wrote the original proposal for jQuery Templates, Dave Reed wrote the original code, and Boris Moore wrote the final code. The jQuery team – especially John Resig – was very involved in each step of the process. Both the jQuery community and ASP.NET communities were very active in providing feedback. jQuery Templates will be included in the jQuery core library (the jQuery.js library) when jQuery 1.5 is released. Until jQuery 1.5 is released, you can download the jQuery Templates plugin from the jQuery Source Code Repository or you can use jQuery Templates directly from the ASP.NET CDN. The documentation for jQuery Templates is already included with the official jQuery documentation at http://api.jQuery.com. The main entry for jQuery templates is located under the topic plugins/templates. A Basic Sample of jQuery Templates Let’s start with a really simple sample of using jQuery Templates. We’ll use the plugin to display a list of books stored in a JavaScript array. Here’s the complete code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head> <title>Intro</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg" }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg" }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg" }, ]; // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html> When you open this page in a browser, a list of books is displayed: There are several things going on in this page which require explanation. 
First, notice that the page uses both the jQuery 1.4.4 and jQuery Templates libraries. Both libraries are retrieved from the ASP.NET CDN: <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> You can use the ASP.NET CDN for free (even for production websites). You can learn more about the files included on the ASP.NET CDN by visiting the ASP.NET CDN documentation page. Second, you should notice that the actual template is included in a script tag with a special MIME type: <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> This template is displayed for each of the books rendered by the template. The template displays a book picture, title, and price. Notice that the SCRIPT tag which wraps the template has a MIME type of text/x-jQuery-tmpl. Why is the template wrapped in a SCRIPT tag and why the strange MIME type? When a browser encounters a SCRIPT tag with an unknown MIME type, it ignores the content of the tag. This is the behavior that you want with a template. You don’t want a browser to attempt to parse the contents of a template because this might cause side effects. For example, the template above includes an <img> tag with a src attribute that points at “BookPictures/${picture}”. You don’t want the browser to attempt to load an image at the URL “BookPictures/${picture}”. Instead, you want to prevent the browser from processing the IMG tag until the ${picture} expression is replaced by with the actual name of an image by the jQuery Templates plugin. If you are not worried about browser side-effects then you can wrap a template inside any HTML tag that you please. For example, the following DIV tag would also work with the jQuery Templates plugin: <div id="bookTemplate" style="display:none"> <div> <h2>${title}</h2> price: ${formatPrice(price)} </div> </div> Notice that the DIV tag includes a style=”display:none” attribute to prevent the template from being displayed until the template is parsed by the jQuery Templates plugin. Third, notice that the expression ${…} is used to display the value of a JavaScript expression within a template. For example, the expression ${title} is used to display the value of the book title property. You can use any JavaScript function that you please within the ${…} expression. For example, in the template above, the book price is formatted with the help of the custom JavaScript formatPrice() function which is defined lower in the page. Fourth, and finally, the template is rendered with the help of the tmpl() method. The following statement selects the bookTemplate and renders an array of books using the bookTemplate. The results are appended to a DIV element named bookContainer by using the standard jQuery appendTo() method. $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); Using Template Tags Within a template, you can use any of the following template tags. {{tmpl}} – Used for template composition. See the section below. {{wrap}} – Used for wrapped templates. See the section below. {{each}} – Used to iterate through a collection. {{if}} – Used to conditionally display template content. {{else}} – Used with {{if}} to conditionally display template content. {{html}} – Used to display the value of an HTML expression without encoding the value. 
Using ${…} or {{= }} performs HTML encoding automatically. {{= }}-- Used in exactly the same way as ${…}. {{! }} – Used for displaying comments. The contents of a {{!...}} tag are ignored. For example, imagine that you want to display a list of blog entries. Each blog entry could, possibly, have an associated list of categories. The following page illustrates how you can use the { if}} and {{each}} template tags to conditionally display categories for each blog entry:   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>each</title> <link href="1_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="blogPostContainer"></div> <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> var blogPosts = [ { postTitle: "How to fix a sink plunger in 5 minutes", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna.", categories: ["HowTo", "Sinks", "Plumbing"] }, { postTitle: "How to remove a broken lightbulb", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna.", categories: ["HowTo", "Lightbulbs", "Electricity"] }, { postTitle: "New associate website", postEntry: "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna." } ]; // Render the blog posts $("#blogPostTemplate").tmpl(blogPosts).appendTo("#blogPostContainer"); </script> </body> </html> When this page is opened in a web browser, the following list of blog posts and categories is displayed: Notice that the first and second blog entries have associated categories but the third blog entry does not. The third blog entry is “Uncategorized”. The template used to render the blog entries and categories looks like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script> Notice the special expression $value used within the {{each}} template tag. You can use $value to display the value of the current template item. In this case, $value is used to display the value of each category in the collection of categories. Template Composition When building a fancy page, you might want to build a template out of multiple templates. In other words, you might want to take advantage of template composition. For example, imagine that you want to display a list of products. Some of the products are being sold at their normal price and some of the products are on sale. 
In that case, you might want to use two different templates for displaying a product: a productTemplate and a productOnSaleTemplate. The following page illustrates how you can use the {{tmpl}} tag to build a template from multiple templates:   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Composition</title> <link href="2_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContainer"> <h1>Products</h1> <div id="productListContainer"></div> <!-- Show list of products using composition --> <script id="productListTemplate" type="text/x-jQuery-tmpl"> <div> {{if onSale}} {{tmpl "#productOnSaleTemplate"}} {{else}} {{tmpl "#productTemplate"}} {{/if}} </div> </script> <!-- Show product --> <script id="productTemplate" type="text/x-jQuery-tmpl"> ${name} </script> <!-- Show product on sale --> <script id="productOnSaleTemplate" type="text/x-jQuery-tmpl"> <b>${name}</b> <img src="images/on_sale.png" alt="On Sale" /> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> var products = [ { name: "Laptop", onSale: false }, { name: "Apples", onSale: true }, { name: "Comb", onSale: false } ]; $("#productListTemplate").tmpl(products).appendTo("#productListContainer"); </script> </div> </body> </html>   In the page above, the main template used to display the list of products looks like this: <script id="productListTemplate" type="text/x-jQuery-tmpl"> <div> {{if onSale}} {{tmpl "#productOnSaleTemplate"}} {{else}} {{tmpl "#productTemplate"}} {{/if}} </div> </script>   If a product is on sale then the product is displayed with the productOnSaleTemplate (which includes an on sale image): <script id="productOnSaleTemplate" type="text/x-jQuery-tmpl"> <b>${name}</b> <img src="images/on_sale.png" alt="On Sale" /> </script>   Otherwise, the product is displayed with the normal productTemplate (which does not include the on sale image): <script id="productTemplate" type="text/x-jQuery-tmpl"> ${name} </script>   You can pass a parameter to the {{tmpl}} tag. The parameter becomes the data passed to the template rendered by the {{tmpl}} tag. For example, in the previous section, we used the {{each}} template tag to display a list of categories for each blog entry like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{each categories}} <i>${$value}</i> {{/each}} {{else}} Uncategorized {{/if}} </script>   Another way to create this template is to use template composition like this: <script id="blogPostTemplate" type="text/x-jQuery-tmpl"> <h1>${postTitle}</h1> <p> ${postEntry} </p> {{if categories}} Categories: {{tmpl(categories) "#categoryTemplate"}} {{else}} Uncategorized {{/if}} </script> <script id="categoryTemplate" type="text/x-jQuery-tmpl"> <i>${$data}</i> &nbsp; </script>   Using the {{each}} tag or {{tmpl}} tag is largely a matter of personal preference. Wrapped Templates The {{wrap}} template tag enables you to take a chunk of HTML and transform the HTML into another chunk of HTML (think easy XSLT). When you use the {{wrap}} tag, you work with two templates. 
The first template contains the HTML being transformed and the second template includes the filter expressions for transforming the HTML. For example, you can use the {{wrap}} template tag to transform a chunk of HTML into an interactive tab strip: When you click any of the tabs, you see the corresponding content. This tab strip was created with the following page: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Wrapped Templates</title> <style type="text/css"> body { font-family: Arial; background-color:black; } .tabs div { display:inline-block; border-bottom: 1px solid black; padding:4px; background-color:gray; cursor:pointer; } .tabs div.tabState_true { background-color:white; border-bottom:1px solid white; } .tabBody { border-top:1px solid white; padding:10px; background-color:white; min-height:400px; width:400px; } </style> </head> <body> <div id="tabsView"></div> <script id="tabsContent" type="text/x-jquery-tmpl"> {{wrap "#tabsWrap"}} <h3>Tab 1</h3> <div> Content of tab 1. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 2</h3> <div> Content of tab 2. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 3</h3> <div> Content of tab 3. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> {{/wrap}} </script> <script id="tabsWrap" type="text/x-jquery-tmpl"> <div class="tabs"> {{each $item.html("h3", true)}} <div class="tabState_${$index === selectedTabIndex}"> ${$value} </div> {{/each}} </div> <div class="tabBody"> {{html $item.html("div")[selectedTabIndex]}} </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Global for tracking selected tab var selectedTabIndex = 0; // Render the tab strip $("#tabsContent").tmpl().appendTo("#tabsView"); // When a tab is clicked, update the tab strip $("#tabsView") .delegate(".tabState_false", "click", function () { var templateItem = $.tmplItem(this); selectedTabIndex = $(this).index(); templateItem.update(); }); </script> </body> </html>   The “source” for the tab strip is contained in the following template: <script id="tabsContent" type="text/x-jquery-tmpl"> {{wrap "#tabsWrap"}} <h3>Tab 1</h3> <div> Content of tab 1. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 2</h3> <div> Content of tab 2. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> <h3>Tab 3</h3> <div> Content of tab 3. Lorem ipsum dolor <b>sit</b> amet, consectetuer adipiscing elit. 
Maecenas porttitor congue massa. Fusce posuere, magna sed pulvinar ultricies, purus lectus malesuada libero, sit amet commodo magna eros quis urna. </div> {{/wrap}} </script>   The tab strip is created with a list of H3 elements (which represent each tab) and DIV elements (which represent the body of each tab). Notice that the HTML content is wrapped in the {{wrap}} template tag. This template tag points at the following tabsWrap template: <script id="tabsWrap" type="text/x-jquery-tmpl"> <div class="tabs"> {{each $item.html("h3", true)}} <div class="tabState_${$index === selectedTabIndex}"> ${$value} </div> {{/each}} </div> <div class="tabBody"> {{html $item.html("div")[selectedTabIndex]}} </div> </script> The tabs DIV contains all of the tabs. The {{each}} template tag is used to loop through each of the H3 elements from the source template and render a DIV tag that represents a particular tab. The template item html() method is used to filter content from the “source” HTML template. The html() method accepts a jQuery selector for its first parameter. The tabs are retrieved from the source template by using an h3 filter. The second parameter passed to the html() method – the textOnly parameter -- causes the filter to return the inner text of each h3 element. You can learn more about the html() method at the jQuery website (see the section on $item.html()). The tabBody DIV renders the body of the selected tab. Notice that the {{html}} template tag is used to display the tab body so that HTML content in the body won’t be HTML encoded. The html() method is used, once again, to grab all of the DIV elements from the source HTML template. The selectedTabIndex global variable is used to display the contents of the selected tab. Remote Templates A common feature request for jQuery templates is support for remote templates. Developers want to be able to separate templates into different files. Adding support for remote templates requires only a few lines of extra code (Dave Ward has a nice blog entry on this). 
For example, the following page uses a remote template from a file named BookTemplate.htm: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Remote Templates</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg" }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg" }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg" }, ]; // Get the remote template $.get("BookTemplate.htm", null, function (bookTemplate) { // Render the books using the remote template $.tmpl(bookTemplate, books).appendTo("#bookContainer"); }); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html> The remote template is retrieved (and rendered) with the following code: // Get the remote template $.get("BookTemplate.htm", null, function (bookTemplate) { // Render the books using the remote template $.tmpl(bookTemplate, books).appendTo("#bookContainer"); }); This code uses the standard jQuery $.get() method to get the BookTemplate.htm file from the server with an Ajax request. After the BookTemplate.htm file is successfully retrieved, the $.tmpl() method is used to render an array of books with the template. Here's what the BookTemplate.htm file looks like: <div> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> Notice that the template in the BookTemplate.htm file is not wrapped in a SCRIPT element. There is no need to wrap the template in this case because there is no danger of the browser interpreting the template before you want it to be interpreted. If you plan to use the bookTemplate multiple times – for example, when you are paging or sorting the books – then you should compile the template into a function and cache the compiled template function. For example, the following page can be used to page through a list of 100 products (using iPhone-style More paging). 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Template Caching</title> <link href="6_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Products</h1> <div id="productContainer"></div> <button id="more">More</button> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Globals var pageIndex = 0; // Create an array of products var products = []; for (var i = 0; i < 100; i++) { products.push({ name: "Product " + (i + 1) }); } // Get the remote template $.get("ProductTemplate.htm", null, function (productTemplate) { // Compile and cache the template $.template("productTemplate", productTemplate); // Render the products renderProducts(0); }); $("#more").click(function () { pageIndex++; renderProducts(); }); function renderProducts() { // Get page of products var pageOfProducts = products.slice(pageIndex * 5, pageIndex * 5 + 5); // Use the cached productTemplate to render products $.tmpl("productTemplate", pageOfProducts).appendTo("#productContainer"); } function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html> The ProductTemplate is retrieved from an external file named ProductTemplate.htm. This template is retrieved only once. Furthermore, it is compiled and cached with the help of the $.template() method: // Get the remote template $.get("ProductTemplate.htm", null, function (productTemplate) { // Compile and cache the template $.template("productTemplate", productTemplate); // Render the products renderProducts(0); }); The $.template() method compiles the HTML representation of the template into a JavaScript function and caches the template function with the name productTemplate. The cached template can be used by calling the $.tmpl() method. The productTemplate is used in the renderProducts() method: function renderProducts() { // Get page of products var pageOfProducts = products.slice(pageIndex * 5, pageIndex * 5 + 5); // Use the cached productTemplate to render products $.tmpl("productTemplate", pageOfProducts).appendTo("#productContainer"); } In the code above, the first parameter passed to the $.tmpl() method is the name of a cached template. Working with Template Items In this final section, I want to devote some space to discussing Template Items. A new Template Item is created for each rendered instance of a template. For example, if you are displaying a list of 100 products with a template, then 100 Template Items are created. A Template Item has the following properties and methods: data – The data associated with the Template Instance. For example, a product. tmpl – The template associated with the Template Instance. parent – The parent template item if the template is nested. nodes – The HTML content of the template. calls – Used by the {{wrap}} template tag. nest – Used by the {{tmpl}} template tag. wrap – Used to imperatively enable wrapped templates. html – Used to filter content from a wrapped template. See the above section on wrapped templates. update – Used to re-render a template item. The last method – the update() method – is especially interesting because it enables you to re-render a template item with new data or even a new template. For example, the following page displays a list of books. 
When you hover your mouse over any of the books, additional book details are displayed. In the following screenshot, details for ASP.NET Kick Start are displayed. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Template Item</title> <link href="0_Site.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="pageContent"> <h1>ASP.NET Bookstore</h1> <div id="bookContainer"></div> </div> <script id="bookTemplate" type="text/x-jQuery-tmpl"> <div class="bookItem"> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} </div> </script> <script id="bookDetailsTemplate" type="text/x-jQuery-tmpl"> <div class="bookItem"> <img src="BookPictures/${picture}" alt="" /> <h2>${title}</h2> price: ${formatPrice(price)} <p> ${description} </p> </div> </script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.js"></script> <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script> <script type="text/javascript"> // Create an array of books var books = [ { title: "ASP.NET 4 Unleashed", price: 37.79, picture: "AspNet4Unleashed.jpg", description: "The most comprehensive book on Microsoft’s new ASP.NET 4.. " }, { title: "ASP.NET MVC Unleashed", price: 44.99, picture: "AspNetMvcUnleashed.jpg", description: "Writing for professional programmers, Walther explains the crucial concepts that make the Model-View-Controller (MVC) development paradigm work…" }, { title: "ASP.NET Kick Start", price: 4.00, picture: "AspNetKickStart.jpg", description: "Visual Studio .NET is the premier development environment for creating .NET applications…." }, { title: "ASP.NET MVC Unleashed iPhone", price: 44.99, picture: "AspNetMvcUnleashedIPhone.jpg", description: "ASP.NET MVC Unleashed for the iPhone…" }, ]; // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer"); // Get compiled details template var bookDetailsTemplate = $("#bookDetailsTemplate").template(); // Add hover handler $(".bookItem").mouseenter(function () { // Get template item associated with DIV var templateItem = $(this).tmplItem(); // Change template to compiled template templateItem.tmpl = bookDetailsTemplate; // Re-render template templateItem.update(); }); function formatPrice(price) { return "$" + price.toFixed(2); } </script> </body> </html>   There are two templates used to display a book: bookTemplate and bookDetailsTemplate. When you hover your mouse over a template item, the standard bookTemplate is swapped out for the bookDetailsTemplate. The bookDetailsTemplate displays a book description. The books are rendered with the bookTemplate with the following line of code: // Render the books using the template $("#bookTemplate").tmpl(books).appendTo("#bookContainer");   The following code is used to swap the bookTemplate and the bookDetailsTemplate to show details for a book: // Get compiled details template var bookDetailsTemplate = $("#bookDetailsTemplate").template(); // Add hover handler $(".bookItem").mouseenter(function () { // Get template item associated with DIV var templateItem = $(this).tmplItem(); // Change template to compiled template templateItem.tmpl = bookDetailsTemplate; // Re-render template templateItem.update(); });   When you hover your mouse over a DIV element rendered by the bookTemplate, the mouseenter handler executes. 
First, this handler retrieves the Template Item associated with the DIV element by calling the tmplItem() method. The tmplItem() method returns a Template Item. Next, a new template is assigned to the Template Item. Notice that a compiled version of the bookDetailsTemplate is assigned to the Template Item's tmpl property. The template is compiled earlier in the code by calling the template() method. Finally, the Template Item update() method is called to re-render the Template Item with the bookDetailsTemplate instead of the original bookTemplate. Summary This is a long blog entry and I still have not managed to cover all of the features of jQuery Templates :-) However, I've tried to cover the most important features of jQuery Templates such as template composition, template wrapping, and template items. To learn more about jQuery Templates, I recommend that you look at the documentation for jQuery Templates at the official jQuery website. Another great way to learn more about jQuery Templates is to look at the (unminified) source code.

    Read the article

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of an SQL Consultant is very interesting as always. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month, I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book reviewed here: SQLAuthority Book Review – Professional SQL Server 2008 Internals and Troubleshooting. After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week, I delivered the Extended Events session twice, and the attendees really liked it. They really think Extended Events is one of the most powerful tools available. Extended Events can do many things. I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing long theory, I am going to write a very quick script for Extended Events. This event session captures all long-running queries from the moment the event session is started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method for collecting the information needed for troubleshooting. There are many targets where you can store the information, including the XML file target, which I really like. In the following event session, we write the details of the event to two locations: 1) the Ring Buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do. -- Extended Event for finding *long running query* IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LongRunningQuery') DROP EVENT SESSION LongRunningQuery ON SERVER GO -- Create Event CREATE EVENT SESSION LongRunningQuery ON SERVER -- Add event to capture event ADD EVENT sqlserver.sql_statement_completed ( -- Add action - event property ACTION (sqlserver.sql_text, sqlserver.tsql_stack) -- Predicate - time 1000 millisecond WHERE sqlserver.sql_statement_completed.duration > 1000 ) -- Add target for capturing the data - XML File ADD TARGET package0.asynchronous_file_target( SET filename='c:\LongRunningQuery.xet', metadatafile='c:\LongRunningQuery.xem'), -- Add target for capturing the data - Ring Buffer ADD TARGET package0.ring_buffer (SET max_memory = 4096) WITH (max_dispatch_latency = 1 seconds) GO -- Enable Event ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=START GO -- Run long query (longer than 1000 ms) SELECT * FROM AdventureWorks.Sales.SalesOrderDetail ORDER BY UnitPriceDiscount DESC GO -- Stop the event ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=STOP GO -- Read the data from Ring Buffer SELECT CAST(dt.target_data AS XML) AS xmlLockData FROM sys.dm_xe_session_targets dt JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address JOIN sys.server_event_sessions ss ON ds.Name = ss.Name WHERE dt.target_name = 'ring_buffer' AND ds.Name = 'LongRunningQuery' GO -- Read the data from XML File SELECT event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID, event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID, event_data_XML.value('(event/data[3])[1]','INT') AS object_type, event_data_XML.value('(event/data[4])[1]','INT') AS cpu, event_data_XML.value('(event/data[5])[1]','INT') AS duration, event_data_XML.value('(event/data[6])[1]','INT') 
AS reads, event_data_XML.value('(event/data[7])[1]','INT') AS writes, event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text, event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack, CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML).value('(frame/@handle)[1]','VARCHAR(50)') AS handle FROM ( SELECT CAST(event_data AS XML) event_data_XML, * FROM sys.fn_xe_file_target_read_file ('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)) T GO -- Clean up. Drop the event DROP EVENT SESSION LongRunningQuery ON SERVER GO Just run the above script; afterwards, you will find the following result set. This result set contains the queries that ran for over 1000 ms. In our example, I used the XML file target, which does not reset when the SQL Server service or the computer restarts (if you are using the ring buffer DMV, it resets when the SQL Server service restarts). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated by this feature, so I plan to learn more about it and explore its other uses. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the week in Geek History. This week we're taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison. Gmail Goes Public It's hard to believe that Gmail has only been around for seven years and that for the first three years of its life it was invite-only. In 2007 Gmail dropped the invite-only requirement (although they would hold onto the "beta" tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history, Gmail had the slickest web-based email around, with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can't stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we're not kidding. Deep Blue Proves Itself a Chess Master Deep Blue was a supercomputer constructed by IBM with the sole purpose of winning chess matches. In 2011, with the all-seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha, we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chess champion Garry Kasparov, wherein Deep Blue held its own but ultimately lost the match 4-2, shook a lot of people up. What did it mean if an endeavor considered as elegant and quintessentially human as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997 (seen in the photo above). After the win Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of American History and the Computer History Museum. Birth of Thomas Edison Thomas Alva Edison was one of the most prolific inventors in history and held an astounding 1,093 US patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we'd take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games. Other Notable Moments from This Week in Geek History Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn't mean we don't have space to highlight a few more in passing. This week in Geek History: 1971 – Apollo 14 returns to Earth after the third lunar landing mission. 1974 – Birth of Robot Chicken creator Seth Green. 1986 – Death of Dune creator Frank Herbert. Goodnight Dune. 1997 – The Simpsons becomes the longest-running animated show on television. Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with "history" in the subject line and we'll be sure to add it to our list of trivia. 

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around these assets. One such process is redaction - removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked out pages, there are actually many uses for redaction today beyond military and government. Many companies have a need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or less privileged users - insurance companies, banks and legal firms are a few examples. The process of digital redaction actually isn't that far from the old paper method: Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET" Step 2. Make a copy of that document, since some folks still need to access the original contents Step 3. Black out the text or pages you want to hide Step 4. Release or distribute this new 'redacted' copy So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years! 1. With AutoVue's VueLink integration and iSDK, we can integrate with virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK! 2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions. Black out the text you want to hide: CHECK! 3. With AutoVue's conversion capabilities, you can 'burn in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be directly checked into the content management system with no manual intervention. Make a copy of that document: CHECK! 4. Again, leveraging AutoVue's integrations, we can now define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that - no redactions. An 'unauthorized' user, when requesting to view that same document, can get redirected to open the redacted copy of the same document. Release or distribute the new 'redacted' copy: CHECK! See this movie (WMV format, 2mins, 20secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember - this is all in our flagship AutoVue product - no additional software required!

    Read the article

  • Oracle Functional Testing Suite Advanced Pack for Oracle EBS Now Available

    - by Anne Carlson (Oracle Development)
    There's new news about automated testing of E-Business Suite using the Oracle Application Testing Suite, a.k.a., "OATS". E-Business Suite Development is pleased to announce the availability of the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite. The new pack, available with the latest release of Oracle Application Testing Suite (12.4.0.2), provides pre-built test components and flows to automate the in-depth testing of Oracle E-Business Suite applications. Designed for use with the Oracle Application Testing Suite and its Oracle Flow Builder capability, these pre-built components and flows can help Oracle E-Business Suite customers significantly reduce the time and effort needed to create and maintain automated test scripts. The Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite is available now for EBS 12.1.3, and availability for EBS 12.2 is planned. Some Background on Automating Testing with Oracle Application Testing Suite and Oracle Flow Builder Testing complex packaged applications like Oracle E-Business Suite can be time-consuming and challenging for organizations, hampering their ability to upgrade to the latest releases or apply the latest patches. Oracle Application Testing Suite offers organizations a unique and powerful testing platform for Oracle E-Business Suite and other Oracle applications. With the 12.3.0.1 release of Oracle Application Testing Suite, we introduced the Oracle Flow Builder testing framework and an accompanying starter pack of pre-built test components and flows. The starter pack, which contains over 2000 components and 200 flows, provides broad coverage of commonly-used base functionality and is designed to jump-start the test automation effort. Using Oracle Flow Builder, even non-technical testers can create working test scripts using the pre-built components that Oracle provides. Each component represents an atomic test operation such as "create an invoice batch" or "apply an invoice hold." Testers can assemble the pre-built components into test flows, and combine test flows with spreadsheet data to drive the testing of multiple data conditions. The Oracle Flow Builder framework allows customers to add, modify and extend the pre-built components to address new functionality and customizations of the Oracle E-Business Suite. Using Oracle Flow Builder's component-based test generation framework instead of a traditional record/playback approach has allowed the EBS Quality Assurance team to reduce their test automation effort by 60%. E-Business Suite customers can significantly reduce their test automation effort using Oracle Application Testing Suite with Oracle Flow Builder and the pre-built test components and flows that Oracle provides. Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite Improves Test Coverage With the Oracle Application Testing Suite 12.4.0.2 and the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite, we are now delivering a significant number of additional test components and flows beyond those contained in the Oracle Flow Builder starter pack. 
These additional test components and flows provide 70-80% test coverage and enable the automation of detailed and complex test flows across the following Oracle E-Business Suite products: Oracle Asset Lifecycle Management Oracle Channel Revenue Management Oracle Discrete Manufacturing Oracle Incentive Compensation Oracle Lease and Finance Management Oracle Process Manufacturing Oracle Procurement Oracle Project Management Oracle Property Manager Oracle Service Downloads You can download the Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite from the Oracle Technology Network. References Oracle Applications Testing Suite YouTube: Oracle Flow Builder Training YouTube: Oracle Applications Testing Suite and Flow Builder Demonstration Oracle Functional Testing Suite Advanced Pack Readme for E-Business Suite, Note 1905989.1 Related Articles Automate Testing Using Oracle Application Testing Suite with Flow Builder for E-Business Suite EBS 12.1.1 Test Starter Kit Now Available for Oracle Applications Testing Suite Oracle Application Testing Suite 9.0 Supported with Oracle E-Business Suite Using the Oracle Application Testing Suite with EBS: Interim Update #1

    Read the article

  • Add Windows 7’s AeroSnap Feature to Vista and XP

    - by Asian Angel
    Are you using Windows Vista or XP and want that Windows 7 AeroSnap goodness on your own system? Then join us as we look at AeroSnap for Windows Vista and XP. Note: Requires .NET Framework 2.0 or higher (link provided at bottom of article). Setup What exactly does AeroSnap do, you might ask…here is a quote directly from the website: "AeroSnap is a simple but powerful application that allows you to resize, arrange or maximize your desktop windows with just drag'n'drop. Simply drag a window to a side of your desktop to snap it or drag it to the top to maximize. When you drag it back to the last position, the last window size will be restored." As soon as you have finished installing AeroSnap and started it for the first time, the only item that will be visible is the "System Tray Icon". Before going any further you should take a moment to view and make any desired adjustments in the "Options". Note: AeroSnap works with multiple monitors. You may want to have AeroSnap start with Windows each time, but the really nice setting to enable here is the "Snap Preview". If you are using AeroSnap on Vista and have Aero enabled this will really be nice. The second portion may be of interest for those who would like to enable the keyboard shortcut function. One point worth noting about this screen is that the highest number of pixels from the screen's edge that you can set AeroSnap for is 20 pixels. AeroSnap in Action AeroSnap is extremely easy to use…just grab the top of an app window and drag it to the left, right, or top of your screen. Since we installed this on Windows Vista we made certain to enable the "Snap Preview" in the "Options".  We started off with dragging our Firefox 3.7 window towards the left…once we got close to the edge of the screen you can see that the left half of the screen temporarily "shaded over". Note: The "Snap Preview" displays on the left and right movements but not the top movement. Releasing Firefox snapped it right into the "shaded over" part of the screen. The great thing about AeroSnap is that it is really easy to return the app window to its former size…all that you have to do is simply click on and grab the top portion of the app window. Moving Firefox towards the top of our screen and… It quickly snaps into filling the screen. One thing that we did notice is that the window did not "Maximize" as per the function for the button in the upper right corner. Dragging towards the right side now… And snap! Tucked in all nice and neat… You can minimize the app windows to the Taskbar and they will return to their previous "snap area" when "maximized" again. Conclusion If you have been wanting to add Windows 7's AeroSnap goodness to your Vista and XP systems then you should definitely give this app a try. AeroSnap is very easy to set up and operate… Links Download AeroSnap for Windows Vista & XP Download the .NET Framework

    Read the article

  • SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012

    - by pinaldave
    SQL Server 2012 RTM was just announced, and recently I wrote about all the SQL Server 2012 certifications on a single page. As feedback, I received suggestions to have a single page where everything about SQL Server 2012 is listed. I will keep this page updated as new updates are announced. Microsoft SQL Server 2012 Evaluation Microsoft SQL Server 2012 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization. Microsoft SQL Server 2012 Express Microsoft SQL Server 2012 Express is a powerful and reliable free data management system that delivers a rich and reliable data store for lightweight Web sites and desktop applications. Microsoft SQL Server 2012 Feature Pack The Microsoft SQL Server 2012 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012. Microsoft SQL Server 2012 Report Builder Report Builder provides a productive report-authoring environment for IT professionals and power users. It supports the full capabilities of SQL Server 2012 Reporting Services. Microsoft SQL Server 2012 Master Data Services Add-in For Microsoft Excel The Master Data Services Add-in for Excel gives multiple users the ability to update master data in a familiar tool without compromising the data's integrity in Master Data Services. Microsoft SQL Server 2012 Performance Dashboard Reports The SQL Server 2012 Performance Dashboard Reports are Reporting Services report files designed to be used with the Custom Reports feature of SQL Server Management Studio. Microsoft SQL Server 2012 PowerPivot for Microsoft Excel® 2010 Microsoft PowerPivot for Microsoft Excel 2010 provides ground-breaking technology: fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint. Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010 The SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint 2010 technologies allows you to integrate your reporting environment with the collaborative SharePoint 2010 experience. Microsoft SQL Server 2012 Semantic Language Statistics The Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012. Microsoft® SQL Server 2012 FileStream Driver – Windows Logo Certification Catalog file for the Microsoft SQL Server 2012 FileStream Driver that is certified for Windows Server 2008 R2. It meets Microsoft standards for compatibility and recommended practices with the Windows Server 2008 R2 operating system. Microsoft SQL Server StreamInsight 2.0 Microsoft StreamInsight is Microsoft's Complex Event Processing technology to help businesses create event-driven applications and derive better insights by correlating event streams from multiple sources with near-zero latency. Microsoft JDBC Driver 4.0 for SQL Server Download the Microsoft JDBC Driver 4.0 for SQL Server, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and 6. Data Quality Services Performance Best Practices Guide This guide focuses on a set of best practices for optimizing performance of Data Quality Services (DQS). 
Microsoft Drivers 3.0 for SQL Server for PHP The Microsoft Drivers 3.0 for SQL Server for PHP provide connectivity to Microsoft SQL Server from PHP applications. Product Documentation for Microsoft SQL Server 2012 for firewall and proxy restricted environments The Microsoft SQL Server 2012 setup installs only the Help Viewer; it does not install any documentation. All of the SQL Server documentation is available online. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers   We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers. Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and, hence, preconceived and implemented inside an application. Read on for more information! TM Forum Management World, Nice, France - 18 May 2010 News Facts To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost-effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Product Details Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using: More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity. Embedded OLAP cubes for extremely fast dimensional analysis of business information. Embedded data mining models for sophisticated trending and predictive analysis. Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements. With Oracle Communications Data Model, CSPs can jump-start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more. Supporting Quotes "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation. "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. 
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • Oracle Forms Migration to ADF - Webinar vom ORACLE Partner PITSS

    - by Thomas Leopold
      Tuesday, February 22, 2011 5:00 PM - 6:00 PM CET Free Webinar Re-Engineering Legacy Oracle Forms Migration from Forms to ADF - A Case Study Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration. Learn about various migration options, challenges and best practices to protect your current investment in Oracle Forms. "PL/SQL is without match for what it does: manipulating data in the database. If you blindly migrate all your PL/SQL to Java you will, in all probability, end up with less maintainable and less efficient code. Instead you should consider which code is best left as PL/SQL..." Grant Ronald - "Migrating Oracle Forms to Fusion: Myth or Magic Bullet?" Re-engineering existing business logic is mandatory for your legacy Forms application to take advantage of new software architectures like ADF. The PITSS.CON solution combines a deep understanding of Oracle Forms and Reports application architecture with powerful re-engineering capabilities that allow the developer community to protect their investment in existing Forms applications and to concentrate on fine-tuning and customization of the modernized functionality rather than manually recreating every module and piece of business logic from the bottom up. Registration: https://www2.gotomeeting.com/register/971702250 PITSS GmbH, Königsdorferstrasse 25, D-82515 Wolfratshausen. Do not forget to check out these Free Webinars in March! Thursday, March 3, 2011 Upgrade and Modernize Your Application to Forms 11g Registration/Information Tuesday, March 15, 2011 Shaping the Future for your Oracle Forms Application: Forms 11g, ADF, APEX Registration/Information Tuesday, March 29, 2011 Oracle Forms Modernization to APEX Registration/Information Registration is limited, so sign up today! Presented By: Grant Ronald, Senior Group Product Manager, Oracle; Magdalena Serban, Product Manager, PITSS Contact Us: PITSS in Americas +1 248.740.0935 [email protected] www.pitssamerica.com PITSS in Europe +49 (0) 717287 5200 [email protected] www.pitss.com White Paper: From Oracle Forms to Oracle ADF and JEE © Copyright 2010 PITSS GmbH, Wolfratshausen, Stuttgart, München; Managing Directors: Dipl.-Ing. Andreas Gaede, Michael Kilimann, Dipl.-Ing. Dirk Fleischmann Commercial Register: HRB 125471 at District Court Munich. All rights reserved. Any duplication or further treatment in any medium, in parts or as a whole, requires a written agreement.

    Read the article

  • asynchrony is viral

    - by Daniel Moth
    It is becoming hard to write code today without introducing some form of asynchrony and, if you are using .NET (e.g. for Windows Phone 8 or Windows Store apps), that means sooner or later you have to await something and mark your method as async. My most recent examples included introducing speech recognition in my Translator By Moth phone app where I had to await mySpeechRecognizerUI.RecognizeWithUIAsync() and when moving that code base to a Windows Store project just to show a MessageBox I had to await myMessageDialog.ShowAsync(). Any time you need to invoke an asynchronous method in your code, you have a choice to make: kick off the operation but don’t wait for it to complete (otherwise known as fire-and-forget), synchronously wait for it to complete (which will entail blocking, which can be bad, especially on a UI thread), or asynchronously wait for it to complete before continuing on with the rest of the method’s work. In most cases, you want the latter, and the await keyword makes that trivial to implement.  When you use the magical await keyword in front of an API call, then you typically have to make additional changes to your code: This await usage is within a method of course, and now you have to annotate that method with async. Furthermore, you have to change the return type of the method you just annotated so it returns a Task (if it previously returned void), or Task<myOldReturnType> (if it previously returned myOldReturnType). Note that if it returns void, in some cases you could cheat and stop there. Furthermore, any method that called this method you just annotated with async will now also be invoking an asynchronous operation, so you have to make that change in the body of the caller method to introduce the await keyword before the call to the method. …you guessed it, you now have to change this caller method to be annotated with async and have its return types tweaked... …and it goes on virally… At some point you reach the root of your user code, e.g. a GUI event handler, and whoever calls that void method can already deal with the fact that you marked it as async and the viral introduction of the keywords stops there… This is all wonderful progress and a very powerful mechanism, and I just wish someone had written a refactoring tool to take care of this… anyone? I mentioned earlier that you have a choice when invoking an asynchronous operation. If the first time you encounter this you wish to localize the impact of all these changes and essentially try to turn the asynchronous behavior into synchronous by blocking - don't! For reasons why you don't want to do that, read Toub's excellent blog post (and check out the rest of his blog with gems on async programming starting with the Async FAQ). Just embrace the pattern knowing that when you use one instance of an await, you'll propagate the change all the way to the root user code method, e.g. typically an event handler. Related aside: I just finished re-writing my MessageBox wrapper class for Phone projects, including making it work in Windows Store projects, and it does expect you to use it with an await :-). I'll share that in an upcoming post for those of you that have the same need… Comments about this post by Daniel Moth welcome at the original blog.
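    To see this propagation in code, here is a minimal C# sketch (the type and method names are hypothetical, and Task.Delay stands in for a real asynchronous API call such as RecognizeWithUIAsync or ShowAsync); notice how a single await at the bottom of the chain forces the async keyword and Task-based return types all the way up to the event-handler root:

    using System;
    using System.Threading.Tasks;

    public class TranslatorPage
    {
        // Root of the user code: an event handler is allowed to remain "async void",
        // so the viral spread of the keyword stops here.
        private async void TranslateButton_Click(object sender, EventArgs e)
        {
            string result = await TranslateAsync("hello");
            Console.WriteLine(result);
        }

        // This method had to become async and return Task<string> (instead of string)
        // because it awaits another asynchronous method...
        private async Task<string> TranslateAsync(string input)
        {
            await EnsureConnectedAsync();
            return input.ToUpperInvariant();
        }

        // ...which is itself async only because, at the very bottom of the chain,
        // it awaits an asynchronous operation (Task.Delay stands in for a real API call).
        private async Task EnsureConnectedAsync()
        {
            await Task.Delay(100);
        }
    }

    Reading the sketch from the bottom up mirrors the order in which the changes spread: the leaf await forces EnsureConnectedAsync to become async Task, its caller must await it and becomes async Task<string>, and the event handler at the root simply stays async void.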

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >