Search Results

Search found 9916 results on 397 pages for 'counting sort'.

Page 96/397

  • One Apache server, multiple clients - best practices for config files?

    - by OttaSean
    First time user; please be gentle. :-) (And if you don't like my question I'd be grateful for a comment as to why...) I am doing a contract at a government server shop that provides web services for multiple client groups in other areas of the government. My employer has asked me to look into how other shops, in similar situations, handle configuration files, and whether there are any best practices on the subject. I'm pretty sure there are lots of installations out there running multiple VirtualHosts out of one Apache installation, but surprisingly I couldn't find anything online about how people handle config file layout, so was hoping some of you wise folks on ServerFault might have some thoughts or pointers for me.

    The current setup - which seems logical to me - is that each client site has its own directory off the root - so:

        /client/tps-reports/
        /client/silly-walks/
        /client/ministry-of-magic/

    and so on - and each of those directories has a /htdocs, /cgi-bin, and /conf (among others). The main /etc/apache/httpd.conf only contains Include statements (and lots of comments), the last of which is:

        Include /etc/apache/vhosts/*.conf

    The vhosts directory contains symlinks:

        tpsrept.conf - /client/tps-reports/conf/tpsrept.conf
        sillywk.conf - /client/silly-walks/conf/sillywk.conf
        mom.conf - /client/ministry-of-magic/mom.conf

    Each of those .conf files contains the actual NameVirtualHost definition and a gigantic <VirtualHost 192.168.12.34> stanza - which contains all the stuff about the specific site. The idea is that clients have access to what's in their own /client/xx directory, so they can change stuff in the section of the config that is relevant to them.

    As I mentioned above, that seems fairly logical to me, but I'm wondering if any of you wise folks are aware of potential gotchas with this sort of layout, or any other thoughts on why it is or isn't a good idea. In particular, how do other places do it? Is there a "best practice" for this sort of thing? Many thanks in advance for your time and any thoughts you all might have.
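    As a rough illustration of that layout (the hostname and log path below are invented for the example, not taken from the question), the main config and one per-client vhost file fit together roughly like this:

```apache
# /etc/apache/httpd.conf - comments plus Include lines, the last of which is:
Include /etc/apache/vhosts/*.conf

# /etc/apache/vhosts/tpsrept.conf is a symlink to
# /client/tps-reports/conf/tpsrept.conf, which holds that client's config:
NameVirtualHost 192.168.12.34

<VirtualHost 192.168.12.34>
    ServerName   tps-reports.example.gov
    DocumentRoot /client/tps-reports/htdocs
    ScriptAlias  /cgi-bin/ /client/tps-reports/cgi-bin/
    ErrorLog     /client/tps-reports/logs/error_log
</VirtualHost>
```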

  • Convert Java program to C

    - by imicrothinking
    I need a bit of guidance with writing a C program...a bit of quick background as to my level, I've programmed in Java previously, but this is my first time programming in C, and we've been tasked to translate a word count program from Java to C that consists of the following: Read a file from memory Count the words in the file For each occurrence of a unique word, keep a word counter variable Print out the top ten most frequent words and their corresponding occurrences Here's the source program in Java: package lab0; import java.io.File; import java.io.FileReader; import java.util.ArrayList; import java.util.Calendar; import java.util.Collections; public class WordCount { private ArrayList<WordCountNode> outputlist = null; public WordCount(){ this.outputlist = new ArrayList<WordCountNode>(); } /** * Read the file into memory. * * @param filename name of the file. * @return content of the file. * @throws Exception if the file is too large or other file related exception. */ public char[] readFile(String filename) throws Exception{ char [] result = null; File file = new File(filename); long size = file.length(); if (size > Integer.MAX_VALUE){ throw new Exception("File is too large"); } result = new char[(int)size]; FileReader reader = new FileReader(file); int len, offset = 0, size2read = (int)size; while(size2read > 0){ len = reader.read(result, offset, size2read); if(len == -1) break; size2read -= len; offset += len; } return result; } /** * Make article word by word. * * @param article the content of file to be counted. * @return string contains only letters and "'". */ private enum SPLIT_STATE {IN_WORD, NOT_IN_WORD}; /** * Go through article, find all the words and add to output list * with their count. * * @param article the content of the file to be counted. * @return words in the file and their counts. */ public ArrayList<WordCountNode> countWords(char[] article){ SPLIT_STATE state = SPLIT_STATE.NOT_IN_WORD; if(null == article) return null; char curr_ltr; int curr_start = 0; for(int i = 0; i < article.length; i++){ curr_ltr = Character.toUpperCase( article[i]); if(state == SPLIT_STATE.IN_WORD){ article[i] = curr_ltr; if ((curr_ltr < 'A' || curr_ltr > 'Z') && curr_ltr != '\'') { article[i] = ' '; //printf("\nthe word is %s\n\n",curr_start); if(i - curr_start < 0){ System.out.println("i = " + i + " curr_start = " + curr_start); } addWord(new String(article, curr_start, i-curr_start)); state = SPLIT_STATE.NOT_IN_WORD; } }else{ if (curr_ltr >= 'A' && curr_ltr <= 'Z') { curr_start = i; article[i] = curr_ltr; state = SPLIT_STATE.IN_WORD; } } } return outputlist; } /** * Add the word to output list. */ public void addWord(String word){ int pos = dobsearch(word); if(pos >= outputlist.size()){ outputlist.add(new WordCountNode(1L, word)); }else{ WordCountNode tmp = outputlist.get(pos); if(tmp.getWord().compareTo(word) == 0){ tmp.setCount(tmp.getCount() + 1); }else{ outputlist.add(pos, new WordCountNode(1L, word)); } } } /** * Search the output list and return the position to put word. * @param word is the word to be put into output list. * @return position in the output list to insert the word. 
*/ public int dobsearch(String word){ int cmp, high = outputlist.size(), low = -1, next; // Binary search the array to find the key while (high - low > 1) { next = (high + low) / 2; // all in upper case cmp = word.compareTo((outputlist.get(next)).getWord()); if (cmp == 0) return next; else if (cmp < 0) high = next; else low = next; } return high; } public static void main(String args[]){ // handle input if (args.length == 0){ System.out.println("USAGE: WordCount <filename> [Top # of results to display]\n"); System.exit(1); } String filename = args[0]; int dispnum; try{ dispnum = Integer.parseInt(args[1]); }catch(Exception e){ dispnum = 10; } long start_time = Calendar.getInstance().getTimeInMillis(); WordCount wordcount = new WordCount(); System.out.println("Wordcount: Running..."); // read file char[] input = null; try { input = wordcount.readFile(filename); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); System.exit(1); } // count all word ArrayList<WordCountNode> result = wordcount.countWords(input); long end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordcount: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); System.out.println("wordsort: running ..."); start_time = Calendar.getInstance().getTimeInMillis(); Collections.sort(result); end_time = Calendar.getInstance().getTimeInMillis(); System.out.println("wordsort: completed " + (end_time - start_time)/1000000 + "." + (end_time - start_time)%1000000 + "(s)"); Collections.reverse(result); System.out.println("\nresults (TOP "+ dispnum +" from "+ result.size() +"):\n" ); // print out result String str ; for (int i = 0; i < result.size() && i < dispnum; i++){ if(result.get(i).getWord().length() > 15) str = result.get(i).getWord().substring(0, 14); else str = result.get(i).getWord(); System.out.println(str + " - " + result.get(i).getCount()); } } public class WordCountNode implements Comparable{ private String word; private long count; public WordCountNode(long count, String word){ this.count = count; this.word = word; } public String getWord() { return word; } public void setWord(String word) { this.word = word; } public long getCount() { return count; } public void setCount(long count) { this.count = count; } public int compareTo(Object arg0) { // TODO Auto-generated method stub WordCountNode obj = (WordCountNode)arg0; if( count - obj.getCount() < 0) return -1; else if( count - obj.getCount() == 0) return 0; else return 1; } } } Here's my attempt (so far) in C: #include <stdio.h> #include <stdlib.h> #include <stdbool.h> #include <string.h> // Read in a file FILE *readFile (char filename[]) { FILE *inputFile; inputFile = fopen (filename, "r"); if (inputFile == NULL) { printf ("File could not be opened.\n"); exit (EXIT_FAILURE); } return inputFile; } // Return number of words in an array int wordCount (FILE *filePointer, char filename[]) {//, char *words[]) { // count words int count = 0; char temp; while ((temp = getc(filePointer)) != EOF) { //printf ("%c", temp); if ((temp == ' ' || temp == '\n') && (temp != '\'')) count++; } count += 1; // counting method uses space AFTER last character in word - the last space // of the last character isn't counted - off by one error // close file fclose (filePointer); return count; } // Print out the frequencies of the 10 most frequent words in the console int main (int argc, char *argv[]) { /* Step 1: Read in file and check for errors */ FILE *filePointer; filePointer = readFile (argv[1]); /* Step 
2: Do a word count to prep for array size */ int count = wordCount (filePointer, argv[1]); printf ("Number of words is: %i\n", count); /* Step 3: Create a 2D array to store words in the file */ // open file to reset marker to beginning of file filePointer = fopen (argv[1], "r"); // store words in character array (each element in array = consecutive word) char allWords[count][100]; // 100 is an arbitrary size - max length of word int i,j; char temp; for (i = 0; i < count; i++) { for (j = 0; j < 100; j++) { // labels are used with goto statements, not loops in C temp = getc(filePointer); if ((temp == ' ' || temp == '\n' || temp == EOF) && (temp != '\'') ) { allWords[i][j] = '\0'; break; } else { allWords[i][j] = temp; } printf ("%c", allWords[i][j]); } printf ("\n"); } // close file fclose (filePointer); /* Step 4: Use a simple selection sort algorithm to sort 2D char array */ // PStep 1: Compare two char arrays, and if // (a) c1 > c2, return 2 // (b) c1 == c2, return 1 // (c) c1 < c2, return 0 qsort(allWords, count, sizeof(char[][]), pstrcmp); /* int k = 0, l = 0, m = 0; char currentMax, comparedElement; int max; // the largest element in the current 2D array int elementToSort = 0; // elementToSort determines the element to swap with starting from the left // Outer a iterates through number of swaps needed for (k = 0; k < count - 1; k++) { // times of swaps max = k; // max element set to k // Inner b iterates through successive elements to fish out the largest element for (m = k + 1; m < count - k; m++) { currentMax = allWords[k][l]; comparedElement = allWords[m][l]; // Inner c iterates through successive chars to set the max vars to the largest for (l = 0; (currentMax != '\0' || comparedElement != '\0'); l++) { if (currentMax > comparedElement) break; else if (currentMax < comparedElement) { max = m; currentMax = allWords[m][l]; break; } else if (currentMax == comparedElement) continue; } } // After max (count and string) is determined, perform swap with temp variable char swapTemp[1][20]; int y = 0; do { swapTemp[0][y] = allWords[elementToSort][y]; allWords[elementToSort][y] = allWords[max][y]; allWords[max][y] = swapTemp[0][y]; } while (swapTemp[0][y++] != '\0'); elementToSort++; } */ int a, b; for (a = 0; a < count; a++) { for (b = 0; (temp = allWords[a][b]) != '\0'; b++) { printf ("%c", temp); } printf ("\n"); } // Copy rows to different array and print results /* char arrayCopy [count][20]; int ac, ad; char tempa; for (ac = 0; ac < count; ac++) { for (ad = 0; (tempa = allWords[ac][ad]) != '\0'; ad++) { arrayCopy[ac][ad] = tempa; printf("%c", arrayCopy[ac][ad]); } printf("\n"); } */ /* Step 5: Create two additional arrays: (a) One in which each element contains unique words from char array (b) One which holds the count for the corresponding word in the other array */ /* Step 6: Sort the count array in decreasing order, and print the corresponding array element as well as word count in the console */ return 0; } // Perform housekeeping tasks like freeing up memory and closing file I'm really stuck on the selection sort algorithm. I'm currently using 2D arrays to represent strings, and that worked out fine, but when it came to sorting, using three level nested loops didn't seem to work, I tried to use qsort instead, but I don't fully understand that function as well. Constructive feedback and criticism greatly welcome (...and needed)!
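    For reference, here is a minimal, self-contained sketch (not taken from the assignment) of how qsort can be applied to a 2D char array like the allWords[count][100] used above: the element size is one full row (sizeof allWords[0], i.e. 100 bytes), and the comparator receives pointers to rows, which can be compared with strcmp. The name cmpWords is just illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* qsort hands the comparator pointers to array elements; here each element
   is a char[100] row, so the pointers can be treated as C strings. */
static int cmpWords(const void *a, const void *b)
{
    return strcmp((const char *)a, (const char *)b);
}

int main(void)
{
    /* stand-in for the allWords[count][100] array built from the file */
    char allWords[][100] = { "PEAR", "APPLE", "ORANGE", "APPLE" };
    size_t count = sizeof allWords / sizeof allWords[0];
    size_t i;

    /* base pointer, element count, size of one row, comparator */
    qsort(allWords, count, sizeof allWords[0], cmpWords);

    for (i = 0; i < count; i++)
        printf("%s\n", allWords[i]);
    return 0;
}
```

    Sorting the rows this way groups duplicate words next to each other, so the per-word counts can then be produced in a single pass over adjacent equal rows.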

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that had about 20 people using it when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects / inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

    Our main problem with this is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well: essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of these, as they are poorly written and a headache to even look at. The database itself is not overloaded; it's the connection overhead that is too high. We open a connection, make a quick query, and then drop the connection. Very short connections, but many of them.

    The database team has basically said we need to lower the number of connections or they are going to ignore us. Because this is distributed across our farm we can't implement persistent connections. I do this with our webserver, but it's on a fixed system. The other ones are Perl scripts that get opened and closed by the distribution tool and thus aren't always running.

    What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queuing system? It has been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections? Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated.

    Sadly I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
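    Since the scripts can afford to wait, one stop-gap worth sketching (this is an illustration, not something from the question; the DSN and credentials are placeholders) is to retry the connect with a randomized back-off instead of failing outright, which smooths out connection spikes without touching the rest of the script logic:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Sketch only: retry DBI->connect with a randomized back-off so a script
# waits for a free connection slot instead of failing and losing its results.
sub connect_with_retry {
    my ($dsn, $user, $pass, $max_tries) = @_;
    $max_tries ||= 20;
    for my $try (1 .. $max_tries) {
        my $dbh = DBI->connect($dsn, $user, $pass,
                               { RaiseError => 0, PrintError => 0 });
        return $dbh if $dbh;
        sleep(5 + int(rand(25)));   # wait 5-30 seconds before the next attempt
    }
    die "could not connect after $max_tries attempts: $DBI::errstr";
}

my $dbh = connect_with_retry('dbi:Oracle:MYDB', 'someuser', 'secret');
```

    This does not reduce the total number of connections, but it spreads them out and stops the "fail and lose the results" behaviour while a pooling front end (SQL Relay or similar) is being evaluated.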

  • I finished coding my program. What's next? what are the steps I should take that would enable me to

    - by Luay
    Hi, I finished developing a program and would like to sell it online. However, I am not really sure what to do next. Here is my current plan (in order):

    1- Add a 'deployment' project (I'm using Visual Studio) to my project so I can create a setup file for my program.
    2- Use Visual Studio 'testing' add-ons to test the program. I have no idea how to do it, but will teach myself.
    3- When all is done, install the program on my wife's, parents', and in-laws' computers to further test it under different environments.
    4- Set up a small Ltd. company. (I might start this earlier as it might take a few weeks to complete.)
    5- Build / finish building a website (actually I started on this step a few weeks ago but haven't devoted enough time for it to complete yet).
    6- Find a software / service to protect my program from piracy. I know it is impossible to protect the program from piracy. I understand that fully. But I still want to implement some sort of solution that wouldn't harm or deter honest customers.
    7- Set up a business PayPal account.
    8- Find a software / service to handle payments on my website.
    9- Find a software / service to handle issuing license codes.
    10- Set up all of the above and go 'live'.

    However, I have zero experience in this sort of thing and would like your guidance on some points:

    1- Is it necessary to set up a company? Can I sell as an individual? Will selling as an individual deter persons or companies from buying my software?
    2- What is a decent choice for a software / service to protect my program from piracy? I have done a quick search and found something called Quick License Manager by Interactive Studios. Is this the sort of thing I am looking for? Is it any good? At which stage do I use or implement their service (or any similar service you might point me to)? I guess I am really asking: how does this work? Would they give me a file I just add to my project, rebuild it, and that's it?
    3- What should I implement to handle payments online, and how complicated is it? Could you point me to any such services, please?
    4- What should I implement to handle issuing license codes, and how will that software / service coordinate with the software / service that handles the anti-piracy stuff? Could you point me to such services, please?
    5- In a different Stack Overflow question (by another user) someone suggested a CMS system called Software Droid (http://www.softwaredroid.com). Is this what I am after? Will it help?
    6- If the steps I'm following are incorrect in order or logic, could you please advise me on what I should actually do?

    I guess these are questions for those of you who have 'been there and done that', so any guidance will be very much appreciated. Many thanks in advance for your help.

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quick. Here's a run down of what I have, how I'm querying it, and how long it's taking: I'm running SQL Server 2008 Standard, so Partitioning isn't currently an option I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table: CREATE TABLE [dbo].[LogInvSearches_Daily]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [Inv_ID] [int] NOT NULL, [Site_ID] [int] NOT NULL, [LogCount] [int] NOT NULL, [LogDay] [smalldatetime] NOT NULL, CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY] ) ON [PRIMARY] This table has 132,000,000 records, and is over 4 gigs. A sample of 10 rows from the table: ID Inv_ID Site_ID LogCount LogDay -------------------- ----------- ----------- ----------- ----------------------- 1 486752 48 14 2009-07-21 00:00:00 2 119314 51 16 2009-07-21 00:00:00 3 313678 48 25 2009-07-21 00:00:00 4 298863 0 1 2009-07-21 00:00:00 5 119996 0 2 2009-07-21 00:00:00 6 463777 534 7 2009-07-21 00:00:00 7 339976 503 2 2009-07-21 00:00:00 8 333501 570 4 2009-07-21 00:00:00 9 453955 0 12 2009-07-21 00:00:00 10 443291 0 4 2009-07-21 00:00:00 (10 row(s) affected) I have the following index on LogInvSearches_Daily: /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/ CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily] ( [LogDay] ASC ) INCLUDE ( [Inv_ID], [LogCount]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] I need to pull inventory only from the Inventory for a specific account id. I have an index on the Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. 
This query is currently taking 24 seconds to return the 5 rows: StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SELECT TOP 5 Sum(LogCount) AS Views , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank , Inv_ID FROM LogInvSearches_Daily D (NOLOCK) WHERE LogDay DateAdd(d, -30, getdate()) AND EXISTS( SELECT NULL FROM propertyControlCenter.dbo.Inventory (NOLOCK) WHERE Acct_ID = 18731 AND Inv_ID = D.Inv_ID ) GROUP BY Inv_ID (1 row(s) affected) StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |--Top(TOP EXPRESSION:((5))) |--Sequence Project(DEFINE:([Expr1007]=dense_rank)) |--Segment |--Segment |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC)) |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount]))) |--Sort(ORDER BY:([D].[Inv_ID] ASC)) |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID])) |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010])) | |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6)))) | | |--Constant Scan | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD) |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA (13 row(s) affected) I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and gives me essentially the same execution plan. 
(1 row(s) affected) StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --SET SHOWPLAN_TEXT ON; WITH getSearches AS ( SELECT LogCount -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank , D.Inv_ID FROM LogInvSearches_Daily D (NOLOCK) INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK) ON Acct_ID = 18731 AND I.Inv_ID = D.Inv_ID WHERE LogDay DateAdd(d, -30, getdate()) -- GROUP BY Inv_ID ) SELECT Sum(LogCount) AS Views, Inv_ID FROM getSearches GROUP BY Inv_ID (1 row(s) affected) StmtText ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount]))) |--Sort(ORDER BY:([D].[Inv_ID] ASC)) |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID])) |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007])) | |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6)))) | | |--Constant Scan | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD) |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD) (8 row(s) affected) (1 row(s) affected) So given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster? Thanks, Dan
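    One detail visible in both plans above is GetRangeWithMismatchedTypes on the date predicate, which comes from comparing the smalldatetime LogDay column against the datetime returned by GETDATE(). A sketch (untested against this schema, with the DENSE_RANK column dropped for brevity) that pre-computes the cutoff as smalldatetime so the seek range does not need that conversion:

```sql
-- Sketch only: pre-compute the 30-day cutoff as smalldatetime so the
-- LogDay seek avoids the GetRangeWithMismatchedTypes conversion; the
-- DENSE_RANK column from the original query is omitted for brevity.
DECLARE @cutoff smalldatetime = DATEADD(DAY, -30, GETDATE());

SELECT TOP 5
       SUM(D.LogCount) AS Views,
       D.Inv_ID
FROM   dbo.LogInvSearches_Daily AS D
WHERE  D.LogDay >= @cutoff
       AND EXISTS (SELECT 1
                   FROM propertyControlCenter.dbo.Inventory AS I
                   WHERE I.Acct_ID = 18731
                     AND I.Inv_ID  = D.Inv_ID)
GROUP BY D.Inv_ID
ORDER BY Views DESC;
```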

  • Problem with jQuery plugin TinySort

    - by Volmar
    I'm trying to sort a list with the help of jQuery and the TinySort-plugin, and it works good but one thing is not working as i want. My Code is: <!doctype html> <html> <head> <meta charset="UTF-8" /> <title>TinySort problem</title> <script type="text/javascript" src="http://so.volmar.se/www/js/jquery.js"></script> <script type="text/javascript" src="http://so.volmar.se/www/js/jquery.tinysort.min.js"></script> <script type="text/javascript"> function pktsort(way){ if($("div#paket>ul>li.sortdiv>a#s_abc").text() == "A-S"){ $("div#paket>ul>li.sortdiv>a#s_abc").text("S-A"); $("div#paket ul li.sortable").tsort("",{place:"org",returns:true,order:"desc"}); }else{ $("div#paket>ul>li.sortdiv>a#s_abc").text("A-S"); $("div#paket ul li.sortable").tsort("",{place:"org",returns:true,order:"asc"}); } } </script> </head> <body> <div id="paket" title="Paket"> <ul class="rounded"> <li class="sortdiv">Sort: <a href="#" onclick="pktsort();" class="active_sort" id="s_abc">A-S</a></li> <li class="sortable">Almost Famous</li> <li class="sortable">Children of Men</li> <li class="sortable">Coeurs</li> <li class="sortable">Colossal Youth</li> <li class="sortable">Demonlover</li> <li class="sortable">Femme Fatale</li> <li class="sortable">I'm Not There</li> <li class="sortable">In the City of Sylvia</li> <li class="sortable">Into the Wild</li> <li class="sortable">Je rentre à la maison</li> <li class="sortable">King Kong</li> <li class="sortable">Little Miss Sunshine</li> <li class="sortable">Man on Wire</li> <li class="sortable">Milk</li> <li class="sortable">Monsters Inc.</li> <li class="sortable">My Winnipeg</li> <li class="sortable">Ne touchez pas la hache</li> <li class="sortable">Nói albinói</li> <li class="sortable">Regular Lovers</li> <li class="sortable">Shaun of the Dead</li> <li class="sortable">Silent Light</li> <li class="addmore"><b>This text is not supposed to move</b></li> </ul> </div> </body> </html> you can try it out at: http://www.volmar.se/list-prob.html MY PROBLEM IS: I don't want the <li class="addmore"> to move above all the <li class="sortable">-elements when i press the sort-link. i wan't it to always be in the bottom. you can find documentation of the TinySort plugin here. i've tried loads of combinations with place and returns propertys but i just can't get it right.
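    If TinySort keeps re-slotting that last row, a plugin-free sketch of the same sort (an alternative illustration, not TinySort usage) is to detach only the li.sortable items, sort them, and re-insert them before li.addmore so the "addmore" row always stays at the bottom:

```javascript
// Sketch without TinySort: sort only the li.sortable items and put them
// back in front of li.addmore, which therefore never moves.
function pktsortPlain(desc) {
    var items = $("div#paket ul li.sortable").detach().get(); // plain DOM array

    items.sort(function (a, b) {
        var ta = $(a).text().toLowerCase(),
            tb = $(b).text().toLowerCase();
        return desc ? tb.localeCompare(ta) : ta.localeCompare(tb);
    });

    $(items).insertBefore("div#paket ul li.addmore");
}
```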

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context. XML Analogues I recently needed to trawl through Gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this: <foo user="me"> <baz key="zoidberg" value="squid" /> <baz key="leela" value="cyclops" /> <baz key="fry" value="rube" /> </foo> And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate Plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet which is installing as I type this and which I look forward to using in the future. JSON Solutions? So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this: { "firstName": "Bender", "lastName": "Robot", "age": 200, "address": { "streetAddress": "123", "city": "New York", "state": "NY", "postalCode": "1729" }, "phoneNumber": [ { "type": "home", "number": "666 555-1234" }, { "type": "fax", "number": "666 555-4567" } ] } And wanted to find the average number of phone numbers each person had, I could do something like this with XPath: fn:avg(/fn:count(phoneNumber)) Questions Are there any command-line tools that can "query" JSON files in this way? If you have to process a bunch of JSON files on a Unix command line, what tools do you use? Heck, is there even work being done to make a query language like this for JSON? If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas? I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data shell tools are needed. Related Questions Grep and Sed Equivalent for XML Command Line Processing Is there a query language for JSON? JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
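    As one concrete illustration of the kind of tool being asked about (jq is a suggestion here; person.json and people.json are hypothetical files holding objects like the one above), the phone-number query can be expressed on the command line like this:

```sh
# pull a single field out of one person object
jq '.address.city' person.json

# average number of phone numbers per person, across a file of person objects
jq -s 'map(.phoneNumber | length) | add / length' people.json
```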

    Read the article

  • Why is processing a sorted array faster than an unsorted array?

    - by GManNickG
    Here is a piece of code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x: #include <algorithm> #include <ctime> #include <iostream> int main() { // generate data const unsigned arraySize = 32768; int data[arraySize]; for (unsigned c = 0; c < arraySize; ++c) data[c] = std::rand() % 256; // !!! with this, the next loop runs faster std::sort(data, data + arraySize); // test clock_t start = clock(); long long sum = 0; for (unsigned i = 0; i < 100000; ++i) { // primary loop for (unsigned c = 0; c < arraySize; ++c) { if (data[c] >= 128) sum += data[c]; } } double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC; std::cout << elapsedTime << std::endl; std::cout << "sum = " << sum << std::endl; } Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds. With the sorted data, the code runs in 1.93 seconds. Initially I thought this might be just a language or compiler anomaly. So I tried it Java... import java.util.Arrays; import java.util.Random; public class Main { public static void main(String[] args) { // generate data int arraySize = 32768; int data[] = new int[arraySize]; Random rnd = new Random(0); for (int c = 0; c < arraySize; ++c) data[c] = rnd.nextInt() % 256; // !!! with this, the next loop runs faster Arrays.sort(data); // test long start = System.nanoTime(); long sum = 0; for (int i = 0; i < 100000; ++i) { // primary loop for (int c = 0; c < arraySize; ++c) { if (data[c] >= 128) sum += data[c]; } } System.out.println((System.nanoTime() - start) / 1000000000.0); System.out.println("sum = " + sum); } } with a similar but less extreme result. My first thought was that sorting brings the data into cache, but my next thought was how silly that is because the array was just generated. What is going on? Why is a sorted array faster than an unsorted array? The code is summing up some independent terms, the order should not matter.

    Read the article

  • AJAX Generated Select Won't Redirect

    - by James
    So, basically I have this select / drop down menu that I use AJAX to retrieve and create, though when an option is selected (so onChange) I want it to redirect! Though, this still isn't working, I don't get any errors thrown when trying, and tried to do alert() debug methods yet the alerts don't get called. jquery $("#schoolMenu").change(function() { option = $("#schoolMenu option:selected").text(); alert(option); if(option != "- Please Select -") { window.location = "http://www.itmustbecollege.com/pics/pics-from-"+$("#schoolMenu option:selected").val(); } }); This is what is used to call the AJAX // // Populate Schools // $("#state").change(function() { option = $("#state option:selected").text(); if($(this).attr("class") == "menu") { if(option != "- Please Select -") { $.ajax({ type: "GET", url: "/includes/functions.php", data: "f=school&p="+option+"&m=yes", success: function(msg) { $("#fSchool").html("<p style=\"margin-left: 20px;\">Select School:</p>"+ msg); $("#fSchool").show(); $("#school").focus(); } }); } else { $("#fSchool").html(""); $("#fSchool").hide(); } } else { if(option != "- Please Select -") { $.ajax({ type: "GET", url: "/includes/functions.php", data: "f=school&p="+option, success: function(msg) { $("#fSchool").html(msg); $("#fSchool").show(); $("#school").focus(); } }); } else { $("#fSchool").html(""); $("#fSchool").hide(); } } }); It loads perfectly, if you look at http://www.itmustbecollege.com/quotes/ on that bar where you can do "sort by" of Popular | Newest | Category | School if you hover over school a dropdown comes up, select any country and another one appears, though when that is changed nothing happens. here is the PHP for that second drop down // Get College List function getCollege($state, $m = "no", $l = "no") { // Displays Schools if($m == "yes") { $options = '<select id="schoolMenu" name="school"><option value="select" selected="selected">- Please Select -</option>'; } else if($l == "yes" || $l == "yes2") { $options = ''; } else { $options = '<select name="school"><option value="select" selected="selected">- Please Select -</option>'; } $listArray = split("\|\\\\", $list); for($i=0; $i < count($listArray); $i++) { if($m == "yes") { $options .= '<option value="'. trim(titleReplace($listArray[$i])) .'">'. trim($listArray[$i]) .'</option>'; } else if($l == "yes") { $options .= '<li><a href="/quotes/quotes-from-'. titleReplace($listArray[$i]) .'" title="'. trim($listArray[$i]) .' Quotes">'. trim($listArray[$i]) .'</a></li>'; } else if($l == "yes2") { $options .= '<li><a href="/pics/pics-from-'. titleReplace($listArray[$i]) .'" title="'. trim($listArray[$i]) .' Pictures">'. trim($listArray[$i]) .'</a></li>'; } else { $options .= '<option value="'. trim($listArray[$i]) .'">'. trim($listArray[$i]) .'</option>'; } } echo $options .='</select>'; return false; } any help would be great! EDIT: Also, the way I have those drop downs coming for the menus is a bit weird and when you hover over any other "sort by" link they disappear, this is a problem with the "sort by school" because the first select box shows the list up, and if you go and select a school then somehow float over another link it disappears, any help on how to delay that or fix that minor problem?

    Read the article

  • Sorting an XML file through XSL

    - by jbugeja
    I have an XML file that I want to sort by an attribute. The file is structured as shown below: <wb xmlns:cf="http://www.macromedia.com/2004/cfform"> <a:form name="chart"> <a:input FIELDNUMBER="09" INDEX="2" LEFT="200" /> <a:input FIELDNUMBER="08" INDEX="3" LEFT="200" /> <a:fieldset FIELD="a" FIELDNAME="FieldSet1"> <a:input FIELDNUMBER="02" INDEX="4" LEFT="200" /> <a:select1 FIELDNUMBER="01" /> </a:fieldset> <a:fieldset FIELD="b" FIELDNAME="FieldSet1"> <a:input FIELDNUMBER="04" INDEX="7" LEFT="200" /> <a:select1 FIELDNUMBER="03" /> <a:fieldset FIELD="c" FIELDNAME="FieldSet1"> <a:input FIELDNUMBER="06" INDEX="8" LEFT="200" /> <a:input FIELDNUMBER="05" INDEX="6" LEFT="200" /> </a:fieldset> </a:fieldset> </a:form> </wb> I would like to sort the above XML all throughout by @fieldnumber, but at the same I want to keep the same structure of the XML. I have managed to sort other XML file but they did not have such nesting levels. Is this possible with XSL alone and if so how can this be done? The output should be as follows: <wb xmlns:cf="http://www.macromedia.com/2004/cfform"> <a:form name="chart"> <a:input FIELDNUMBER="08" INDEX="3" LEFT="200" /> <a:input FIELDNUMBER="09" INDEX="2" LEFT="200" /> <a:fieldset FIELD="a" FIELDNAME="FieldSet1"> <a:select1 FIELDNUMBER="01" /> <a:input FIELDNUMBER="02" INDEX="4" LEFT="200" /> </a:fieldset> <a:fieldset FIELD="b" FIELDNAME="FieldSet1"> <a:select1 FIELDNUMBER="03" /> <a:input FIELDNUMBER="04" INDEX="7" LEFT="200" /> <a:fieldset FIELD="c" FIELDNAME="FieldSet1"> <a:input FIELDNUMBER="05" INDEX="6" LEFT="200" /> <a:input FIELDNUMBER="06" INDEX="8" LEFT="200" /> </a:fieldset> </a:fieldset> </a:form> </wb> As another example, should the FIELDNUMBER 04 be changed to a value greater than 7 such as 10 (let's assume 10 in this example) then the output of the fieldset with FIELD value b becomes: <a:fieldset FIELD="b" FIELDNAME="FieldSet1"> <a:select1 FIELDNUMBER="03" /> <a:fieldset FIELD="c" FIELDNAME="FieldSet1"> <a:input FIELDNUMBER="05" INDEX="6" LEFT="200" /> <a:input FIELDNUMBER="06" INDEX="8" LEFT="200" /> </a:fieldset> <a:input FIELDNUMBER="10" INDEX="7" LEFT="200" /> </a:fieldset>

    Read the article

  • Can I retrieve objects from a complex query that limits results to fields from a single table?

    - by Sean Redmond
    I have a model whose rows I always want to sort based on the values in another associated model and I was thinking that the way to implement this would be to use set_dataset in the model. This is causing query results to be returned as hashes rather than objects, though, so none of the methods from the class can be used when iterating over the dataset. I basically have two classes class SortFields < Sequel::Model(:sort_fields) set_primary_key :objectid end class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid end Some backstory: the data is imported from a legacy system into mysql. The values in sort_fields are calculated from multiple other associated tables (some one-to-many, some many-to-many) according to some complicated rules. The likely solution will be to just add the values in sort_fields to items (I want to keep the imported data separate from the calculated data, but I don't have to). First, though, I just want to understand how far you can go with a dataset and still get objects rather than hashes. If I set the dataset to sort on a field in items like so class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset(order(:sortnumber)) end then the expected clause is added to the generated SQL, e.g.: >> Items.limit(1).sql => "SELECT * FROM `items` ORDER BY `sortnumber` LIMIT 1" and queries still return objects: >> Items.limit(1).first.class => Items If I order it by the associated fields though... class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). order(:sort1, :sort2, :sort3) ) end ...I get hashes ?> Items.limit(1).first.class => Hash My first thought was that this happens because all fields from sort_fields are included in the results and maybe if selected only the fields from items I would get Items objects again: class Items < Sequel::Model(:items) set_primary_key :objectid one_to_one :sort_fields, :class => SortFields, :key => :objectid set_dataset( eager_graph(:sort_fields). select(:items.*). order(:sort1, :sort2, :sort3) ) end The generated SQL is what I would expect: >> Items.limit(1).sql => "SELECT `items`.* FROM `items` LEFT OUTER JOIN `sort_fields` ON (`sort_fields`.`objectid` = `items`.`objectid`) ORDER BY `sort1`, `sort2`, `sort3` LIMIT 1" It returns the same rows as the set_dataset(order(:sortnumber)) version but it still doesn't work: >> Items.limit(1).first.class => Hash Before I add the sort fields to the items table so that they can all live happily in the same model, is there a way to tell Sequel to return on object when it wants to return a hash?

    Read the article

  • JavaScript Optimisation

    - by Jayie
    I am using JavaScript to work out all the combinations of badminton doubles matches from a given list of players. Each player teams up with everyone else. EG. If I have the following players a, b, c & d. Their combinations can be: a & b V c & d a & c V b & d a & d V b & c I am using the code below, which I wrote to do the job, but it's a little inefficient. It loops through the PLAYERS array 4 times finding every single combination (including impossible ones). It then sorts the game out into alphabetical order and stores it in the GAMES array if it doesn't already exist. I can then use the first half of the GAMES array to list all game combinations. The trouble is if I have any more than 8 players it runs really slowly because the combination growth is exponential. Does anyone know a better way or algorithm I could use? The more I think about it the more my brain hurts! var PLAYERS = ["a", "b", "c", "d", "e", "f", "g"]; var GAMES = []; var p1, p2, p3, p4, i1, i2, i3, i4, entry, found, i; var pos = 0; var TEAM1 = []; var TEAM2 = []; // loop through players 4 times to get all combinations for (i1 = 0; i1 < PLAYERS.length; i1++) { p1 = PLAYERS[i1]; for (i2 = 0; i2 < PLAYERS.length; i2++) { p2 = PLAYERS[i2]; for (i3 = 0; i3 < PLAYERS.length; i3++) { p3 = PLAYERS[i3]; for (i4 = 0; i4 < PLAYERS.length; i4++) { p4 = PLAYERS[i4]; if ((p1 != p2 && p1 != p3 && p1 != p4) && (p2 != p1 && p2 != p3 && p2 != p4) && (p3 != p1 && p3 != p2 && p3 != p4) && (p4 != p1 && p4 != p2 && p4 != p3)) { // sort teams into alphabetical order (so we can compare them easily later) TEAM1[0] = p1; TEAM1[1] = p2; TEAM2[0] = p3; TEAM2[1] = p4; TEAM1.sort(); TEAM2.sort(); // work out the game and search the array to see if it already exists entry = TEAM1[0] + " & " + TEAM1[1] + " v " + TEAM2[0] + " & " + TEAM2[1]; found = false; for (i=0; i < GAMES.length; i++) { if (entry == GAMES[i]) found = true; } // if the game is unique then store it if (!found) { GAMES[pos] = entry; document.write((pos+1) + ": " + GAMES[pos] + "<br>"); pos++; } } } } } } Thanks in advance. Jason.
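    For comparison, the same set of games can be enumerated combinatorially instead of with four nested loops plus a duplicate check: choose 4 of the n players, then split each group of four into its three possible pairings. Here is a rough sketch of that idea in Python (itertools), not a drop-in replacement for the JavaScript above:

    from itertools import combinations

    players = ["a", "b", "c", "d", "e", "f", "g"]

    games = []
    for p1, p2, p3, p4 in combinations(players, 4):
        # Every group of four players yields exactly three distinct matches.
        games.append(((p1, p2), (p3, p4)))
        games.append(((p1, p3), (p2, p4)))
        games.append(((p1, p4), (p2, p3)))

    for i, (t1, t2) in enumerate(games, 1):
        print(f"{i}: {t1[0]} & {t1[1]} v {t2[0]} & {t2[1]}")

    This produces C(n,4) * 3 games directly (105 for 7 players, 210 for 8), so no sorting or duplicate filtering is needed and the work grows with the number of games rather than with n to the fourth power.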

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions. DO What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. Smart Columns When building a view include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask). Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. 
The discoverability of the Views menu isn’t overly obvious and if you violate the rule of columns (see Horizontally Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way. Sort It! Views are great so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example when you’re looking at a view how do you know if it’s ascending or descending and what is it sorted on. Maybe it’s sorted using two fields so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy just make sure you keep the name consistent with the description. In the previous example their better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is it’s own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list. DO NOT Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you include it. Colour it. Make it shine and stand like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful so feel free to head over and check out this blog post by Paul Grenier on the task (Warning code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again if it’s important then include it but consider the jQuery solution above to make it functional. It’s added by default to views and one of things that people forget to clean up. Horizontal Scrolling Screen real estate is premium so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically let alone horizontally so don’t make it even that more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again views are your friend. Consider splitting up the data into views where one view contains 10 columns and other view contains the other 10. Okay, maybe your information doesn’t work that way but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information”. 
The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once. Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents. Duplicate Information Duplication is never good. Views and great as you can group data together. For example create a view of project status reports grouped by author. Then you can see what project manager is being a dip and not submitting their report. However if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author do you need both. Horizontal real estate is always at a premium so try not to clutter up the view with duplicate data like this. Oh  yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at”. That’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again it’s just managing the information you have. Redundant, See Redundant This partially relates to duplicate information and smart columns but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc. then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!

    Read the article

  • Best way to detect integer overflow in C/C++

    - by Chris Johnson
    I was writing a program in C++ to find all solutions of a^b = c (a to the power of b), where a, b and c together use all the digits 0-9 exactly once. The program looped over values of a and b, and ran a digit-counting routine each time on a, b and a^b to check if the digits condition was satisfied. However, spurious solutions can be generated when a^b overflows the integer limit. I ended up checking for this using code like: unsigned long b, c, c_test; ... c_test=c*b; // Possible overflow if (c_test/b != c) {/* There has been an overflow*/} else c=c_test; // No overflow Is there a better way of testing for overflow? I know that some chips have an internal flag that is set when overflow occurs, but I've never seen it accessed through C or C++.
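    A common alternative is to test before multiplying rather than after: for unsigned types, a * b overflows exactly when b != 0 and a > MAX / b. The sketch below illustrates that check in Python with an explicit 64-bit limit (Python integers never overflow, so the constant only stands in for the C limit); it shows the idea, not actual C++ code.

    ULONG_MAX = 2**64 - 1  # stand-in for the C limit of a 64-bit unsigned long

    def mul_would_overflow(a, b):
        """True if a * b would exceed ULONG_MAX, checked before multiplying."""
        return b != 0 and a > ULONG_MAX // b

    print(mul_would_overflow(2**63, 2))  # True: the product needs 65 bits
    print(mul_would_overflow(2**32, 2))  # False: well within range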

    Read the article

  • Social Game Mechanics in Django

    - by oliland
    I want users to receive 'points' for completing various tasks in my application, ranging from tagging objects to making friends. I haven't yet found a Django application that simplifies this. At the moment I'm thinking that the best way to accumulate points is that each user action creates the equivalent of a "stream item", and the points are calculated by summing the value of each action published to their stream. Obviously social game mechanics is a huge area with a lot of research going on at the moment. But from a development perspective, what's the easiest way to get started? Am I on the wrong track, or are there better or simpler ways?
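    A minimal sketch of that stream-item idea in Django (the model and field names here are made up for illustration, not an existing app): store one row per action with a point value, and derive the score by summing.

    from django.conf import settings
    from django.db import models
    from django.db.models import Sum

    class Action(models.Model):
        """One stream item per user action; points are assigned when it is recorded."""
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        verb = models.CharField(max_length=50)        # e.g. "tagged_object", "made_friend"
        points = models.PositiveIntegerField(default=0)
        created = models.DateTimeField(auto_now_add=True)

    def score_for(user):
        # Total points a user has accumulated across all recorded actions.
        return Action.objects.filter(user=user).aggregate(total=Sum('points'))['total'] or 0

    Keeping one row per action preserves the activity stream itself; a cached total can always be denormalised onto the user profile later if the sum becomes too slow.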

    Read the article

  • Delphi Phrase Count / Keyword Density

    - by Brad
    Does anyone know how to count, or have some code for counting, the number of unique phrases in a document (single words, two-word phrases, three-word phrases)? Thanks. Example of what I'm looking for: I have a text document, and I need to see what the most popular word phrases are. Example text: I took the car to the car wash. I : 1 took : 1 the : 2 car : 2 to : 1 wash : 1 I took : 1 took the : 1 the car : 2 car to : 1 to the : 1 car wash : 1 I took the : 1 took the car : 1 the car to : 1 car to the : 1 to the car : 1 the car wash : 1 I took the car to : 1 took the car to the : 1 the car to the car : 1 car to the car wash : 1 I need each phrase and the count of how many times it shows up. Any help would be appreciated. The closest thing I found to this was a PHP script from http://tools.seobook.com/general/keyword-density/source.php. I used to have some code for this, but I cannot find it.
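    Not Delphi, but the tallying itself is small enough to sketch; here it is in Python, splitting the text into words and counting every run of one, two and three consecutive words. A Delphi version would follow the same shape: tokenise into a word list, then accumulate counts in a dictionary keyed by the joined phrase.

    import re
    from collections import Counter

    def phrase_counts(text, max_len=3):
        # Lowercased word list; keeps apostrophes inside words, drops other punctuation.
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter()
        for n in range(1, max_len + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
        return counts

    for phrase, count in phrase_counts("I took the car to the car wash.").most_common():
        print(phrase, ":", count)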

    Read the article

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you’ve ever done spatial work with SQL Server, I hope you’ve come across the ‘nearest’ problem. You have five thousand stores around the world, and you want to identify the one that’s closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don’t want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let’s do this (but I’m going to use addresses in AdventureWorks2012, as I don’t have a list of stores). Oh, and before I do, let’s make sure we have a spatial index in place. I’m going to use the default options. CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation); And my actual query: WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Great! This is definitely working. I know both those City locations, even if the AddressLine1s don’t quite ring a bell. I’m sure I’ll be able to find them next time I’m in the area. But of course what I’m concerned about from a querying perspective is what’s happened behind the scenes – the execution plan. This isn’t pretty. It’s not using my index. It’s sucking every row out of the Address table TWICE (which sucks), and then it’s sorting them by the distance to find the smallest one. It’s not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that’s pretty neat. But yeah – users of my nifty website aren’t going to like how long that query takes. The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the ‘nearest’ problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I’m not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it’s split the data up into 4 threads, but it’s still slow, and not using my index. It’s disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it’s a thing of beauty! Part of the plan is here: It’s massive, and it’s ugly, and it uses a TVF… but it’s quick. The way it works is to hook into the GeodeticTessellation function, which is essentially finds where the point is, and works out through the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want. 
You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that’s going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn’t using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error: Msg 8622, Level 16, State 1, Line 1 Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. And I’m almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post. WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT     l.Name,     COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),     COALESCE(a1.City,a2.City,a3.City),     s.Name AS [State],     c.Name AS Country FROM MyLocations AS l OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a1 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000     AND a1.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a2 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000     AND a2.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a3 JOIN Person.StateProvince AS s     ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID) JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; But this isn’t friendly-looking at all, and I’d use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I’m dealing with SQL 2012 (and later) versions. So why isn’t my query doing what it’s supposed to? Remember the query... WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Well, I just wasn’t reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index: A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses. The TOP clause cannot contain a PERCENT statement. The WHERE clause must contain a STDistance() method. 
If there are multiple predicates in the WHERE clause then the predicate containing STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause. The first expression in the ORDER BY clause must use the STDistance() method. Sort order for the first STDistance() expression in the ORDER BY clause must be ASC. All the rows for which STDistance returns NULL must be filtered out. Let’s start from the top. 1. Needs a spatial index on one of the columns that’s in the STDistance call. Yup, got the index. 2. No ‘PERCENT’. Yeah, I don’t have that. 3. The WHERE clause needs to use STDistance(). Ok, but I’m not filtering, so that should be fine. 4. Yeah, I don’t have multiple predicates. 5. The first expression in the ORDER BY is my distance, that’s fine. 6. Sort order is ASC, because otherwise we’d be starting with the ones that are furthest away, and that’s tricky. 7. All the rows for which STDistance returns NULL must be filtered out. But I don’t have any NULL values, so that shouldn’t affect me either. ...but something’s wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – “STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match.” So if I simply make sure that I’m filtering out the rows that return NULL… …then it’s blindingly fast, I get the right results, and I’ve got the complex-but-brilliant plan that I wanted. It just wasn’t overly intuitive, despite being documented. @rob_farley

    Read the article

  • How to count each digit in a range of integers?

    - by Carlos Gutiérrez
    Imagine you sell those metallic digits used to number houses, locker doors, hotel rooms, etc. You need to find how many of each digit to ship when your customer needs to number doors/houses: 1 to 100 51 to 300 1 to 2,000 with zeros to the left The obvious solution is to do a loop from the first to the last number, convert the counter to a string with or without zeros to the left, extract each digit and use it as an index to increment an array of 10 integers. I wonder if there is a better way to solve this, without having to loop through the entire integers range. Solutions in any language or pseudocode are welcome. Edit: Answers review John at CashCommons and Wayne Conrad comment that my current approach is good and fast enough. Let me use a silly analogy: If you were given the task of counting the squares in a chess board in less than 1 minute, you could finish the task by counting the squares one by one, but a better solution is to count the sides and do a multiplication, because you later may be asked to count the tiles in a building. Alex Reisner points to a very interesting mathematical law that, unfortunately, doesn’t seem to be relevant to this problem. Andres suggests the same algorithm I’m using, but extracting digits with %10 operations instead of substrings. John at CashCommons and phord propose pre-calculating the digits required and storing them in a lookup table or, for raw speed, an array. This could be a good solution if we had an absolute, unmovable, set in stone, maximum integer value. I’ve never seen one of those. High-Performance Mark and strainer computed the needed digits for various ranges. The result for one millon seems to indicate there is a proportion, but the results for other number show different proportions. strainer found some formulas that may be used to count digit for number which are a power of ten. Robert Harvey had a very interesting experience posting the question at MathOverflow. One of the math guys wrote a solution using mathematical notation. Aaronaught developed and tested a solution using mathematics. After posting it he reviewed the formulas originated from Math Overflow and found a flaw in it (point to Stackoverflow :). noahlavine developed an algorithm and presented it in pseudocode. A new solution After reading all the answers, and doing some experiments, I found that for a range of integer from 1 to 10n-1: For digits 1 to 9, n*10(n-1) pieces are needed For digit 0, if not using leading zeros, n*10n-1 - ((10n-1) / 9) are needed For digit 0, if using leading zeros, n*10n-1 - n are needed The first formula was found by strainer (and probably by others), and I found the other two by trial and error (but they may be included in other answers). For example, if n = 6, range is 1 to 999,999: For digits 1 to 9 we need 6*105 = 600,000 of each one For digit 0, without leading zeros, we need 6*105 – (106-1)/9 = 600,000 - 111,111 = 488,889 For digit 0, with leading zeros, we need 6*105 – 6 = 599,994 These numbers can be checked using High-Performance Mark results. Using these formulas, I improved the original algorithm. It still loops from the first to the last number in the range of integers, but, if it finds a number which is a power of ten, it uses the formulas to add to the digits count the quantity for a full range of 1 to 9 or 1 to 99 or 1 to 999 etc. 
Here's the algorithm in pseudocode: integer First,Last //First and last number in the range integer Number //Current number in the loop integer Power //Power is the n in 10^n in the formulas integer Nines //Nines is the resut of 10^n - 1, 10^5 - 1 = 99999 integer Prefix //First digits in a number. For 14,200, prefix is 142 array 0..9 Digits //Will hold the count for all the digits FOR Number = First TO Last CALL TallyDigitsForOneNumber WITH Number,1 //Tally the count of each digit //in the number, increment by 1 //Start of optimization. Comments are for Number = 1,000 and Last = 8,000. Power = Zeros at the end of number //For 1,000, Power = 3 IF Power 0 //The number ends in 0 00 000 etc Nines = 10^Power-1 //Nines = 10^3 - 1 = 1000 - 1 = 999 IF Number+Nines <= Last //If 1,000+999 < 8,000, add a full set Digits[0-9] += Power*10^(Power-1) //Add 3*10^(3-1) = 300 to digits 0 to 9 Digits[0] -= -Power //Adjust digit 0 (leading zeros formula) Prefix = First digits of Number //For 1000, prefix is 1 CALL TallyDigitsForOneNumber WITH Prefix,Nines //Tally the count of each //digit in prefix, //increment by 999 Number += Nines //Increment the loop counter 999 cycles ENDIF ENDIF //End of optimization ENDFOR SUBROUTINE TallyDigitsForOneNumber PARAMS Number,Count REPEAT Digits [ Number % 10 ] += Count Number = Number / 10 UNTIL Number = 0 For example, for range 786 to 3,021, the counter will be incremented: By 1 from 786 to 790 (5 cycles) By 9 from 790 to 799 (1 cycle) By 1 from 799 to 800 By 99 from 800 to 899 By 1 from 899 to 900 By 99 from 900 to 999 By 1 from 999 to 1000 By 999 from 1000 to 1999 By 1 from 1999 to 2000 By 999 from 2000 to 2999 By 1 from 2999 to 3000 By 1 from 3000 to 3010 (10 cycles) By 9 from 3010 to 3019 (1 cycle) By 1 from 3019 to 3021 (2 cycles) Total: 28 cycles Without optimization: 2,235 cycles Note that this algorithm solves the problem without leading zeros. To use it with leading zeros, I used a hack: If range 700 to 1,000 with leading zeros is needed, use the algorithm for 10,700 to 11,000 and then substract 1,000 - 700 = 300 from the count of digit 1. Benchmark and Source code I tested the original approach, the same approach using %10 and the new solution for some large ranges, with these results: Original 104.78 seconds With %10 83.66 With Powers of Ten 0.07 A screenshot of the benchmark application: If you would like to see the full source code or run the benchmark, use these links: Complete Source code (in Clarion): http://sca.mx/ftp/countdigits.txt Compilable project and win32 exe: http://sca.mx/ftp/countdigits.zip Accepted answer noahlavine solution may be correct, but l just couldn’t follow the pseudo code, I think there are some details missing or not completely explained. Aaronaught solution seems to be correct, but the code is just too complex for my taste. I accepted strainer’s answer, because his line of thought guided me to develop this new solution.
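    Those closed forms are easy to sanity-check by brute force for small n; here is a short verification sketch in Python (illustrative only, not part of the benchmark above).

    def brute_counts(n):
        """Count every digit over 1 .. 10**n - 1, without leading zeros."""
        counts = [0] * 10
        for number in range(1, 10**n):
            for ch in str(number):
                counts[int(ch)] += 1
        return counts

    for n in (2, 3, 4):
        counts = brute_counts(n)
        nonzero = n * 10**(n - 1)                      # formula for digits 1..9
        zero = n * 10**(n - 1) - (10**n - 1) // 9      # formula for digit 0, no leading zeros
        assert all(c == nonzero for c in counts[1:])
        assert counts[0] == zero
        print(n, nonzero, zero)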

    Read the article

  • Theme confusion in SpreadsheetML.

    - by dmaruca
    I've been fighting this all day. Inside my styles.xml file I have color information given like so: <fgColor theme="0" tint="-0.249977111117893" / ECMA 376 defines a theme color reference as: Index into the <clrScheme collection, referencing a particular <sysClr or <srgbClr value expressed in the Theme part. Ok, that sounds easy. Here is an excerpt from my clrScheme xml: <a:clrScheme name="Office" <a:dk1 <a:sysClr val="windowText" lastClr="000000" / </a:dk1 <a:lt1 <a:sysClr val="window" lastClr="FFFFFF" / </a:lt1 Index zero is black, and they are wanting to darken it? I can tell you that after the tint is applied, the color should be #F2F2F2. My confusion is what does theme="0" really mean? It can't possible mean to darken #000000. Checking MSDN only confuses me even more. From http://msdn.microsoft.com/en-us/library/dd560821.aspx note that the theme color integer begins counting from left to right in the palette starting with zero. Theme color 3 is the dark 2 text/background color. Actually, if you start counting at zero the third entry is Light 2. Dark 2 is the second one. Can anyone here shed some light on this subject for me? What does theme="0" really mean? Here is the VB6 code I have been working with to apply the tint. You can paste it into your vba editor and run the test sub. Public Type tRGB R As Byte G As Byte B As Byte End Type Public Type tHSL H As Double S As Double L As Double End Type Sub TestRgbTint() Dim c As tRGB RGB_Hex2Type "ffffff", c RGB_ApplyTint c, -0.249977111117893 Debug.Print Hex(c.R) & Hex(c.G) & Hex(c.B) End Sub Public Sub RGB_Hex2Type(ByVal HexString As String, RGB As tRGB) 'Remove the alpha channel if it exists If Len(HexString) = 8 Then HexString = mID(HexString, 3) End If RGB.R = CByte("&H" & Left(HexString, 2)) RGB.G = CByte("&H" & mID(HexString, 3, 2)) RGB.B = CByte("&H" & Right(HexString, 2)) End Sub Public Sub RGB_ApplyTint(RGB As tRGB, tint As Double) Const HLSMAX = 1# Dim HSL As tHSL If tint = 0 Then Exit Sub RGB2HSL RGB, HSL If tint < 0 Then HSL.L = HSL.L * (1# + tint) Else HSL.L = HSL.L * (1# - tint) + (HLSMAX - HLSMAX * (1# - tint)) End If HSL2RGB HSL, RGB End Sub Public Sub HSL2RGB(HSL As tHSL, RGB As tRGB) HSL2RGB_ByVal HSL.H, HSL.S, HSL.L, RGB End Sub Private Sub HSL2RGB_ByVal(ByVal H As Double, ByVal S As Double, ByVal L As Double, RGB As tRGB) Dim v As Double Dim R As Double, G As Double, B As Double 'Default color to gray R = L G = L B = L If L < 0.5 Then v = L * (1# + S) Else v = L + S - L * S End If If v > 0 Then Dim m As Double, sv As Double Dim sextant As Integer Dim fract As Double, vsf As Double, mid1 As Double, mid2 As Double m = L + L - v sv = (v - m) / v H = H * 6# sextant = Int(H) fract = H - sextant vsf = v * sv * fract mid1 = m + vsf mid2 = v - vsf Select Case sextant Case 0 R = v G = mid1 B = m Case 1 R = mid2 G = v B = m Case 2 R = m G = v B = mid1 Case 3 R = m G = mid2 B = v Case 4 R = mid1 G = m B = v Case 5 R = v G = m B = mid2 End Select End If RGB.R = R * 255# RGB.G = G * 255# RGB.B = B * 255# End Sub Public Sub RGB2HSL(RGB As tRGB, HSL As tHSL) Dim R As Double, G As Double, B As Double Dim v As Double, m As Double, vm As Double Dim r2 As Double, g2 As Double, b2 As Double R = RGB.R / 255# G = RGB.G / 255# B = RGB.B / 255# 'Default to black HSL.H = 0 HSL.S = 0 HSL.L = 0 v = IIf(R > G, R, G) v = IIf(v > B, v, B) m = IIf(R < G, R, G) m = IIf(m < B, m, B) HSL.L = (m + v) / 2# If HSL.L < 0 Then Exit Sub End If vm = v - m HSL.S = vm If HSL.S > 0 Then If HSL.L <= 0.5 Then HSL.S = HSL.S / (v + m) Else HSL.S = HSL.S / (2# - v - 
m) End If Else Exit Sub End If r2 = (v - R) / vm g2 = (v - G) / vm b2 = (v - B) / vm If R = v Then If G = m Then HSL.H = 5# + b2 Else HSL.H = 1# - g2 End If ElseIf G = v Then If B = m Then HSL.H = 1# + r2 Else HSL.H = 3# - b2 End If Else If R = m Then HSL.H = 3# + g2 Else HSL.H = 5# - r2 End If End If HSL.H = HSL.H / 6# End Sub
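    Setting aside which theme slot index 0 actually refers to, the tint arithmetic itself is quick to check. For a neutral colour the HSL lightness is just the channel value, so the negative-tint rule L * (1 + tint) applied to FFFFFF with a tint of about -0.25 gives roughly 0.75 * 255, about 191, i.e. BFBFBF (the standard "darker 25%" shade), while F2F2F2 would correspond to a tint of roughly -0.05. A small check of that arithmetic in Python (illustrative only, and only valid for grays where R = G = B):

    def tint_gray(hex_gray, tint):
        """Apply the Office tint rule to a neutral RRGGBB colour."""
        lightness = int(hex_gray[:2], 16) / 255.0
        if tint < 0:
            lightness *= (1.0 + tint)
        else:
            lightness = lightness * (1.0 - tint) + tint
        channel = int(round(lightness * 255))
        return "{:02X}".format(channel) * 3

    print(tint_gray("FFFFFF", -0.249977111117893))  # BFBFBF
    print(tint_gray("FFFFFF", -0.05))               # F2F2F2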

    Read the article

  • filling array gradually with data from user

    - by neville
    I'm trying to fill an array with words input by the user. Each word must be one letter longer than the previous one and one letter shorter than the next. Each word's length equals its row index in the table, counting from 2. The words will finally form a one-sided pyramid, like: A AB ABC ABCD Scanner sc = new Scanner(System.in); System.out.println("Give the height of array: "); String[] words = new String[height]; for(int i=2; i<height+2; i++){ System.out.println("Give word with "+i+" letters."); words[i-2] = sc.next(); while( words[i-2].length()>i-2 || words[i-2].length()<words[i-3].length() ){ words[i-2] = sc.next(); } } Currently the while loop doesn't affect the scanner input at all :/

    Read the article

  • PHP mysql select results

    - by Jordan Pagaduan
    <?php $sql = " SELECT e.*, l.counts AS l_counts, l.date AS l_date, lo.counts AS lo_counts, lo.date AS lo_date FROM employee e LEFT JOIN logs l ON l.employee_id = e.employee_id LEFT JOIN logout lo ON lo.employee_id = e.employee_id WHERE e.employee_id =" .(int)$_GET['salary']; $query = mysql_query($sql); $rows = mysql_fetch_array($query); while($countlog = $rows['l_counts']) { echo $countlog; } echo $rows['first_name']; echo $rows['last_name_name']; ?> I get what I want for first_name and last_name (only one result each). For l_counts I wanted to loop over the values, but the loop never stops counting until my PC needs a restart. LoL. How can I get the correct result? I only need the exact results of l_counts. Thank you Jordan Pagaduan

    Read the article

  • What language has the longest "Hello world" program?

    - by Kip
    In most scripting languages, a "Hello world!" application is very short: print "Hello world" In C++, it is a little more complicated, requiring at least 46 non-whitespace characters: #include <cstdio> int main() { puts("Hello world"); } Java, at 75 non-whitespace characters, is even more verbose: class A { public static void main(String[] args) { System.out.print("Hello world"); } } Are there any languages that require even more non-whitespace characters than Java? Which language requires the most? Notes: I'm asking about the length of the shortest possible "hello world" application in a given language. A newline after "Hello world" is not required. I'm not counting whitespace, but I know there is some language that uses only whitespace characters. If you use that one you can count the whitespace characters.

    Read the article

  • Detecting (on the server side) when a Flex client disconnects from BlazeDS destination

    - by Alex Curtis
    Hi all, I'd like to know whether it's possible to easily detect (on the server side) when Flex clients disconnect from a BlazeDS destination. My scenario is simply that I'd like to use this to figure out how long each of my clients is connected for each session. I need to be able to differentiate between clients as well (i.e. not just counting the number of currently connected clients, which I can see in ds-console). Whilst I could program an "I'm now logging out" process into my clients, I don't know whether it would fire if the client simply navigates away to another web page rather than going through said logout process. Can anyone suggest an easy way to do this type of monitoring on the server side? Many thanks, Alex

    Read the article

  • Update/Increment a single column on multiple rows at once

    - by Jordan Feldstein
    I'm trying to add rows to a table, keeping the order value of the newest row set to one and all other rows counting up from there. In this case, I add a new row with order=0, then use this query to increment the order of every row by one: "UPDATE favorits SET order = order+1" However, what happens is that all the rows end up with the same value. I get a stack of favorites all with order 6, for example, when it should be one with 1, the next with 2 and so on. How do I update these rows in a way that orders them the way they should be? Thanks, ~Jordan

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >