Search Results

Search found 298 results on 12 pages for 'ring'.

  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi All, I'm using Java 1.6, JTDS 1.2.2 (also just tried 1.2.4 to no avail) and SQL Server 2005 to create a CallableStatement to run a stored procedure (with no parameters). I am seeing the Java wrapper running the same stored procedure 30% slower than using SQL Server Management Studio. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query plan caching.

    The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table. I can't see how calling a stored proc from Java should add a 30% overhead; surely it's just a pipe to the database that SQL is sent down, and then the database executes it... Could the database be giving the Java app a different query plan? I've posted to both the MSDN forums and the SourceForge jTDS forums (topic: "stored proc slower in JTDS than direct in DB"). I was wondering if anyone has any suggestions as to why this might be happening? Thanks in advance, -James (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution.)

    Java code snippet:

    sLogger.info("Preparing call...");
    stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
    sLogger.info("Call prepared. Executing procedure...");
    stmt.executeQuery();
    sLogger.info("Procedure complete.");

    I have run SQL Profiler and found the following:

    Java app: CPU: 466,514  Reads: 142,478,387  Writes: 284,078  Duration: 983,796
    SSMS:     CPU: 466,973  Reads: 142,440,401  Writes: 280,244  Duration: 769,851

    (Both with DBCC DROPCLEANBUFFERS run prior to profiling, and both produce the correct number of rows.) So my conclusion is that they both execute the same reads and writes; it's just that the way they are doing it is different. What do you guys think?

    It turns out that the query plans are significantly different for the different clients (the Java client is updating an index during an insert that isn't in the faster SQL client; also, the way it is executing joins is different (nested loops vs. gather streams, nested loops vs. index scans, argh!)). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it).

    Epilogue: I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls etc.) between the Java and Management Studio clients. It ended up that the two different clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums, as I found differing performance not just between a JDBC client and Management Studio, but also between Microsoft's own command line client, SQLCMD. I also checked some more radical things, like network traffic, and wrapping the stored proc inside another stored proc, just for grins.

    I have a feeling the problem lies somewhere in the way the cursor was being executed, and it was somehow giving rise to the Java process being suspended, but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!). As a result, I have decided that 4 days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all data each week anyway), and chalk this one down to experience.
    I'll leave the question open. Big thanks to everyone who threw their hat in the ring - it was all useful, and if anyone comes up with anything further, I'd love to hear some more options... And if anyone finds this post as a result of seeing this behaviour in their own environments, then hopefully there are some pointers here that you can try yourself, and hopefully you will see further than we did. I'm ready for my weekend now! -James
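
    For anyone landing here with the same symptom: the "homogenising the connection properties" step from the epilogue, done from the Java side, looks roughly like the sketch below. SSMS turns ARITHABORT on by default while most drivers leave it off, which on its own is enough to get a separate plan-cache entry; the other SET options are the usual suspects. The class name, URL and credentials are placeholders - only the jTDS driver class and the procedure name come from the question - and whether this actually converges the plans in your environment is exactly what needs testing.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HomogeniseSetOptions {
        public static void main(String[] args) throws Exception {
            Class.forName("net.sourceforge.jtds.jdbc.Driver");
            // Placeholder URL/credentials; point this at the real server.
            Connection con = DriverManager.getConnection(
                    "jdbc:jtds:sqlserver://dbhost:1433/MyDatabase", "user", "password");
            try {
                // Align this session's SET options with the SSMS defaults so both
                // clients are more likely to share the same cached query plan.
                Statement s = con.createStatement();
                s.execute("SET ARITHABORT ON");
                s.execute("SET ANSI_NULLS ON");
                s.execute("SET ANSI_WARNINGS ON");
                s.execute("SET CONCAT_NULL_YIELDS_NULL ON");
                s.close();

                // Same call as in the question.
                CallableStatement stmt = con.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
                stmt.execute();
                stmt.close();
            } finally {
                con.close();
            }
        }
    }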

  • Placing Variables into an External Sheet

    - by Leslie Peer
    I'm trying to build an online D&D program which stores the character info in tables. My problem is that the game works just fine while you're playing, but as soon as you exit the game all variables are lost, which means you have to restart from scratch the next time you log on... So this is a two-fold question: one, what is the best type of external sheet to save the data to; and two, how do I access that sheet for saving and loading? Below are the variables:

    <SCRIPT>
    Name1="Tabor Bloomfield"; Name2="Sam Wrightfield"; Name3="Gavin Hartfild"; Name4="Gail Quickfoot"; Name5="Robert Gragorian"; Name6="Peter Shain";
    Class1="MagicUser"; Class2="Fighter"; Class3="Fighter"; Class4="Thief"; Class5="Cleric"; Class6="Fighter";
    Level1=23; Level2=1; Level3=1; Level4=2; Level5=2; Level6=1;
    Hpts1=145; Hpts2=14; Hpts3=13; Hpts4=8; Hpts5=12; Hpts6=15;
    Armor1="Robe of Protection +5"; Armor2="Splinted Armor"; Armor3="Chain Armor"; Armor4="Leather Armor"; Armor5="Chain Armor"; Armor6="Splinted Armor";
    Ac1a=5; Ac2a=3; Ac3a=3; Ac4a=4; Ac5a=2; Ac6a=3;
    Armor1b="Ring of Protection +5"; Armor2b="Small Shield"; Armor3b="Small Shield"; Armor4b="Wooden Shield"; Armor5b="Large Shield"; Armor6b="Small Shield";
    Ac1b=5; Ac2b=1; Ac3b=1; Ac4b=1; Ac5b=1; Ac6b=1;
    Str1=21; Str2=16; Str3=14; Str4=13; Str5=14; Str6=13;
    Int1=19; Int2=11; Int3=12; Int4=13; Int5=14; Int6=13;
    Wis1=18; Wis2=12; Wis3=14; Wis4=13; Wis5=14; Wis6=12;
    Dex1=19; Dex2=14; Dex3=13; Dex4=15; Dex5=14; Dex6=12;
    Con1=19; Con2=15; Con3=16; Con4=13; Con5=12; Con6=10;
    Chr1=21; Chr2=14; Chr3=13; Chr4=12; Chr5=14; Chr6=13;
    </SCRIPT>

    File name = "gamestats", path = "trellian Webpage/droves E and F/gamestats". I have tried an HTML page, JavaScript, and creating a separate table page and putting the variables into cells... but I'm at a loss as to how to arrive at a solution.

  • Build a gem with native extension (Gem::Installer::ExtensionBuildError)

    - by Arnaud Leymet
    I have the following configuration: uname -a : Linux 2.6.24.2 i686 GNU/Linux (Ubuntu) ruby -v : ruby 1.9.0 (2007-12-25 revision 14709) [i486-linux] rails -v : Rails 3.0.0.beta3 gem -v : 1.3.5 rake --version : rake, version 0.8.7 make -v : GNU Make 3.81 gem env : RUBYGEMS VERSION: 1.3.5 RUBY VERSION: 1.9.0 (2007-12-25 patchlevel 0) [i486-linux] INSTALLATION DIRECTORY: /usr/lib/ruby1.9/gems/1.9.0 RUBY EXECUTABLE: /usr/bin/ruby1.9 EXECUTABLE DIRECTORY: /usr/bin RUBYGEMS PLATFORMS: ruby x86-linux GEM PATHS: /usr/lib/ruby1.9/gems/1.9.0 /root/.gem/ruby/1.9.0 GEM CONFIGURATION: :update_sources = true :verbose = true :benchmark = false :backtrace = false :bulk_threshold = 1000 REMOTE SOURCES: http://gems.rubyforge.org/ And when I try this simple command: gem install nokogiri Here is what I get: # gem install nokogiri Building native extensions. This could take a while... ERROR: Error installing nokogiri: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9 extconf.rb checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes checking for xmlParseDoc() in -lxml2... yes checking for xsltParseStylesheetDoc() in -lxslt... yes checking for exsltFuncRegister() in -lexslt... yes checking for xmlRelaxNGSetParserStructuredErrors()... yes checking for xmlRelaxNGSetParserStructuredErrors()... yes checking for xmlRelaxNGSetValidStructuredErrors()... yes checking for xmlSchemaSetValidStructuredErrors()... yes checking for xmlSchemaSetParserStructuredErrors()... yes creating Makefile make cc -I. -I/usr/include/libxml2 -I/usr/include -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETPARSERSTRUCTUREDERRORS -I/opt/local/include/ -I/opt/local/include/libxml2 -I/opt/local/include -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -g -DXP_UNIX -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -o xml_document_fragment.o -c xml_document_fragment.c In the included file starting at ./nokogiri.h:75, From ./xml_document_fragment.h:4, From xml_document_fragment.c:1: ./xml_document.h:5:16: error: st.h : No file or folder with this type make: *** [xml_document_fragment.o] Error 1 Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1 for inspection. 
    Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1/ext/nokogiri/gem_make.out

    The "gem_make.out" file contains the exact same information as described above. If I try with another gem, gem install gherkin, here is what I get:

    # gem install gherkin
    Building native extensions. This could take a while...
    ERROR: Error installing gherkin:
    ERROR: Failed to build gem native extension.
    /usr/bin/ruby1.9 extconf.rb
    checking for main() in -lc... yes
    creating Makefile
    make
    cc -I. -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -o gherkin_lexer_ar.o -c gherkin_lexer_ar.c
    /Users/aslakhellesoy/scm/gherkin/tasks/../ragel/i18n/ar.c.rl:11:16: error: re.h: No such file or directory
    make: *** [gherkin_lexer_ar.o] Error 1
    Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30 for inspection.
    Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30/ext/gherkin_lexer_ar/gem_make.out

    In fact, whenever I try to install a gem with a native extension, I get the same type of error. Does that ring a bell for anyone?

  • packet mmap send packet format

    - by SeregASM
    I want to improve packet transmitting performance. Before this I used raw sockets, and now I am studying packet_mmap. I have packets (frames) which I captured with a kernel module on another PC and copied to the current PC, and now I want to retransmit them to a local interface, with forwarding to follow. I took a packet_mmap example and integrated it into my project. This is how I set up the socket and the TX ring:

    fd_socket = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    memset(&my_addr, 0, sizeof(struct sockaddr_ll));
    my_addr.sll_family = PF_PACKET;
    my_addr.sll_protocol = htons(ETH_P_ALL);

    strcpy(str_devname, "eth0");
    strncpy(s_ifr.ifr_name, str_devname, sizeof(s_ifr.ifr_name));
    ec = ioctl(fd_socket, SIOCGIFINDEX, &s_ifr);
    i_ifindex = s_ifr.ifr_ifindex;

    memset(&my_addr, 0, sizeof(struct sockaddr_ll));
    my_addr.sll_family = AF_PACKET;
    my_addr.sll_protocol = ETH_P_ALL;
    my_addr.sll_ifindex = i_ifindex;
    bind(fd_socket, (struct sockaddr *) &my_addr, sizeof(struct sockaddr_ll));

    s_packet_req.tp_block_size = c_buffer_sz;
    s_packet_req.tp_frame_size = c_buffer_sz;
    s_packet_req.tp_block_nr = c_buffer_nb;
    s_packet_req.tp_frame_nr = c_buffer_nb;
    size = s_packet_req.tp_block_size * s_packet_req.tp_block_nr;
    if (setsockopt(fd_socket, SOL_PACKET, PACKET_TX_RING, (char *) &s_packet_req, sizeof(s_packet_req)) < 0) {
        perror("setsockopt: PACKET_TX_RING");
        return;
    }

    if (c_sndbuf_sz) {
        printf("send buff size = %d\n", c_sndbuf_sz);
        if (setsockopt(fd_socket, SOL_SOCKET, SO_SNDBUF, &c_sndbuf_sz, sizeof(c_sndbuf_sz)) < 0) {
            perror("getsockopt: SO_SNDBUF");
            exit(1);
        }
    }

    data_offset = TPACKET_HDRLEN - sizeof(struct sockaddr_ll);
    printf("data offset = %d bytes\n", data_offset);

    ps_header_start = (tpacket_hdr *) mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd_socket, 0);
    if (ps_header_start == (void*) -1) {
        perror("mmap");
        exit(1);
    }

    Then I fill in the data:

    ps_header = ((struct tpacket_hdr *) ((char *) ps_header_start + (c_buffer_sz * i_index)));
    if (!ps_header) {
        perror("ps_header");
        return NULL;
    }
    data = ((char *) ps_header) + data_offset;
    switch ((volatile uint32_t) ps_header->tp_status) {
    case TP_STATUS_AVAILABLE:
        printf("TP_STATUS_AVAILABLE, index=%d\n", i_index);
        memcpy(data, packet_data, size);
        pthread_mutex_lock(&index_locker);
        i_index++;
        pthread_mutex_unlock(&index_locker);
        if (i_index >= c_buffer_nb) {
            i_index = 0;
            first_loop = 0;
        }
        /* update packet len */
        ps_header->tp_len = size;
        /* set header flag to USER (trigs xmit) */
        ps_header->tp_status = TP_STATUS_SEND_REQUEST;

    Then I send:

    ec_send = sendto(fd_socket, NULL, 0, 0, (struct sockaddr *) ps_sockaddr, sizeof(struct sockaddr_ll));

    I get no errors, and ec_send equals the size of the data sent, but no data is routed to the destination host. So I ask: what data should I pass into the ring buffer? At the moment I include the IP and TCP headers - should I include the MAC header as well? Or maybe I have to set additional flags to route my packets.

  • "Could not establish secure channel for SSL/TLS" in .NET CF application on smart phone

    - by Stefan Mohr
    I have a stubborn communications issue with an application running on the .NET Compact Framework 3.5 on Windows Mobile smartphones. I am constructing a web request using this code: UTF8Encoding encoding = new System.Text.UTF8Encoding(); byte[] Data = encoding.GetBytes(HttpUtility.ConstructQueryString(parameters)); httpRequest = WebRequest.Create((domain)) as HttpWebRequest; httpRequest.Timeout = 10000000; httpRequest.ReadWriteTimeout = 10000000; httpRequest.Credentials = CredentialCache.DefaultCredentials; httpRequest.Method = "POST"; httpRequest.ContentType = "application/x-www-form-urlencoded"; httpRequest.ContentLength = Data.Length; Stream SendReq = httpRequest.GetRequestStream(); SendReq.Write(Data, 0, Data.Length); SendReq.Close(); HttpWebResponse httpResponse = (HttpWebResponse)httpRequest.GetResponse(); return httpResponse.GetResponseStream(); The web service functions by receiving a JSON-encoded document as part of the URL (eg. https://site.com/ws/sync??document={"version":"1.0.0","items":[{"item_1":"item1"}]}&user=usr&password=pw), and as a response receives another JSON document as response data. This code runs fine on all emulators and PDAs running WM 5 and 6. We have seen an issue with a couple of customers running Treo smartphones (and only on the Sprint network). We have tested the code on an identical device on the AT&T network (via DeviceAnywhere) and once again the code worked as we expected. This has to be some sort of security policy on the phone, but we've been unable to determine a workaround or diagnose it thoroughly as we cannot reproduce it in house and have had to resort to getting users to assist with running test drivers for us. When this code executes, the user's device throws the following exception: System.Net.WebException Could not establish secure channel for SSL/TLS Stack trace: at System.Net.HttpWebRequest.finishGetRequestStream() at System.Net.HttpWebRequest.GetRequestStream() at OurApp.GetResponseStream(String domain, Hashtable parameters) inner exception: System.IO.IOException Authentication failed because the remote party has closed the transport stream. Stack trace: at System.Net.SslConnectionState.ClientSideHandshake() at System.Net.SslConnectionState.PerformClientHandShake() at System.Net.Connection.connect(Object ignored) at System.Threading.ThreadPool.WorkItem.doWork(Object o) at System.Threading.Timer.ring() Examining the server Apache logs shows no hits from the user's IP - I don't think the device is even attempting to send a packet before failing. If relevant, the server is running Apache on Linux and is written using the TurboGears Python framework. The server certificate is issued by a CA and is still valid. The test driver where this error was copied from was not code signed, however the same error (without the error messages) is signed with a GeoTrust certificate so we don't believe this is a code signing issue. The application installs and launches without issue on all phones - it's just establishing this SSL connection that is breaking for these users. One significant issue in troubleshooting is that there is a substantial inconvenience each time we try out a solution (need to find a "volunteer" customer), so we're really looking for a silver bullet or a better understanding of the handshaking process so we can be reasonably confident we only need to ask the user to test it one or two more times. One final mention: we have tried the sync both over ActiveSync and also over GPRS with identical results. 
Any thoughts would be greatly appreciated!

  • How to develop an english .com domain value rating algorithm?

    - by Tom
    I've been thinking about an algorithm that should roughly be able to guess the value of an English .com domain in most cases. For this to work I want to perform tests that consider the strengths and weaknesses of an English .com domain. A simple point-based system is what I had in mind, where each domain property can be given a certain weight to factor in its importance. I had these properties in mind:

    Domain character length. E.g. initially 20 points are added. If the domain has 4 or fewer characters, no points are subtracted. For each extra character, one or more points are subtracted on an exponential basis (the more characters, the higher the penalty).

    Domain characters. E.g. initially 20 points are added. If the domain is only alphabetic, no points are subtracted. For each non-alphabetic character, X points are subtracted (exponential increase again).

    Domain name words. Scans through a big offline English database, including non-formal speech; e.g. words like "tweet" should be recognized. Question 1: where can I get a modern list of English words for use in such an application? Are these lists available for free? Are there lists like these with non-formal words? The more words are found per character, the more points are added. So a domain with a lot of characters will still not get a lot of points.

    Word hype level. I believe this is a tricky one, but this should be what differentiates perfect but boring domains from perfect and interesting domains. For example, the following domain is probably not that valuable: www.peanutgalaxy.com. The algorithm should identify that peanuts and galaxies are not very popular topics on the web. This is just an example. On the other side, a domain like www.shopdeals.com should ring a bell to the hype test, as shops and deals are quite popular on the web. My initial thought would be to see how often these keywords are referenced on the web, preferably with some database. Question 2: is this logic flawed, or does this hype-level test have merit? Question 3: are such "hype databases" available? Or is there anything else that could work offline? The problem with e.g. a query to Google is that it requires a lot of requests due to the many domains to be tested.

    Domain name spelling mistakes. Domains like "freemoneyz.com" etc. are generally (notice I am making a lot of assumptions in this post, but that's necessary I believe) not valuable due to the spelling mistakes. Question 4: are there any offline APIs available to check for spelling mistakes, preferably in JavaScript, or some database that I can interact with myself? Or should a word list help here as well?

    Use of consonants, vowels etc. A domain that is easy to pronounce (e.g. Google) is usually much more valuable than one that is not (e.g. Gkyld). Question 5: how does one test for such pronounceability? Do you check for consonants, vowels, etc.? What does a valuable domain have? Has there been any work in this field? Where should I look?

    That is what I came up with, which leads me to my final two questions. Question 6: can you think of any more English .com domain strengths or weaknesses? Which? How would you implement these? Question 7: do you believe this idea has any merit at all, or am I too naive? Anything I should know, read or hear about? Suggestions/comments? Thanks!
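
    To make the point-based idea concrete, here is a minimal sketch of such a scorer (written in Java purely for illustration). It only covers the length and character checks, a dictionary-word scan, and a crude consonant/vowel pronounceability heuristic; the weights, the exponential penalties and the tiny word list are made-up placeholders, and the hype and spelling-mistake tests are left out because they need exactly the external data the questions above are asking about.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class DomainScorer {

        // Placeholder dictionary; a real scorer would load a large English word list.
        private static final Set<String> WORDS = new HashSet<String>(
                Arrays.asList("shop", "deals", "peanut", "galaxy", "free", "money"));

        public static int score(String domain) {
            String name = domain.toLowerCase().replace(".com", "");
            int score = 0;

            // 1. Length: start at 20, subtract an exponentially growing penalty
            //    for every character beyond the fourth.
            int extra = Math.max(0, name.length() - 4);
            score += Math.max(0, 20 - (int) (Math.pow(1.5, extra) - 1));

            // 2. Characters: start at 20, exponential penalty per non-alphabetic character.
            int nonAlpha = 0;
            for (char c : name.toCharArray()) {
                if (!Character.isLetter(c)) nonAlpha++;
            }
            score += Math.max(0, 20 - (int) (Math.pow(2, nonAlpha) - 1));

            // 3. Dictionary words: greedy scan; award points for the share of
            //    characters covered by recognized words, so long domains don't win by length.
            int matched = 0;
            for (int i = 0; i < name.length(); ) {
                int best = 0;
                for (int j = name.length(); j > i; j--) {
                    if (WORDS.contains(name.substring(i, j))) { best = j - i; break; }
                }
                if (best > 0) { matched += best; i += best; } else { i++; }
            }
            score += (20 * matched) / Math.max(1, name.length());

            // 4. Crude pronounceability: reward alternation between vowels and consonants.
            int alternations = 0;
            for (int i = 1; i < name.length(); i++) {
                if (isVowel(name.charAt(i)) != isVowel(name.charAt(i - 1))) alternations++;
            }
            score += (10 * alternations) / Math.max(1, name.length() - 1);

            return score;
        }

        private static boolean isVowel(char c) {
            return "aeiouy".indexOf(c) >= 0;
        }

        public static void main(String[] args) {
            System.out.println("shopdeals.com    -> " + score("shopdeals.com"));
            System.out.println("peanutgalaxy.com -> " + score("peanutgalaxy.com"));
            System.out.println("gkyld.com        -> " + score("gkyld.com"));
        }
    }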

  • Write a program for a report derived from the data in the data file JEWELRY. The data is to be input

    - by Taylor
    Here is the JEWELRY file:

    0011 Money_Clip 2.000 50.00 Other
    0035 Paperweight 1.625 175.00 Other
    0457 Cuff_Bracelet 2.375 150.00 Bracelet
    0465 Links_Bracelet 7.125 425.00 Bracelet
    0585 Key_Chain 1.325 50.00 Other
    0595 Cuff_Links 0.625 525.00 Other
    0935 Royale_Pendant 0.625 975.00 Pendant
    1092 Bordeaux_Cross 1.625 425.00 Cross
    1105 Victory_Medallion 0.875 30.00 Pendant
    1111 Marquis_Cross 1.375 70.00 Cross
    1160 Christina_Ring 0.500 175.00 Ring
    1511 French_Clips 0.687 375.00 Other
    1717 Pebble_Pendant 1.250 45.00 Pendant
    1725 Folded_Pendant 1.250 45.00 Pendant
    1730 Curio_Pendant 1.063 275.00 Pendant

    This is the program I have used:

    #include <iostream>
    #include <string>
    #include <iomanip>
    #include <fstream>
    using namespace std;

    struct productJewelry
    {
        string name;
        double amount;
        int itemCode;
        double size;
        string group;
    };

    int main()
    {
        // declare variables
        ifstream inFile;
        int count = 0;
        int x = 0;
        productJewelry product[50];

        inFile.open("jewelry.txt");     // file must be in same folder
        if (inFile.fail())
            cout << "failed";

        cout << fixed << showpoint;     // fixed format, two decimal places
        cout << setprecision(2);

        while (inFile.peek() != EOF)
        {
            count++;
            inFile >> product[x].itemCode;
            inFile >> product[x].name;
            inFile >> product[x].size;
            inFile >> product[x].amount;
            inFile >> product[x].group;
            x++;
            if (inFile.peek() == '\n')
                inFile.ignore(1, '\n');
        }
        inFile.close();

        // bubble sort by name; the x < count - 1 bound keeps product[x + 1] in range,
        // and the whole record is swapped so each name keeps its own data
        productJewelry temp;
        bool swap;
        do
        {
            swap = false;
            for (int x = 0; x < count - 1; x++)
            {
                if (product[x].name > product[x + 1].name)
                {
                    temp = product[x];
                    product[x] = product[x + 1];
                    product[x + 1] = temp;
                    swap = true;
                }
            }
        } while (swap);

        // print the report, one record per line
        for (x = 0; x < count; x++)
        {
            cout << product[x].itemCode << " ";
            cout << product[x].name << " ";
            cout << product[x].size << " ";
            cout << product[x].amount << " ";
            cout << product[x].group << endl;
        }

        system("pause");                // to freeze Dev-C++ output screen
        return 0;
    }   // end main

  • To copy data from a webpage into an array of structs, sorted by "name", before producing the data

    - by Taylor
    #include <iostream>
    #include <string>
    #include <iomanip>
    #include <fstream>
    using namespace std;

    struct productJewelry
    {
        string name;
        double amount;
        int itemCode;
        double size;
        string group;
    };

    int main()
    {
        // declare variables
        ifstream inFile;
        int count = 0;
        int x = 0;
        productJewelry product[50];

        inFile.open("jewelry.txt");     // file must be in same folder
        if (inFile.fail())
            cout << "failed";

        cout << fixed << showpoint;     // fixed format, two decimal places
        cout << setprecision(2);

        while (inFile.peek() != EOF)
        {
            count++;
            inFile >> product[x].itemCode;
            inFile >> product[x].name;
            inFile >> product[x].size;
            inFile >> product[x].amount;
            inFile >> product[x].group;
            x++;
            if (inFile.peek() == '\n')
                inFile.ignore(1, '\n');
        }
        inFile.close();

        // bubble sort by name; the x < count - 1 bound keeps product[x + 1] in range,
        // and the whole record is swapped so each name keeps its own data
        productJewelry temp;
        bool swap;
        do
        {
            swap = false;
            for (int x = 0; x < count - 1; x++)
            {
                if (product[x].name > product[x + 1].name)
                {
                    temp = product[x];
                    product[x] = product[x + 1];
                    product[x + 1] = temp;
                    swap = true;
                }
            }
        } while (swap);

        // print the report, one record per line
        for (x = 0; x < count; x++)
        {
            cout << product[x].itemCode << " ";
            cout << product[x].name << " ";
            cout << product[x].size << " ";
            cout << product[x].amount << " ";
            cout << product[x].group << endl;
        }

        system("pause");                // to freeze Dev-C++ output screen
        return 0;
    }   // end main

    THE FILE THAT NEEDS TO BE PRINTED AND SORTED IN ALPHABETICAL ORDER:

    0011 Money_Clip 2.000 50.00 Other
    0035 Paperweight 1.625 175.00 Other
    0457 Cuff_Bracelet 2.375 150.00 Bracelet
    0465 Links_Bracelet 7.125 425.00 Bracelet
    0585 Key_Chain 1.325 50.00 Other
    0595 Cuff_Links 0.625 525.00 Other
    0935 Royale_Pendant 0.625 975.00 Pendant
    1092 Bordeaux_Cross 1.625 425.00 Cross
    1105 Victory_Medallion 0.875 30.00 Pendant
    1111 Marquis_Cross 1.375 70.00 Cross
    1160 Christina_Ring 0.500 175.00 Ring
    1511 French_Clips 0.687 375.00 Other
    1717 Pebble_Pendant 1.250 45.00 Pendant
    1725 Folded_Pendant 1.250 45.00 Pendant
    1730 Curio_Pendant 1.063 275.00 Pendant

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you’ve ever done spatial work with SQL Server, I hope you’ve come across the ‘nearest’ problem. You have five thousand stores around the world, and you want to identify the one that’s closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don’t want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let’s do this (but I’m going to use addresses in AdventureWorks2012, as I don’t have a list of stores). Oh, and before I do, let’s make sure we have a spatial index in place. I’m going to use the default options. CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation); And my actual query: WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Great! This is definitely working. I know both those City locations, even if the AddressLine1s don’t quite ring a bell. I’m sure I’ll be able to find them next time I’m in the area. But of course what I’m concerned about from a querying perspective is what’s happened behind the scenes – the execution plan. This isn’t pretty. It’s not using my index. It’s sucking every row out of the Address table TWICE (which sucks), and then it’s sorting them by the distance to find the smallest one. It’s not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that’s pretty neat. But yeah – users of my nifty website aren’t going to like how long that query takes. The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the ‘nearest’ problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I’m not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it’s split the data up into 4 threads, but it’s still slow, and not using my index. It’s disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it’s a thing of beauty! Part of the plan is here: It’s massive, and it’s ugly, and it uses a TVF… but it’s quick. The way it works is to hook into the GeodeticTessellation function, which is essentially finds where the point is, and works out through the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want. 
You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that’s going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn’t using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error: Msg 8622, Level 16, State 1, Line 1 Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. And I’m almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post. WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT     l.Name,     COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),     COALESCE(a1.City,a2.City,a3.City),     s.Name AS [State],     c.Name AS Country FROM MyLocations AS l OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a1 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000     AND a1.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a2 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000     AND a2.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a3 JOIN Person.StateProvince AS s     ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID) JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; But this isn’t friendly-looking at all, and I’d use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I’m dealing with SQL 2012 (and later) versions. So why isn’t my query doing what it’s supposed to? Remember the query... WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Well, I just wasn’t reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index: A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses. The TOP clause cannot contain a PERCENT statement. The WHERE clause must contain a STDistance() method. 
If there are multiple predicates in the WHERE clause then the predicate containing STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause. The first expression in the ORDER BY clause must use the STDistance() method. Sort order for the first STDistance() expression in the ORDER BY clause must be ASC. All the rows for which STDistance returns NULL must be filtered out. Let’s start from the top. 1. Needs a spatial index on one of the columns that’s in the STDistance call. Yup, got the index. 2. No ‘PERCENT’. Yeah, I don’t have that. 3. The WHERE clause needs to use STDistance(). Ok, but I’m not filtering, so that should be fine. 4. Yeah, I don’t have multiple predicates. 5. The first expression in the ORDER BY is my distance, that’s fine. 6. Sort order is ASC, because otherwise we’d be starting with the ones that are furthest away, and that’s tricky. 7. All the rows for which STDistance returns NULL must be filtered out. But I don’t have any NULL values, so that shouldn’t affect me either. ...but something’s wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – “STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match.” So if I simply make sure that I’m filtering out the rows that return NULL… …then it’s blindingly fast, I get the right results, and I’ve got the complex-but-brilliant plan that I wanted. It just wasn’t overly intuitive, despite being documented. @rob_farley

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use Time Measurement Method Potential Pitfalls Stopwatch Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change luckily: Intel's Designer's vol3b, section 16.11.1 "16.11.1 Invariant TSC The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-. and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource." DateTime.Now Good but it has only a resolution of 16ms which can be not enough if you want more accuracy.   Reporting Method Potential Pitfalls Console.WriteLine Ok if not called too often. Debug.Print Are you really measuring performance with Debug Builds? Shame on you. Trace.WriteLine Better but you need to plug in some good output listener like a trace file. But be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section which does also take time.   In general it is a good idea to use some tracing library which does measure the timing for you and you only need to decorate some methods with tracing so you can later verify if something has changed for the better or worse. In my previous article I did compare measuring performance with quantum mechanics. This analogy does work surprising well. When you measure a quantum system there is a lower limit how accurately you can measure something. The Heisenberg uncertainty relation does tell us that you cannot measure of a quantum system the impulse and location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is the memory. But if your timing values do consume all available memory there is no memory left for the actual application to run. On the other hand if you try to record all memory allocations of your application you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there and regardless how good the marketing of tool vendors for performance and memory profilers are: Any measurement will disturb the system in a non predictable way. Commercial tool vendors will tell you they do calculate this overhead and subtract it from the measured values to give you the most accurate values but in reality it is not entirely true. After falling into the trap to trust the profiler timings several times I have got into the habit to Measure with a profiler to get an idea where potential bottlenecks are. 
Measure again with tracing only the specific methods to check if this method is really worth optimizing. Optimize it Measure again. Be surprised that your optimization has made things worse. Think harder Implement something that really works. Measure again Finished! - Or look for the next bottleneck. Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used and I was not sure if XML is really the most optimal wire format. After looking around I have found protobuf-net which uses Googles Protocol Buffer format which is a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes: using ProtoBuf; using System; using System.Diagnostics; using System.IO; using System.Reflection; using System.Runtime.Serialization; [DataContract, Serializable] class Data { [DataMember(Order=1)] public int IntValue { get; set; } [DataMember(Order = 2)] public string StringValue { get; set; } [DataMember(Order = 3)] public bool IsActivated { get; set; } [DataMember(Order = 4)] public BindingFlags Flags { get; set; } } class Program { static MemoryStream _Stream = new MemoryStream(); static MemoryStream Stream { get { _Stream.Position = 0; _Stream.SetLength(0); return _Stream; } } static void Main(string[] args) { DataContractSerializer ser = new DataContractSerializer(typeof(Data)); Data data = new Data { IntValue = 100, IsActivated = true, StringValue = "Hi this is a small string value to check if serialization does work as expected" }; var sw = Stopwatch.StartNew(); int Runs = 1000 * 1000; for (int i = 0; i < Runs; i++) { //ser.WriteObject(Stream, data); Serializer.Serialize<Data>(Stream, data); } sw.Stop(); Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs); Console.ReadLine(); } } The results are indeed promising: Serializer Time in ms N objects protobuf-net   807 1000000 DataContract 4402 1000000 Nearly a factor 5 faster and a much more compact wire format. Lets use it! After switching over to protbuf-net the transfered wire data has dropped by a factor two (good) and the performance has worsened by nearly a factor two. How is that possible? We have measured it? Protobuf-net is much faster! As it turns out protobuf-net is faster but it has a cost: For the first time a type is de/serialized it does use some very smart code-gen which does not come for free. Lets try to measure this one by setting of our performance test app the Runs value not to one million but to 1. Serializer Time in ms N objects protobuf-net 85 1 DataContract 24 1 The code-gen overhead is significant and can take up to 200ms for more complex types. The break even point where the code-gen cost is amortized by its faster serialization performance is (assuming small objects) somewhere between 20.000-40.000 serialized objects. As it turned out my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up was to reduce the number of types and to serialize primitive types via BinaryWriter directly which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far do not help much. After looking more deeper at the profiling data I did found that one of the 1000 calls did take 50% of the time. So how do I find out which call it was? 
Normal profilers do fail short at this discipline. A (totally undeserved) relatively unknown profiler is SpeedTrace which does unlike normal profilers create traces of your applications by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well and I could see that the slow serializer call did happen during the serialization of a bool value. When you encounter after much analysis something unreasonable you cannot explain it then the chances are good that your thread was suspended by the garbage collector. If there is a problem with excessive GCs remains to be investigated but so far the serialization performance seems to be mostly ok.  When you do profile a complex system with many interconnected processes you can never be sure that the timings you just did measure are accurate at all. Some process might be hitting the disc slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run because of the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again getting different and much lower numbers. Now try again at least two times to get some feeling how stable the numbers are. Oh and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff it can become pretty hard to find stuff worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I do make the code changes and do measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster I can blame the GC again. The thing is that if you optimize stuff and you allocate less objects the GC times will shift to some other location. If you are unlucky it will make your faster working code slower because you see now GCs at times where none were before. This is where the stuff does get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers do also start working again. As Vance Morrison does point out it is much more complex to profile a system against the wall clock compared to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system does work and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user and which threads can wait a long time without affecting the user experience at all. Next time: Commercial profiler shootout.

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first.

    Right-Time Marketing

    Real-time isn't just about executing faster; it extends to interactions with customers as well. As an industry, we've spent many years analyzing all the data that's been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples.

    Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn't take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn't translate very well over the web.

    Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua's roots have been in B2B, we've been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they've spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up.

    This isn't so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle's CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top line higher, and by virtue of the cloud approach, keep costs at bay.
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products that examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front on a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look-up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department. 
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customer to higher margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.
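
    To make the mechanics of the Thomas scenario a little more concrete, here is a toy sketch of a contextual rule of that shape. This is not Oracle RTD or its API - just an illustration of "context in, at most one message out" - and every name and threshold in it is made up.

    import java.time.DayOfWeek;
    import java.time.LocalTime;

    public class ContextualOfferSketch {

        // The context the scenario keys off: who, where and when, plus in-store behaviour.
        static class Context {
            String customerId;         // null would mean anonymous
            String geoFence;           // e.g. "downtown" or "mall"
            DayOfWeek day;
            LocalTime time;
            boolean loyalCustomer;
            String favouriteCategory;  // derived from purchase history
            String lastScannedTier;    // e.g. "low-margin", from QR-code scans
        }

        // Marketing dial from the article: move loyal customers to higher-margin products.
        static String decide(Context c) {
            boolean weekday = c.day != DayOfWeek.SATURDAY && c.day != DayOfWeek.SUNDAY;
            boolean lunchtime = !c.time.isBefore(LocalTime.of(11, 30))
                    && c.time.isBefore(LocalTime.of(13, 30));

            // Geo-fence crossing on the way to lunch: informational message, not an offer.
            if (c.customerId != null && "downtown".equals(c.geoFence) && weekday && lunchtime) {
                return "New shipment of " + c.favouriteCategory + " on display";
            }

            // Loyal customer drifting back to low-margin scans: personalised offer
            // on the higher-margin line instead of a discount he doesn't need.
            if (c.loyalCustomer && "low-margin".equals(c.lastScannedTier)) {
                return "10% off the premium line, valid 24 hours";
            }

            return null; // no interaction - don't push random products at haphazard times
        }

        public static void main(String[] args) {
            Context thomas = new Context();
            thomas.customerId = "thomas";
            thomas.geoFence = "downtown";
            thomas.day = DayOfWeek.WEDNESDAY;
            thomas.time = LocalTime.NOON;
            thomas.loyalCustomer = true;
            thomas.favouriteCategory = "running shoes";
            thomas.lastScannedTier = "low-margin";
            System.out.println(decide(thomas));
        }
    }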

  • Introduction to Extended Events

    - by extended_events
    For those fighting with all the Extended Events terminology, let's step back and have a small overall introduction to Extended Events. This post will give you a simplified end-to-end view of some of the elements in Extended Events. Before we start, let's review the first Extended Events objects that we are going to use:
-          Events: The SQL Server code is populated with event calls that are disabled by default. Adding events to a session enables those event calls; once enabled, they execute the set of functionality defined by the session.
-          Target: An Extended Events object that can be used to log event information.
It is also important to understand the following Extended Events concept:
-          Session: A server object created by the user that defines the functionality to be executed every time a set of events happens.
It's time to write a small "Hello World" using Extended Events. This will help make sense of the terms above. We will use:
-          Event sqlserver.error_reported: This event fires every time an error happens in the server.
-          Target package0.asynchronous_file_target: This target stores the event data on disk.
-          Session: We will create a session that sends all the error_reported events to the file target.
Before we get started, a quick note: don't run this script in a production environment. Even though we are just going to raise very low severity user errors, we don't want to introduce noise in our servers.
-- TRIES TO ELIMINATE PREVIOUS SESSIONS
BEGIN TRY
    DROP EVENT SESSION test_session ON SERVER
END TRY
BEGIN CATCH
END CATCH
GO
-- CREATES THE SESSION
CREATE EVENT SESSION test_session ON SERVER
ADD EVENT sqlserver.error_reported
ADD TARGET package0.asynchronous_file_target
    -- CONFIGURES THE FILE TARGET
    (SET filename = 'c:\temp\data1.xel', metadatafile = 'c:\temp\data1.xem')
GO
-- STARTS THE SESSION
ALTER EVENT SESSION test_session ON SERVER STATE = START
GO
-- GENERATES AN ERROR
RAISERROR (N'HELLO WORLD',       -- Message text
           1,                    -- Severity
           1, 7, 3, N'abcde');   -- Other parameters
GO
-- STOPS LISTENING FOR THE EVENT
ALTER EVENT SESSION test_session ON SERVER STATE = STOP
GO
-- REMOVES THE EVENT SESSION FROM THE SERVER
DROP EVENT SESSION test_session ON SERVER
GO
-- READS THE EVENT DATA FROM THE FILE TARGET
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file('c:\temp\data1*.xel', 'c:\temp\data1*.xem', null, null)
This query will output the event data with our first hello world in the Extended Events format:
<event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-02-27T03:08:04.210Z"><data name="error"><value>50000</value><text /></data><data name="severity"><value>1</value><text /></data><data name="state"><value>1</value><text /></data><data name="user_defined"><value>true</value><text /></data><data name="message"><value>HELLO WORLD</value><text /></data></event>
More on parsing event data in this post: Reading event data 101
Now let's move on to the other three Extended Events objects:
-          Actions: These Extended Events objects are executed before events are published (stored in buffers to be transferred to the targets). Currently they are used to collect additional data (such as the T-SQL statement related to an event, the session or the user) or to generate a mini dump.
-          Predicates: Predicates are logical expressions that specify whether an event should fire (e.g. only listen to errors with a severity greater than 16). They are composed of two Extended Events objects:
o   Predicate comparators: Define an operator for a pair of values. Examples: Severity > 16; error_message = 'Hello World!!'
o   Predicate sources: Values that can also be used by the predicates. They are generic data that isn't usually provided in the event (similar to the actions). Example: sqlserver.username = 'Tintin'
As logical expressions they can be combined using the logical operators (and, or, not). Note: in each pair the first operand always has to be an event field or a predicate source, and the second a value.
Let's do another small example. We will raise errors again, but this time we will only publish the ones that have severity = 2 and an error message different from 'filter'. To verify this we will use the sql_text action, which attaches the SQL statement to the event data:
-- TRIES TO ELIMINATE PREVIOUS SESSIONS
BEGIN TRY
    DROP EVENT SESSION test_session ON SERVER
END TRY
BEGIN CATCH
END CATCH
GO
-- CREATES THE SESSION
CREATE EVENT SESSION test_session ON SERVER
ADD EVENT sqlserver.error_reported
    (ACTION (sqlserver.sql_text)
     WHERE severity = 2 and (not (message = 'filter')))
ADD TARGET package0.asynchronous_file_target
    -- CONFIGURES THE FILE TARGET
    (SET filename = 'c:\temp\data2.xel', metadatafile = 'c:\temp\data2.xem')
GO
-- STARTS THE SESSION
ALTER EVENT SESSION test_session ON SERVER STATE = START
GO
-- THIS EVENT WILL BE FILTERED OUT BECAUSE SEVERITY != 2
RAISERROR (N'PUBLISH', 1, 1, 7, 3, N'abcde');
GO
-- THIS EVENT WILL BE FILTERED OUT BECAUSE MESSAGE = 'FILTER'
RAISERROR (N'FILTER', 2, 1, 7, 3, N'abcde');
GO
-- THIS ERROR WILL BE PUBLISHED
RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');
GO
-- STOPS LISTENING FOR THE EVENT
ALTER EVENT SESSION test_session ON SERVER STATE = STOP
GO
-- REMOVES THE EVENT SESSION FROM THE SERVER
DROP EVENT SESSION test_session ON SERVER
GO
-- READS THE EVENT DATA FROM THE FILE TARGET
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file('c:\temp\data2*.xel', 'c:\temp\data2*.xem', null, null)
This last statement will output one event with the following data:
<event name="error_reported" package="sqlserver" id="100" version="1" timestamp="2010-03-05T23:15:05.481Z">
  <data name="error"><value>50000</value><text /></data>
  <data name="severity"><value>2</value><text /></data>
  <data name="state"><value>1</value><text /></data>
  <data name="user_defined"><value>true</value><text /></data>
  <data name="message"><value>PUBLISH</value><text /></data>
  <action name="sql_text" package="sqlserver">
    <value>-- THIS ERROR WILL BE PUBLISHED RAISERROR (N'PUBLISH', 2, 1, 7, 3, N'abcde');</value>
    <text />
  </action>
</event>
If you see more events, check whether you have deleted the event files from previous runs. If not, run
-- Enables xp_cmdshell and deletes previous event files
EXEC SP_CONFIGURE
GO
EXEC SP_CONFIGURE 'xp_cmdshell', 1
GO
RECONFIGURE
GO
XP_CMDSHELL 'del c:\temp\data*.xe*'
GO
or delete them manually.
More Info on Events: Extended Event Events
More Info on Targets: Extended Event Targets
More Info on Sessions: Extended Event Sessions
More Info on Actions: Extended Event Actions
More Info on Predicates: Extended Event Predicates
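If you would rather pull the event data into application code instead of running the final SELECT in Management Studio, here is a rough C# sketch that runs the same fn_xe_file_target_read_file query and parses the XML on the client. The connection string and file paths are assumptions matching the script above; adjust them for your own server and session.

using System;
using System.Data.SqlClient;
using System.Linq;
using System.Xml.Linq;

class ReadXEventsFile
{
    static void Main()
    {
        // Assumed connection string and file paths - change these to match your environment.
        const string connectionString = "Server=localhost;Database=master;Integrated Security=true";
        const string query =
            "SELECT event_data " +
            "FROM sys.fn_xe_file_target_read_file('c:\\temp\\data1*.xel', 'c:\\temp\\data1*.xem', null, null)";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Each row holds one event serialized as XML text.
                    var eventXml = XElement.Parse(reader.GetString(0));
                    var message = eventXml.Elements("data")
                                          .Where(d => (string)d.Attribute("name") == "message")
                                          .Select(d => (string)d.Element("value"))
                                          .FirstOrDefault();
                    Console.WriteLine("{0} at {1}: {2}",
                        (string)eventXml.Attribute("name"),
                        (string)eventXml.Attribute("timestamp"),
                        message);
                }
            }
        }
    }
}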

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the macro is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a compiler-expanded macro but an attribute, yet it serves exactly the same purpose. Now we get
CallerLineNumberAttribute  == __LINE__
CallerFilePathAttribute    == __FILE__
CallerMemberNameAttribute  == __FUNCTION__ (MSVC extension)
The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and the property name is inserted as a string by the C# compiler at compile time.
public string UserName
{
    get { return _userName; }
    set
    {
        _userName = value;
        RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
    }
}
protected void RaisePropertyChanged([CallerMemberName] string member = "")
{
    var copy = PropertyChanged;
    if (copy != null)
    {
        copy(this, new PropertyChangedEventArgs(member));
    }
}
Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller, along with other data, very cheaply. All of this information is filled in at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of these attributes:
public static void TraceMessage(string message,
    [CallerMemberName] string memberName = "",
    [CallerFilePath] string sourceFilePath = "",
    [CallerLineNumber] int sourceLineNumber = 0)
{
    Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
}
When I think of tracing I usually want an API which allows me to
- trace method enter and leave,
- trace messages with a severity like Info, Warning, Error.
When I print a trace message it is very useful to print out the method and type name as well. So your API must either accept the method and type name as strings or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
A usable Trace Api might therefore look like   enum TraceTypes { None = 0, EnterLeave = 1 << 0, Info = 1 << 1, Warn = 1 << 2, Error = 1 << 3 } class Tracer : IDisposable { string Type; string Method; public Tracer(string type, string method) { Type = type; Method = method; if (IsEnabled(TraceTypes.EnterLeave,Type, Method)) { } } private bool IsEnabled(TraceTypes traceTypes, string Type, string Method) { // Do checking here if tracing is enabled return false; } public void Info(string fmt, params object[] args) { } public void Warn(string fmt, params object[] args) { } public void Error(string fmt, params object[] args) { } public static void Info(string type, string method, string fmt, params object[] args) { } public static void Warn(string type, string method, string fmt, params object[] args) { } public static void Error(string type, string method, string fmt, params object[] args) { } public void Dispose() { // trace method leave } } This minimal trace API is very fast but hard to maintain since you need to pass in the type and method name as hard coded strings which can change from time to time. But now we have at least CallerMemberName to rid of the explicit method parameter right? Not really. Since any acceptable usable trace Api should have a method signature like Tracexxx(… string fmt, params [] object args) we not able to add additional optional parameters after the args array. If we would put it before the format string we would need to make it optional as well which would mean the compiler would need to figure out what our trace message and arguments are (not likely) or we would need to specify everything explicitly just like before . There are ways around this by providing a myriad of overloads which in the end are routed to the very same method but that is ugly. I am not sure if nobody inside MS agrees that the above API is reasonable to have or (more likely) that the whole talk about you can use this feature for diagnostic purposes was not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow for variable argument arrays after the params keyword another set of optional arguments which are always filled by the compiler but I do not know if this is an easy one. The thing I am missing much more is the not provided CallerType attribute. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I do expect to happen I have come up with an alternative approach which allows me basically to attach run time data to any existing type object in super fast way. The key to success is the usage of generics.   class Tracer<T> : IDisposable { string Method; public Tracer(string method) { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public void Dispose() { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public static void Info(string fmt, params object[] args) { } /// <summary> /// Every type gets its own instance with a fresh set of variables to describe the /// current filter status. 
/// </summary> /// <typeparam name="T"></typeparam> internal class TraceData<UsingType> { internal static TraceData<UsingType> Instance = new TraceData<UsingType>(); public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime public TraceTypes Enabled = TraceTypes.None; // Enabled trace levels for this type } } We do not need to pass the type as string or Type object to the trace Api. Instead we define a generic Api that accepts the using type as generic parameter. Then we can create a TraceData static instance which is due to the nature of generics a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing you do not want to bring the app down by simply enabling tracing for one special rarely used type. The trace filter performance for the types which are not enabled must be therefore the fasted code path. This approach has the nice side effect that if you store the TraceData instances in one global list you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object but big hash tables have random memory access semantics which is bad for cache locality and you always need to pay for the lookup which involves hash code generation, equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing Api with minimal perf overhead. But it is cumbersome to write the generic type argument always explicitly and worse if you do refactor code and move parts of it to other classes it might be that you cannot configure tracing correctly. I would like therefore to decorate my type with an attribute [CallerType] class Tracer<T> : IDisposable to tell the compiler to fill in the generic type argument automatically. class Program { static void Main(string[] args) { using (var t = new Tracer()) // equivalent to new Tracer<Program>() { That would be really useful and super fast since you do not need to pass any type object around but you do have full type infos at hand. This change would be breaking if another non generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion since you can today already get conflicts if two generic types of the same name are defined in different namespaces. This would be only a variation of this issue. When you do think about this further you can add more features like to trace the exception in your Dispose method if the method is left with an exception with that little trick I did write some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g. Reconfigure tracing at run time. Take a memory dump when a specific method is left with a specific exception. Throw an exception when a specific trace statement is hit (useful for testing error conditions). Execute a passed delegate which e.g. dumps additional state when enabled. Write data to an in memory ring buffer and dump it when specific events do occur (e.g. method is left with an exception, triggered from outside). Write data to an output device. …. 
This stuff is really useful to have when your code is in production on a mission critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (ok with .NET 4 not anymore except if you enable a compatibility flag) where you would like to have a minidump or you have reached after two weeks of operation a state where you need a full memory dump at a specific point in time in the middle of an transaction. At my older machine I do get with this super fast approach 50 million traces/s when tracing is disabled. When I do know that tracing is enabled for this type I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further if a specific action or output device is configured for this method which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s so performance is not so important anymore since I do want to do something now. The CallerMemberName feature of the C# 5 compiler is nice but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. But I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site to augment the static CLR type data with run time data.
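To make the per-type trick behind the generic Tracer<T> more tangible, here is a tiny, self-contained C# sketch (independent of the Tracer code above, with made-up type names) showing that a static field inside a generic class yields one instance per closed generic type. That is exactly what makes the per-type filter check as cheap as a static field read plus a bool test.

using System;

// One static settings object is created per closed generic type,
// so TraceSettings<Foo> and TraceSettings<Bar> are independent instances.
class TraceSettings<TUsingType>
{
    public static readonly TraceSettings<TUsingType> Instance = new TraceSettings<TUsingType>();
    public bool Enabled; // per-type on/off switch
}

class Foo { }
class Bar { }

class Program
{
    static void Main()
    {
        TraceSettings<Foo>.Instance.Enabled = true; // enable tracing only for Foo

        // The check is just a static field read plus a bool test - no dictionary lookup.
        Console.WriteLine(TraceSettings<Foo>.Instance.Enabled);  // True
        Console.WriteLine(TraceSettings<Bar>.Instance.Enabled);  // False
        Console.WriteLine(object.ReferenceEquals(
            TraceSettings<Foo>.Instance, TraceSettings<Bar>.Instance)); // False
    }
}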

    Read the article

  • Our winners - and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? J In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where is connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometime, it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his run recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chucks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chucks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore, since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    .dbd-banner p{ font-size:0.75em; padding:0 0 10px; margin:0 } .dbd-banner p span{ color:#675C6D; } .dbd-banner p:last-child{ padding:0; } @media ALL and (max-width:640px){ .dbd-banner{ background:#f0f0f0; padding:5px; color:#333; margin-top: 5px; } } -- Database Delivery Patterns & Practices Further Reading Organization and team processes How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect) Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes. 
We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This matters to us because we want to fit the products we have to different needs – different products are relevant to different customers, and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model, or “Customer Maturity Framework” as we rather grandly term it, which simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody fits precisely into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are far less advanced in mature database change management than they are in application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.
This difference in maturity is the same as you move in to areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team:  Download the Database Delivery Maturity Framework PDF here   Level D1 – Baseline Work directly on live databases Sometimes work directly in production Generate manual scripts for releases. Sometimes use a product like SQL Compare or similar to do this Any tests that we might have are run manually Level D2 – Beginner Have some ad-hoc DB version control such as manually adding upgrade scripts to a version control system Attempt is made to keep production in sync with development environments There is some documentation and planning of manual deployments Some basic automated DB testing in process Level D3 – Intermediate The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT Database environments are managed Production environment schema is reproducible from the source control system There are some automated tests Have looked at using migration scripts for difficult database refactoring cases Level D4 – Advanced Using continuous integration for database changes Build, testing and deployment of DB changes carried out through a proper database release process Fully automated tests Production system is monitored for fast feedback to developers   Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
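As a concrete, if simplified, illustration of the “automated DB testing” that appears from Level D2 onwards, the sketch below shows the kind of post-deployment check a build server could run after applying change scripts. The SchemaVersion table, the expected version number and the connection string are all hypothetical; treat this as a pattern rather than a prescription for any particular tool.

using System;
using System.Data.SqlClient;

// A minimal post-deployment smoke test a CI build could run against a freshly
// upgraded test database. The dbo.SchemaVersion table and connection string are
// assumptions - substitute whatever your migration tooling actually records.
class SchemaSmokeTest
{
    static int Main()
    {
        const string connectionString =
            "Server=buildserver;Database=AppDb_CI;Integrated Security=true";
        const int expectedVersion = 42; // version the change scripts should have reached

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT MAX(VersionNumber) FROM dbo.SchemaVersion", connection))
        {
            connection.Open();
            var actualVersion = (int)command.ExecuteScalar();

            if (actualVersion != expectedVersion)
            {
                Console.Error.WriteLine(
                    "Schema is at version {0}, expected {1} - failing the build.",
                    actualVersion, expectedVersion);
                return 1; // a non-zero exit code fails the CI step
            }
        }

        Console.WriteLine("Schema version check passed.");
        return 0;
    }
}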

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short-summary of just a few of them: New Admin Portal and Command Line Tools Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license. Virtual Machines Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built-into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.   The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally, no import/export steps required. Web Sites Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.  
You can create a new web site in Azure and have it ready to deploy to in under 10 seconds: The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time: You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy.  We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering.  The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web-site’s dashboard: Doing do will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to.  Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code.  This provides a very powerful DevOps workflow experience.   Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources).  This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine: And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales: Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need. Cloud Services and Distributed Caching Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments.  Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”.  We are also enabling a bunch of new features with them. Distributed Cache One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and setup a low-latency, in-memory distributed cache within your applications.  This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you have to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).  
In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be setup to run in one of two ways: 1) Using a co-located approach.  In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache.  Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role.  The big benefit with the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs. 2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching.  These will also be joined into one large distributed cache ring that other roles within your application can access.  You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application: New SDKs and Tooling Support We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities.  Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system. Much, Much More The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release.  These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com.  You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features.  We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes.  We are really excited to see the great applications you build with them. Hope this helps, Scott
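To give a feel for the distributed cache described above, here is a rough C# sketch of a cache-aside lookup using the AppFabric-style cache client API (Microsoft.ApplicationServer.Caching). The cache key format, the cached type and the expiry are assumptions for illustration; the cache client itself is configured in the role's settings rather than in code, so this is a sketch, not the definitive way to wire it up.

using System;
using Microsoft.ApplicationServer.Caching;

class ProductCacheExample
{
    // DataCacheFactory reads the cache client configuration and is relatively
    // expensive to create, so it is typically kept around as a singleton.
    private static readonly DataCacheFactory CacheFactory = new DataCacheFactory();
    private static readonly DataCache Cache = CacheFactory.GetDefaultCache();

    static string GetProductName(int productId)
    {
        string cacheKey = "product-name:" + productId;

        // Try the distributed cache first; Get returns null on a cache miss.
        var cached = Cache.Get(cacheKey) as string;
        if (cached != null)
        {
            return cached;
        }

        // Cache miss: load from the backing store (stubbed out here) and
        // keep the value in the cache for ten minutes.
        string name = LoadProductNameFromDatabase(productId);
        Cache.Put(cacheKey, name, TimeSpan.FromMinutes(10));
        return name;
    }

    static string LoadProductNameFromDatabase(int productId)
    {
        // Placeholder for the real data access code.
        return "Product " + productId;
    }
}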

    Read the article

  • CodePlex Daily Summary for Wednesday, November 02, 2011

    CodePlex Daily Summary for Wednesday, November 02, 2011Popular ReleasesWYGIWYS Driver: 1.1.2.0 release: Win32, x86, kernel mode driver.Reset IPv6: 1.0.0.0: First release. Only works for Spanish and English operating system editions.BExplorer (Better Explorer): Better Explorer 2.0.0.631 Alpha: Changelog: Added: Some new functions in ribbon Added: Possibility to choose displayed columns Added: Basic Search Fixed: Some bugs after navigation Fixed: Attempt to fix slow navigation and slow start Known issues: - BreadcrumbBar fails on some situations - Basic search not work quite well in some situations Please if anyone find bugs be kind and report them at the Issue Tracker! Thanks!DotNetNuke® Community Edition: 05.06.04: Major Highlights Fixed issue with upgrades on systems that had upgraded the Telerik library to 6.0.0 Fixed issue with Razor Host upgrade to 5.6.3 The logic for module administration checks contains incorrect logic in 1 place, opening the possibility of a user with edit permissions gaining access to functionality they should not have through a particularly crafted url Security FixesBrowsers support the ability to remember common strings such as usernames/addresses etc. Code was adde...Terminals: Version 2.0 - Beta 2 Release: The team has finally put the nail into the official release date for version 2.0. As bugs are winding down on the 2.0 Roadmap we decided to push out another build - the first 2.0 Beta build. Please take time to use and abuse this release. We left logging in place, and this is a debug build so be sure to submit your logs on each bug reported, and please do report all bugs! Check the source code page on the site, this beta includes all commits since (and including) the 90428 check-in back i...iTuner - The iTunes Companion: iTuner 1.4.4322: Added German (unverified, apologies if incorrect) Properly source invariant resources with correct resIDs Replaced obsolete lyric providers with working providers Fix Pseudolater to correctly morph every third char Fix null reference in CatalogBaseTrack Folder Changes: Track Folder Changes 1.0: Track Folder Changes 1.0 (binary)Windows Workflow Foundation on Codeplex: Microsoft.Activities v1.8.8: Microsoft.Activities Overview How do I install Microsoft.Activities? Updates in this release9318Technical Analysis Engine for .NET: Technical Analysis Engine 1.25: What's new in the 1.25 release (2011-11-01): - Added Williams %R indicator - Added Moving Average Envelopes indicatorBF3Rcon.NET: BF3Rcon 3.0: This release is targeted for RCON documentation based on R3. Everything should be beta stable, but it's alpha because I haven't been able to fully test it. When a stable release is ready, a proper changelog will be kept. Important Edit: There's one method that will keep this from working in Mono. GeneratePasswordHash uses void HashAlgorithm.Dispose(), which isn't in Mono. This will have to be changed to Clear() in the next release. If anyone needs a Mono version of this immediately, you can...BoxWorld: BoxWorld_2011.10.30: BoxWorld - 8.0.1110.30 This is the initial release of BoxWorld. I'd recommend downloading the installer as it contains the compiled code and everything all nicely contained. 
By default, you end up with this directory structure: C:\Program Files\ViperWorks\BoxWorld C:\Program Files\ViperWorks\BoxWorld\Data C:\Program Files\ViperWorks\BoxWorld\Interface C:\Program Files\ViperWorks\BoxWorld\Source In the root you have the compiled EXE files, one for the main release, one for the LITE release ...VidCoder: 1.2.1: Fixed a couple regressions: video encoder was blank in queue and crashes with the High Profile preset when opening the Settings window. Fixed problem with auto-update introduced in 1.2.0. If you have 1.2.0 you will need to update manually to get this.Koober: Koober - The Ebook Creator 0.2: The official release of Koober as Open source. Koober is a ebook creator for Windows, and Koob Reader is the reader.patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following: Common extensionsTypeConfigurationElement<T> - A Polymorphic Configuration Element without having to be part of a PolymorphicConfigurationElementCollection. AnonymousConfigurationElement - A Configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensionsMySql Provider - ...Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetowrkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Movies Fixed: Fanart and poster scraping issues TV Shows (Re)Added: Rebuild single show Fixed: Issue when shows are moved from original location Ability to handle " for actor nicknames Crash when episode name contains "<" (does not scrape yet) Clears fanart when switch...patterns & practices - Unity: Unity 3.0 for .NET4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5. Dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit Converting reflection to use the new TypeInfo for reflection. Projects updated to work with the Microsoft Visual Studio 2011 Preview Notes/Known Issues: The Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.AcDown????? - Anime&Comic Downloader: AcDown????? v3.6: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? 
??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6?? ??“????”...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Add JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Add -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed bug when MinifyCode setting was set to false but RemoveUnneededCode was left it's default value of true.New ProjectsAlgorithms: algorithms, ACM C++ programs, Electronic design automation algorithmsAll-In-One: Ein Programm - Alle Funktionen Einfach Komponente erstellen - und schon gibt es eine Funktion mehr!Bar Code File Copy: Bar Code File Copy is a mobile app that allows user to initiate file transfer from a PC to another with the help of barcodes and Windows Phone 7.BlekBoox: Google Books API CF-Soft: this is my workspace to study C# programming.CodePaste.NET Tray Application: A simple tray application to create snippets on CodePaste.NET. Right click the app and select New Snippet. A snippet will be created, the contents of your clipboard will be pasted and your clipboard will be replaced with the URL after the code snippet is successfully posted.Common libs of fanthos: Fanthos.Libs.Controls provides panel and button with special draw and special events. Fanthos.Libs.Network provides class that you can use the method likes Async WCF in dotNet 2.0.Database Filesystem: The DBFS project implements a filesystem paradigm to the storage of objects in a database. The primary development language is Visual Basic .NET.Destructible Texture2D: The project shows how to create a destructible texture2D in XNA. This project is developed using the XNA framework and C# programming languageEject and Close CD Tray: For some reason i cannot seem to find any command line program to eject or close the cd tray. So i've created my own version and sharing it for everyone to use. It is an extremely simplistic program with the sole aim of ejecting or closing the cd tray and nothing else.Excel add-in for pricing options: Price options using various distributions.FDFToolkit .NET: FDFToolkit .NET makes it easier to read, write, create, populate, merge FDF, XDP, XFDF, and XML data with Acrobat and LiveCycle PDF forms. FDFToolkit.net utilizes iTextSharp 4.x technologies, under MPL license.Fiction Catalog: A catalog project designed to store information about fictional literature.Gangbee Software: This is customized version of Nopcommerce 1.9Guess Number: This simple program is letting you to guess number from 1 to 20. H? th?ng Khách s?n: H? th?ng qu?n lí khách s?nHelloTipi Photo Downloader: Permet de télécharger les photos en qualité original de votre site de famille [url:HelloTipi|http://www.hellotipi.com].Image Tagger & Resizer: Resize, and text in the lower right of picture with i.e. copyright information.Kinect Sensor Controller: A project using kinect sensor to control objectsKJFramework.Dynamic ??????: KJFramework.Dynamic ???????? KJFramework????????????????????,?????????: KJFramework.Dynamic(??????). ???????????????????????,???????????????,?????????, ????????????????????,????,???????????????,??????????(??,???????)。 ????,?????????????????????(Component), ??????DynamicDomainComponent??。 ??,????????????,???????????DynamicDomainService??。 ??????,???????????? 
???????DynamicDomainService。 ??DynamicDomainService??,????????????????。 ?????, ????????????(Tunnel)???,??????(Tunnel)...Lan ETS Player Registration Software: This is the custom software that the Lan ETS staff uses to register people when they arrive at the LAN party.LINQ to Calil: LINQ to Calil ????? (http://calil.jp/) ???????? API ? LINQ ????????。???????????? LINQ ???????????????。.NET Framework 4 ???????、Silverlight ??????????。 LINQ to Calil ??????????????????????????????????????。Login Form in VB: <project name> makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>.MbpGpsTester: MbpGpsTester is a gps test tool.Mi proyecto: Mi proyectoMy NuGET Packages: This project contains a custom program that allows you to quickly update NuGET packages from multiple sources and package feeds. It also contains set-up of some packages that are used by HydroDesktop.MyyoutubeforWp7: MyyoutubeforWp7 is a youtube application where users can search and view the videoNERO'S TOOLS: NEROS TOOLS will be an area of programming tools and resources for the automation control industry and the everyday usage of major control systems software.NetWebScript: NetWebScript is a IL to JavaScript compiler. It allow to write complex web application in C# or any other .Net language. It provide Visual Studio integration to debug into orginial sources. It allow to share code with server.Northern Lights WP7 Toolkit: Northern Lights WP7 Toolkit contains some common tools for WP7 Developers.SccmBoundaryHealthTool: .NET application that will search the AD forest for SCCM boundary configurations and report the following: * Overlapping boundaries, including those inter-boundary type, e.g. IP subnet overlaps with AD site. * Unassigned AD sites * Orphaned AD sites in SCCMSeven: Seven is a set of tools for programs development using the Oberon 07 language. SOAP Test Application (with EWS Tools): Application that allows testing of SOAP implementations, in particular Exchange Web Services.SocialCastDotNet: C# library for accessing the SocialCast APISosyal: Sosyal as.sqlce_perfDemo: just a bit of code to demonstrate sqlce insert performance on the desktop. I'm using sqlce on the desktop to persist data as its the fastest thing I can find. Insert perfromance seems to top out at around 65,000records/sec on my laptop but it also seems to limited at this point by my hard drive speeds. The sample code should be able to accept a dataTable, dataReader or dataset and persit to a sqlce database. Also in the code is a small wrapper on the sqlce connectionstring (localconne...SqlMon for OracleClient: This is a project to doing a tool just like Quest SQL Monitor that capture sql sentence in client side.So it's so cool that juect join in and user it. It's developed in C# and Visual Studio 2010 and bases on the EasyHook that is a powerfull .NET hook library.It injects in oci.dll and hook the "OCIStmtPrepare" function which excute before sql excuting,through the parameter "stms" we can get the sqltext and display itStackOverflow Offline Reader using a pdf reader: Using one or more keywords and number of questions to download, the app creates and displays a single pdf file of the questions and answers pertaining to the keyword(s). This enables the user to read questions in an offline manner or to read in a top/down fashion. 
std::streambuf wrapper for COM IStream: This provides a subclass of std::streambuf that wraps a COM IStream, so you can use an IStream with any C++ code that uses iostreams or the STL algorithms with a streambuf_iterator. It does no internal buffering whatsoever (I'd be happy to change this if someone needs it).step out ring signatures .net: Simple program to demonstrate how srs works in real world of .net.System Center Service Manager 2010 SP1 WCF and Web Service: Simple WCF Service and Web Service for SCSM 2010 SP1.System32 - SDK de componentes y acceso a datos para .NET: System32 - SDK de componentes y acceso a datos para .NET Taha Mail: A Java Web Based MailTest TDS: Test TFS - 1Track Folder Changes: Displays in real time a tree with the list of created/deleted/changed files in a specific directory and its subdirectoriesumbTrafficCentral: A plugin for Umbraco that adds robust and scalable handling of redirects, storing the data in the media library. WebDAV Test Application: Application to test WebDAV functionality.XnaXaml: XnaXaml allows you to add a XAML file to your Content Pipeline in XNA and edit straight from Visual Studio. It then takes the XAML file and parses through it to create mock objects that can be used to render controls to the screen in XNA. These mock objects can also be used to pass data to any GUI system you choose.???-?????????? ?? ????????????? ????????? ??????????? ??????? ???: Under construction

    Read the article

  • How do I use texture-mapping in a simple ray tracer?

    - by fastrack20
    I am attempting to add features to a ray tracer in C++. Namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data; I obtained the texture data by using a hex editor and copying the correct byte values into an array in my code. This was just for my testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected, except there is no shading. (The bottom right of the image shows what a correct sphere should look like; that sphere's colour uses one set colour, not a texture map.) Another problem is that when the texture map is of something other than single-colour pixels, it turns white. My test image is a picture of water, and when it maps, it shows only one ring of bluish pixels surrounding the white colour. Here are a few code snippets:

    Color getColor(const Object *object, const Ray *ray, float *t)
    {
        if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
            float distance = *t;
            Point pnt = ray->origin + ray->direction * distance;
            Point oc = object->center;
            Vector ve = Point(oc.x, oc.y, oc.z + 1) - oc;
            Normalize(&ve);
            Vector vn = Point(oc.x, oc.y + 1, oc.z) - oc;
            Normalize(&vn);
            Vector vp = pnt - oc;
            Normalize(&vp);

            double phi = acos(-vn.dot(vp));
            float v = phi / M_PI;
            float u;
            float num1 = (float)acos(vp.dot(ve));
            float num = (num1 / (float)sin(phi));
            float theta = num / (float)(2 * M_PI);
            if (theta < 0 || theta == NAN) { theta = 0; }
            if (vn.cross(ve).dot(vp) > 0) {
                u = theta;
            } else {
                u = 1 - theta;
            }
            int x = (u * IMAGE_WIDTH) - 1;
            int y = (v * IMAGE_WIDTH) - 1;
            int p = (y * IMAGE_WIDTH + x) * 3;
            return Color(TEXT_DATA[p+2], TEXT_DATA[p+1], TEXT_DATA[p]);
        } else {
            return object->color;
        }
    };

    I call the colour code here in Trace:

    if (object->materialType == MATTE)
        return getColor(object, ray, &t);

    Ray shadowRay;
    int isInShadow = 0;
    shadowRay.origin.x = pHit.x + nHit.x * bias;
    shadowRay.origin.y = pHit.y + nHit.y * bias;
    shadowRay.origin.z = pHit.z + nHit.z * bias;
    shadowRay.direction = light->object->center - pHit;
    float len = shadowRay.direction.length();
    Normalize(&shadowRay.direction);
    float LdotN = shadowRay.direction.dot(nHit);
    if (LdotN < 0)
        return 0;
    Color lightColor = light->object->color;
    for (int k = 0; k < numObjects; k++) {
        if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
            if (objects[k]->materialType == GLASS)
                lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
            else
                isInShadow = 1;
            break;
        }
    }
    lightColor *= 1.f / (len * len);
    return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
    }

    I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included in the code is where I define the texture data, which, as I said, is simply taken straight from a bitmap file of the above image. Thanks.
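    One common way to compute spherical texture coordinates is sketched below. This is my own illustration rather than a fix to the poster's code: the names (Vec3, Rgb, sphereUV, sampleTexture, texW, texH, texData) are hypothetical, and the texture is assumed to be tightly packed 24-bit BGR as described in the question. Using atan2 for the azimuth avoids the NaN that the acos/sin(phi) division produces at the poles, and the sampled texel should still be multiplied by the usual diffuse term so the sphere keeps its shading.

    // Minimal sketch of spherical UV lookup -- illustration only, not the poster's code.
    #include <cmath>
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    struct Rgb  { unsigned char r, g, b; };

    // n is the unit normal at the hit point, i.e. normalize(hitPoint - sphereCenter).
    inline void sphereUV(const Vec3& n, float& u, float& v)
    {
        const float kPi = 3.14159265358979f;
        // v from the polar angle (pole to pole), u from the azimuth around the equator.
        v = std::acos(std::max(-1.0f, std::min(1.0f, n.y))) / kPi;   // 0..1
        u = (std::atan2(n.z, n.x) + kPi) / (2.0f * kPi);             // 0..1
    }

    inline Rgb sampleTexture(const unsigned char* texData, int texW, int texH,
                             float u, float v)
    {
        // Clamp to the last texel so u == 1 or v == 1 stays inside the image.
        int x = std::min(static_cast<int>(u * texW), texW - 1);
        int y = std::min(static_cast<int>(v * texH), texH - 1);
        int p = (y * texW + x) * 3;                 // 3 bytes per texel, BGR order
        return Rgb{ texData[p + 2], texData[p + 1], texData[p] };
    }

    // The returned texel should then be modulated by lightColor * LdotN, exactly as
    // the flat-coloured sphere is, so the texture does not wash out the shading.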

    Read the article

  • Windows store apps: ScrollViewer with dynamic content

    - by Alexandru Circus
    I have a scrollViewer with an ItemsControl (which holds rows with data) as content. The data from these rows is grabbed from the server so I want to display a ProgressRing with a text until the data arrives. Basically I want the content of the ScrollViewer to be a grid with progress ring and a text and after the data arrives the content to be changed with my ItemsControl. The problem is that the ScrollViewer does not accept more than 1 element as content. Please tell me how can I solve this problem. (I'm a C# beginner) <FlipView x:Name="OptionPagesFlipView" Grid.Row="1" TabNavigation="Cycle" SelectionChanged="OptionPagesFlipView_SelectionChanged" ItemsSource="{Binding OptionsPageItems}"> <FlipView.ItemTemplate> <DataTemplate x:Name="OptionMonthPageTemplate"> <ScrollViewer x:Name="OptionsScrollViewer" HorizontalScrollMode="Disabled" HorizontalAlignment="Stretch" VerticalScrollBarVisibility="Auto"> <ItemsControl x:Name="OptionItemsControl" ItemsSource="{Binding OptionItems, Mode=OneWay}" Visibility="Collapsed"> <ItemsControl.ItemTemplate> <DataTemplate x:Name="OptionsChainItemTemplate"> <Grid x:Name="OptionItemGrid" Background="#FF9DBDF7" HorizontalAlignment="Stretch"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <!-- CALL BID --> <TextBlock Text="Bid" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="0" Grid.Column="0" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallBidTextBlock" Text="{Binding CallBid}" Foreground="Blue" HorizontalAlignment="Left" Grid.Row="1" Grid.Column="0" Margin="5,0,5,5" FontSize="18"/> <!-- CALL ASK --> <TextBlock Text="Ask" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="2" Grid.Column="0" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallAskTextBlock" Text="{Binding CallAsk}" Foreground="Blue" HorizontalAlignment="Left" Grid.Row="3" Grid.Column="0" Margin="5,0,5,0" FontSize="18"/> <!-- CALL LAST --> <TextBlock Text="Last" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="0" Grid.Column="1" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallLastTextBlock" Text="{Binding CallLast}" Foreground="Blue" HorizontalAlignment="Left" Grid.Row="1" Grid.Column="1" Margin="5,0,5,5" FontSize="18"/> <!-- CALL NET CHANGE --> <TextBlock Text="Net Ch" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="2" Grid.Column="1" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallNetChTextBlock" Text="{Binding CallNetChange}" Foreground="{Binding CallNetChangeForeground}" HorizontalAlignment="Left" Grid.Row="3" Grid.Column="1" Margin="5,0,5,5" FontSize="18"/> <!-- STRIKE --> <TextBlock Text="Strike" Foreground="Gray" HorizontalAlignment="Center" Grid.Row="1" Grid.Column="2" FontSize="18" Margin="5,0,5,0"/> <Border Background="{Binding StrikeBackground}" HorizontalAlignment="Center" Grid.Row="2" Grid.Column="2" Margin="5,0,5,5"> <TextBlock x:Name="StrikeTextBlock" Text="{Binding Strike}" Foreground="Blue" FontSize="18"/> </Border> <!-- PUT LAST --> <TextBlock Text="Last" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="0" Grid.Column="3" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="PutLastTextBlock" Text="{Binding PutLast}" Foreground="Blue" HorizontalAlignment="Right" Grid.Row="1" Grid.Column="3" 
Margin="5,0,5,5" FontSize="18"/> <!-- PUT NET CHANGE --> <TextBlock Text="Net Ch" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="2" Grid.Column="3" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="PutNetChangeTextBlock" Text="{Binding PutNetChange}" Foreground="{Binding PutNetChangeForeground}" HorizontalAlignment="Right" Grid.Row="3" Grid.Column="3" Margin="5,0,5,5" FontSize="18"/> <!-- PUT BID --> <TextBlock Text="Bid" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="0" Grid.Column="4" FontSize="18" Margin="5,0,15,0"/> <TextBlock x:Name="PutBidTextBlock" Text="{Binding PutBid}" Foreground="Blue" HorizontalAlignment="Right" Grid.Row="1" Grid.Column="4" Margin="5,0,15,5" FontSize="18"/> <!-- PUT ASK --> <TextBlock Text="Ask" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="2" Grid.Column="4" FontSize="18" Margin="5,0,15,0"/> <TextBlock x:Name="PutAskTextBlock" Text="{Binding PutAsk}" Foreground="Blue" HorizontalAlignment="Right" Grid.Row="3" Grid.Column="4" Margin="5,0,15,5" FontSize="18"/> <!-- BOTTOM LINE SEPARATOR--> <Rectangle Fill="Black" Height="1" Grid.ColumnSpan="5" VerticalAlignment="Bottom" Grid.Row="3"/> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> <!--<Grid> <Grid.RowDefinitions> <RowDefinition/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <ProgressRing x:Name="CustomProgressRing" Height="40" Width="40" IsActive="true" Grid.Column="0" Margin="20" Foreground="White"/> <TextBlock x:Name="CustomTextBlock" Height="auto" Width="auto" FontSize="25" Grid.Column="1" Margin="20"/> <Border BorderBrush="#FFFFFF" BorderThickness="1" Grid.ColumnSpan="2"/> </Grid>--> </ScrollViewer> </DataTemplate> </FlipView.ItemTemplate>

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
    Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates, where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java.

    Partial operators are often more convenient than total methods. In many use cases, if the preconditions aren't met there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is the case of work-stealing queues, where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms.) It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait.

    A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length, allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility.) Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile", and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally, T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue; tryPut() has an analogous construction. Zebra striping: alternating row colors for nice-looking code listings. See also the Google Code "prettify" project: https://code.google.com/p/google-code-prettify/ Prettify is a javascript module that yields the HTML/CSS/JS equivalent of pretty-print.
// PTLQueue :

Put(v) :                         // producer : partial method - waits as necessary
  assert v != null
  assert Mask >= 1 && (Mask & (Mask+1)) == 0   // Document invariants
  // doorway step
  // Obtain a sequence number -- ticket
  // As a practical concern the ticket value is temporally unique
  // The ticket also identifies and selects a slot
  auto tkt = AtomicFetchIncrement (&PutCursor, 1)
  slot * s = &Slots[tkt & Mask]
  // waiting phase :
  // wait for slot's generation to match the tkt value assigned to this put() invocation.
  // The "generation" is implicitly encoded as the upper bits in the cursor
  // above those used to specify the index : tkt div (Mask+1)
  // The generation serves as an epoch number to identify a cohort of threads
  // accessing disjoint slots
  while s->Turn != tkt : Pause
  assert s->MailBox == null
  s->MailBox = v                 // deposit and pass message

Take() :                         // consumer : partial method - waits as necessary
  auto tkt = AtomicFetchIncrement (&TakeCursor, 1)
  slot * s = &Slots[tkt & Mask]
  // 2-stage waiting :
  // First wait for turn for our generation
  // Acquire exclusive "take" access to slot's MailBox field
  // Then wait for the slot to become occupied
  while s->Turn != tkt : Pause
  // Concurrency in this section of code is now reduced to just 1 producer thread
  // vs 1 consumer thread.
  // For a given queue and slot, there will be at most one Take() operation running
  // in this section.
  // Consumer waits for producer to arrive and make slot non-empty
  // Extract message; clear mailbox; advance Turn indicator
  // We have an obvious happens-before relation :
  // Put(m) happens-before corresponding Take() that returns that same "m"
  for
    T v = s->MailBox
    if v != null :
      s->MailBox = null
      ST-ST barrier
      s->Turn = tkt + Mask + 1   // unlock slot to admit next producer and consumer
      return v
    Pause

tryTake() :                      // total method - returns ASAP with failure indication
  for
    auto tkt = TakeCursor
    slot * s = &Slots[tkt & Mask]
    if s->Turn != tkt : return null
    T v = s->MailBox             // presumptive return value
    if v == null : return null
    // ratify tkt and v values and commit by advancing cursor
    if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
    s->MailBox = null
    ST-ST barrier
    s->Turn = tkt + Mask + 1
    return v

The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields, while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment.
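To make the pseudocode above concrete, here is a compact C++11 sketch of the same algorithm -- my own illustration, not code from the post. It assumes a pointer payload (so nullptr plays the role of the illegal "empty" value), uses std::atomic fetch_add for the doorway step, and substitutes std::this_thread::yield() for Pause; the class and member names are mine.

// PTLQueueSketch: bounded-capacity MPMC queue, illustration only.
#include <atomic>
#include <cstddef>
#include <thread>

template <typename T, std::size_t N>              // N must be a power of two
class PTLQueueSketch {
    static_assert(N >= 2 && (N & (N - 1)) == 0, "N must be a power of two");

    struct alignas(128) Slot {                    // padded to reduce false sharing
        std::atomic<std::size_t> Turn{0};
        std::atomic<T*>          MailBox{nullptr};
    };
    Slot slots_[N];
    alignas(128) std::atomic<std::size_t> putCursor_{0};
    alignas(128) std::atomic<std::size_t> takeCursor_{0};

public:
    PTLQueueSketch() {
        for (std::size_t i = 0; i < N; ++i)
            slots_[i].Turn.store(i, std::memory_order_relaxed);   // (Turn=I, MailBox=null)
    }

    void put(T* v) {                              // partial: waits for its slot to free up
        std::size_t tkt = putCursor_.fetch_add(1);        // doorway step: take a ticket
        Slot& s = slots_[tkt & (N - 1)];
        while (s.Turn.load(std::memory_order_acquire) != tkt)
            std::this_thread::yield();                    // stand-in for Pause
        s.MailBox.store(v, std::memory_order_release);    // deposit and pass the message
    }

    T* take() {                                   // partial: waits for turn, then for a value
        std::size_t tkt = takeCursor_.fetch_add(1);
        Slot& s = slots_[tkt & (N - 1)];
        while (s.Turn.load(std::memory_order_acquire) != tkt)
            std::this_thread::yield();
        T* v;
        while ((v = s.MailBox.load(std::memory_order_acquire)) == nullptr)
            std::this_thread::yield();                    // wait for the producer to arrive
        s.MailBox.store(nullptr, std::memory_order_relaxed);
        s.Turn.store(tkt + N, std::memory_order_release); // tkt + Mask + 1: next generation
        return v;
    }

    T* tryTake() {                                // total: returns nullptr immediately on failure
        for (;;) {
            std::size_t tkt = takeCursor_.load(std::memory_order_relaxed);
            Slot& s = slots_[tkt & (N - 1)];
            if (s.Turn.load(std::memory_order_acquire) != tkt) return nullptr;
            T* v = s.MailBox.load(std::memory_order_acquire);
            if (v == nullptr) return nullptr;
            // Ratify tkt and v, and commit by advancing the cursor.
            if (!takeCursor_.compare_exchange_strong(tkt, tkt + 1)) continue;
            s.MailBox.store(nullptr, std::memory_order_relaxed);
            s.Turn.store(tkt + N, std::memory_order_release);
            return v;
        }
    }
};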
PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of a per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry : put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and operations to increment those cursors serve double-duty : slot-selection and ticket assignment for locking the slot's MailBox field. At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional. The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access the TakeCursor and a take() operation modifies the TakeCursor cursor but does not access the PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause) but it's trivial to implement per-slot parking and unparking to deschedule waiting threads. 
It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move in either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper.

There's quite a bit of related literature in this area. I'll call out a few relevant references:

Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point: Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue, or queues effectively equivalent to PTLQueue, have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.
Gottlieb et al: Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
Orozco et al: CB-Queue, in Toward high-throughput algorithms on many-core architectures, which appeared in TACO 2012
Meneghin et al: the BNPVB family, in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture
Dmitry Vyukov: bounded MPMC queue (highly recommended)
Alex Otenko: US8607249 (highly related)
John Mellor-Crummey: Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester
Thomasson: FIFO Distributed Bakery Algorithm (very similar to PTLQueue)
Scott and Scherer: Dual Data Structures

I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory, otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore, let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases.) We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices would map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable.)

As an aside, say we need to busy-wait for some condition as follows: "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into: "if C == 0 : for { Pause; if C != 0 : break; }".
Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use: "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.
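Here is a small, self-contained C++ rendering of that restructured busy-wait, as my own sketch (the names flag, pause_hint, and waitForFlag are mine; the pseudocode above uses C and Pause):

// Branch-restructured spin: one entry branch, a separate loop-exit branch.
#include <atomic>
#include <thread>

static std::atomic<int> flag{0};

// Stand-in for Pause; on x86 this could be _mm_pause() instead.
static inline void pause_hint() { std::this_thread::yield(); }

void waitForFlag()
{
    // Fast path: a single, well-predicted branch when the flag is usually non-zero.
    if (flag.load(std::memory_order_acquire) == 0) {
        // Slow path: spin in a loop whose exit branch is distinct from the entry branch.
        for (;;) {
            pause_hint();
            if (flag.load(std::memory_order_acquire) != 0) break;
        }
    }
}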
Acquire "put" access to slot via PTL-like lock Acquire "take" access to slot via PTL-like lock 2 locks : put and take -- at most one thread can access slot's mailbox Both locks use same "turn" field Like multilane : 2 cursors : put and take slot is simple 1-capacity mailbox instead of queue Borrow per-slot turn/grant from PTL Provides strict FIFO Lock slot : put-vs-put take-vs-take at most one put accesses slot at any one time at most one put accesses take at any one time reduction to 1-vs-1 instead of N-vs-M concurrency Per slot locks for put/take Release put/take by advancing turn * is instrumental in ... * P-V Semaphore vs lock vs K-exclusion * See also : FastQueues-excerpt.java dice-etc/queue-mpmc-bounded-blocking-circular-xadd/ * PTLQueue is the same as PTLQB - identical * Expedient return; ASAP; prompt; immediately * Lamport's Bakery algorithm : doorway step then waiting phase Threads arriving at doorway obtain a unique ticket number Threads enter in ticket order * In the terminology of Reed and Kanodia a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer It can also be thought of as an optimization of Lamport's bakery lock was designed for fault-tolerance rather than performance Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers --

    Read the article

  • Select list auto update on any kind of change?

    - by Tom Irons
    I have a jQuery that when you click on a select option it will show the next one, but you have to click, you cant just use the down arrow or "tab" to the next option. I am wondering what options do I have to make this work? Here is my jQuery: function typefunction() { var itemTypes = jQuery('#type'); var select = this.value; itemTypes.change(function () { if ($(this).val() == '1-Hand') { $('.1-Hand').show(); $('.2-Hand').hide(); $('.off').hide(); $('.Armor').hide(); } else $('.1-Hand').hide(); if ($(this).val() == '2-Hand') { $('.2-Hand').show(); $('.1-Hand').hide(); $('.off').hide(); $('.Armor').hide(); } else $('.2-Hand').hide(); if ($(this).val() == 'Armor') { $('.Armor').show(); $('.2-Hand').hide(); $('.off').hide(); $('.1-Hand').hide(); } else $('.Armor').hide(); if ($(this).val() == 'Off-Hand') { $('.Off').show(); $('.2-Hand').hide(); $('.1-Hand').hide(); $('.Armor').hide(); } else $('.Off').hide(); if ($(this).val() == '1-Hand') { $('.one-hand-dps').show(); $('.item-armor').hide(); $('.two-hand-dps').hide(); } else $('.one-hand-dps').hide(); if ($(this).val() == '2-Hand') { $('.two-hand-dps').show(); $('.one-hand-dps').hide(); $('.item-armor').hide(); } else $('.two-hand-dps').hide(); if ($(this).val() == 'Armor') { $('.item-armor').show(); $('.one-hand-dps').hide(); $('.two-hand-dps').hide(); } else $('.item-armor').hide(); }); } And the HTML: <div class="input-group item"> <span class="input-group-addon">Type</span> <select id="type" name="type" class="form-control" onclick="typefunction(); itemstats(); Armor(); OffHand(); TwoHand();"> <option value="Any Type">Any Type</option> <option value="1-Hand">1-Hand</option> <option value="2-Hand">2-Hand</option> <option value="Armor">Armor</option> <option value="Off-Hand">Off-Hand</option> </select> </div> <div class="input-group item"> <span class="1-Hand input-group-addon" style="display: none;">Sub-Type</span> <select class="1-Hand form-control" name="sub[1]" style="display: none;"> <option value="All 1-Hand Item Types">All 1-Hand Item Types</option> <option>Axe</option> <option>Ceremonial Knife</option> <option>Hand Crossbow</option> <option>Dagger</option> <option>Fist Weapon</option> <option>Mace</option> <option>Mighty Weapon</option> <option>Spear</option> <option>Sword</option> <option>Wand</option> </select> </div> <div class="input-group"> <span class="2-Hand input-group-addon" style="display: none; ">Sub-Type</span> <select class="2-Hand form-control" name="sub[2]" style="display: none;"> <option>All 2-Hand Item Types</option> <option>Two-Handed Axe</option> <option>Bow</option> <option>Diabo</option> <option>Crossbow</option> <option>Two-Handed Mace</option> <option>Two-Handed Mighty Weapon</option> <option>Polearm</option> <option>Staff</option> <option>Two-Handed Sword</option> </select> </div> <div class="input-group"> <span class="Armor input-group-addon" style="display: none;">Sub-Type</span> <select class="Armor form-control" name="sub[3]" style="display:none;"> <option>All Armor Item Types</option> <option>Amulet</option> <option>Belt</option> <option>Boots</option> <option>Bracers</option> <option>Chest Armor</option> <option>Cloak</option> <option>Gloves</option> <option>Helm</option> <option>Pants</option> <option>Mighty Belt</option> <option>Ring</option> <option>Shoulders</option> <option>Spirit Stone</option> <option>Voodoo Mask</option> <option>Wizard Hat</option> </select> </div> <div class="input-group"> <span class="Off input-group-addon" style="display: none;">Sub-Type</span> <select class="Off 
form-control" name="sub[4]" style="display:none;"> <option>All Off-Hand Item Types</option> <option>Mojo</option> <option>Source</option> <option>Quiver</option> <option>Shield</option> </select> </div>

    Read the article

  • How to put the result at the end using jQuery

    - by Martty Rox
    my code is : html with the js in section please ch3eck it out i need to prodeuce the result of the form of five questions and display it in the last section of the page where am i going wrong am using innerhtml js thing to produce the result in the last section <div id="section1"> <script type="text/javascript"> function changeText2(){ alert("working"); var count1=0; var a=document.forms["myForm"]["drop1"].value; var b=document.forms["myForm"]["drop2"].value; alert(document.forms["myForm"]["drop2"].value); var c=document.forms["myForm"]["drop3"].value; var d=document.forms["myForm"]["drop4"].value; var e=document.forms["myForm"]["drop5"].value; var f=document.forms["myForm"]["drop6"].value; if(a===2) { count1++; alert(count1); } else { alert("lit"); } if(b===2) { count1++; } else { alert("lit"); } if(c===2) { count1++; } else { alert("lit"); } if(d===2) { count1++; } else { alert("lit"); } if(e===2) { count1++; } else alert("lit"); } alert(count1); document.getElementById('boldStuff2').innerHTML = count1; </script> <form name="myForm"> <p>1)&#x00A0;&#x00A0;Who won the 1993 &#x201C;King of the Ring&#x201D;?</p> <div> <select id="f1" name="drop1"> <option value="0" selected="selected">-- Select -- </option> <option value="1">Owen Hart </option> <option value="2">Bret Hart </option> <option value="3">Edge </option> <option value="4">Mabel </option> </select> </div> <!--que1--> <p>2)&#x00A0;&#x00A0;What NHL goaltender has the most career wins?</p> <div> <select id="f2" name="drop2"> <option value="0" selected="selected">-- Select -- </option> <option value="1">Grant Fuhr </option> <option value="2">Patrick Roy </option> <option value="3">Chris Osgood </option> <option value="4">Mike Vernon</option> </select> </div> <!--que2--> <p>3)&#x00A0;&#x00A0;What Major League Baseball player holds the record for all-time career high batting average?</p> <div> <select id="f3" name="drop3"> <option value="0" selected="selected">-- Select -- </option> <option value="1">Ty Cobb </option> <option value="2">Larry Walker </option> <option value="3">Jeff Bagwell </option> <option value="4">Frank Thomas</option> </select> </div> <!--que3--> <p>4)&#x00A0;&#x00A0;Who among the following is NOT associated with billiards in India?</p> <div> <select id="f4" name="drop4" > <option value="0" selected="selected">-- Select -- </option> <option value="1">Subash Agrawal </option> <option value="2">Ashok Shandilya </option> <option value="3">Manoj Kothari </option> <option value="4">Mihir Sen</option> </select> </div> <!--que4--> <p>5)&#x00A0;&#x00A0;Which cricketer died on the field in Bangladesh while playing for Abahani Club?</p> <div> <select id="f5" name="drop5" > <option value="0" selected="selected">-- Select -- </option> <option value="1">Subhash Gupte</option> <option value="2">M.L.Jaisimha</option> <option value="3">Lala Amarnath</option> <option value="4">Raman Lamba</option> </select> </div> <!--que5--> <a href="#services" class="page_nav_btn next"><input type='button' onclick='changeText2()' value='NEXT'/></a> </form> </div> <div id="section2"> </div> ... <div id="results"> <b id='boldStuff2'>fff ggg</b> </div> need to display the results of each section at the last div as shown in script... js for first section not working plz some help me where am i going wrong....

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
    Abstract

Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems based on Reference Architecture goals, principles, and standards, and it helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes a generic reference architecture for telecom enterprises and moves on to explain how to achieve an Enterprise Reference Architecture by using SOA.

Introduction

A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It serves as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparison for various departments/business entities of a company, or for the various companies of an enterprise. It provides multiple views for multiple stakeholders.

Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.

Purpose of Reference Architecture

In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel, as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.
Reference architecture provides missing architectural information that can be provided in advance to project team members to enable consistent architectural best practices.   Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:   ·       Reference architecture is more of a communication channel to an enterprise ·       Helps the business owners to accommodate to their strategies, vision, objectives, and principles. ·       Evaluates the IT systems based on Reference Architecture Principles ·       Reduces IT spending through increasing functionality, availability, scalability, etc ·       A Real-time Integration Model helps to reduce the latency of the data updates Is used to define a single source of Information ·       Provides a clear view on how to manage information and security ·       Defines the policy around the data ownership, product boundaries, etc. ·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets ·       Has a shorter implementation time and cost   Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).     Common pitfalls for Telecom Service Providers   Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations – Some of these indicate a lack of maturity of the telecom service provider:   ·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs. ·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications. ·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality. ·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications. ·       Application boundaries are not clear, and functionality that is not in the initial scope of that application gets pushed onto it. This results in the development of the multiple, small applications without proper boundaries. ·       Usage of Legacy OSS/BSS systems, poor Integration across Multiple COTS Products and Internal Systems. Most of the Integrations are developed on ad-hoc basis and Point-to-Point Integration. ·       Redundancy of the business functions in different applications • Fragmented data across the different applications and no integrated view of the strategic data • Lot of performance Issues due to the usage of the complex integration across OSS and BSS systems   However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum. 
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.

"The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail." – Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

·       Transformation of business structures to align with customer requirements
·       Adoption of more Internet-like technical architectures. The Web 2.0 concept is increasingly being used.
·       Virtualization of the traditional operations support system (OSS)
·       Adoption of SOA to support development of IP-based services
·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are Reference Architecture Goals, Principles, and Enterprise Vision and Telecom Transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today's telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems and implications on the service provider’s organizational requirements and structure.   Telecom reference architectures are today expected to:   ·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, back end systems ·       Be able to provision multiple services and service bundles • Deliver converged voice, video and data services ·       Leverage the existing Network Infrastructure ·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. ·       Support charging of advanced data services such as VoIP, On-Demand, Services (e.g.  Video), IMS/SIP Services, Mobile Money, Content Services and IPTV. ·       Help in faster deployment of new services • Serve as an effective platform for collaboration between network IT and business organizations ·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP) ·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management ·       Lower operating costs to drive profitability   Enterprise Reference Architecture   The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for the Telecom Application Usage and how to achieve the Enterprise Level Reference Architecture using SOA.   • Telecom Reference Architecture • Enterprise SOA based Reference Architecture   Telecom Reference Architecture   Tele Management Forum’s New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:   ·       The enhanced Telecom Operations Map (eTOM) is a business process framework. ·       The Shared Information Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization. ·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM. ·       The Technology Neutral Architecture (TNA) is an integrated framework. TNA is an architecture that is sustainable through technology changes.   NGOSS Architecture Standards are:   ·       Centralized data ·       Loosely coupled distributed systems ·       Application components/re-use  ·       A technology-neutral system framework with technology specific implementations ·       Interoperability to service provider data/processes ·       Allows more re-use of business components across multiple business scenarios ·       Workflow automation   The traditional operator systems architecture consists of four layers,   ·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information. 
·       Operations Support System (OSS) layer, built around product, service, and resource inventories.
·       Networks layer – consists of network elements and 3rd-party systems.
·       Integration layer – to maximize application communication and overall solution flexibility.

Reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any Telecom Service Provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering and service activation, customer care and support, proactive campaigns, cross-sell/up-sell, and retention/loyalty.

CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

·       Manage the end-to-end lifecycle of a customer request for products.
·       Create and manage customer profiles.
·       Manage all interactions with customers – inquiries, requests, and responses.
·       Provide updates to Billing and other southbound systems on customer/account-related changes such as customer/account creation, deletion, modification, bill requests, final bills, duplicate bills, and credit limits, through Middleware.
·       Work with the Order Management System and the Product and Service Management components within CRM.
·       Manage customer preferences – involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self-service, and field service, as well as any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. Customer requests are sent via fulfillment/provisioning to the billing system for order processing.

2.
Billing and Revenue Management   Billing and Revenue Management handles the collection of appropriate usage records and production of timely and accurate bills – for providing pre-bill usage information and billing to customers; for processing their payments; and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.   The key functionalities provided by these applications are   ·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers. ·       To manage customers’ billing accounts, process their payments, perform payment collections, and monitor the status of the account balance. ·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints. ·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing. ·       Support revenue sharing; split charging where usage is guided to an account different from the service consumer. ·       Support prepaid and post-paid rating. ·       Send notification on approach / exceeding the usage thresholds as enforced by the subscribed offer, and / or as setup by the customer. ·       Support prepaid, post paid, and hybrid (where some services are prepaid and the rest of the services post paid) customers and conversion from post paid to prepaid, and vice versa. ·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, account receivable, GL Interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc. ·       Initiate direct debit to collect payment against an invoice outstanding. ·       Send notification to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceed, etc.   Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.   3. Mediation   Mediation systems transform/translate the Raw or Native Usage Data Records into a general format that is acceptable to billing for their rating purposes.   The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution.   ·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocol and interfaces. ·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format. ·       Validate Usage Data Records from each source. ·       Segregates Usage Data Records coming from each source to multiple, based on the segregation requirement of end Application. ·       Aggregates Usage Data Records based on the aggregation rule if any from different sources. ·       Consolidates multiple Usage Data Records from each source. ·       Delivers formatted Usage Data Records to different end application like Billing, Interconnect, Fraud Management, etc. 
·       Generates audit trail for incoming Usage Data Records and keeps track of all the Usage Data Records at various stages of mediation process. ·       Checks duplicate Usage Data Records across files for a given time window.   4. Fulfillment   This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order, and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer update on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.   The key functionalities provided by these applications are   ·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders; ·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable; ·       Checking the credit worthiness of customers as part of the customer order process; ·       Testing the completed offering to ensure it is working correctly; ·       Updating of the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled; ·       Assigning and tracking customer provisioning activities; ·       Managing customer provisioning jeopardy conditions; and ·       Reporting progress on customer orders and other processes to customer.   These applications typically get orders from CRM systems. They interact with network elements and billing systems for fulfillment of orders.   5. Enterprise Management   This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that   ·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;   ·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;   ·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.     (i) Enterprise Risk Management:   Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts. 
5. Enterprise Management

This process area includes those processes that manage enterprise-wide activities and needs, or that have application within the enterprise as a whole. They encompass all business management processes that

·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;

·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;

·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.

(i) Enterprise Risk Management:

Enterprise Risk Management focuses on ensuring that risks and threats to the enterprise's value and/or reputation are identified, and that appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission-critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts.

Two key areas covered in Risk Management by telecom operators are:

·       Revenue Assurance: The revenue assurance system is responsible for identifying revenue loss scenarios across components/systems and helps rectify the underlying problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution.
o   Identify usage information dropped when networks are being upgraded.
o   Verify interconnect bills.
o   Identify where services are routinely provisioned but never billed.
o   Identify poor sales policies that are intensifying collections problems.
o   Find leakage where usage is sent to an error bucket and never billed.
o   Find leakage where field service, CRM, and network build-out are not optimized.

·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns, and to report suspicious activity that might suggest fraudulent use of resources, resulting in revenue losses to the operator.

The key roles and responsibilities of the system component are as follows:

o   The fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is determined and set automatically by the system.
o   Fraud management will be able to detect unauthorized access to services for certain subscribers, for example services enabled for them by employees without authorization. The component will alert the operator the first time such illegal or unbilled calls occur.
o   The solution will include an alarm management system that delivers alarms to the operator/provider whenever it detects fraud, thus minimizing fraud by catching it the first time it occurs.
o   The fraud management system will be capable of interfacing with switches, mediation systems, and billing systems.
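To illustrate the threshold-based monitoring described above, here is a deliberately simplified Java sketch of a per-subscriber usage check; the class, method, and the fixed threshold passed to the constructor are assumptions made for illustration (a real system would derive thresholds per subscriber automatically, as noted above, and raise an alarm rather than return a flag).

// Illustrative only: a simplified per-subscriber usage threshold check of the
// kind a fraud management component might apply; names and thresholds are hypothetical.
import java.util.HashMap;
import java.util.Map;

final class UsageThresholdMonitor {
    private final Map<String, Integer> callSecondsBySubscriber = new HashMap<>();
    private final int thresholdSeconds;

    UsageThresholdMonitor(int thresholdSeconds) {
        this.thresholdSeconds = thresholdSeconds;
    }

    // Accumulate usage and return true when the subscriber crosses the threshold,
    // which would trigger an alarm to the operator in a real deployment.
    boolean recordCall(String subscriberId, int durationSeconds) {
        int total = callSecondsBySubscriber.merge(subscriberId, durationSeconds, Integer::sum);
        return total > thresholdSeconds;
    }
}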
(ii) Knowledge Management

This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.

Key responsibilities of knowledge base management are to

·       Maintain the knowledge base – creation and updating of the knowledge base on an ongoing basis.
·       Search the knowledge base – search by keyword or by browsing categories.
·       Maintain metadata – management of metadata on the knowledge base to ensure effective management and search.
·       Run the report generator.
·       Provide content – add content to the knowledge base, e.g., user guides, operational manuals, etc.

(iii) Document Management

It focuses on maintaining a repository of all electronic documents, or images of paper documents, relevant to the enterprise.

(iv) Data Management

It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.

Key responsibilities of Data Management are

·       Using ETL, extract data from CRM, Billing, web content, ERP, campaign management, financial, and network operations systems – including asset management information, customer contact data, customer measures, benchmarks, and process data (e.g., process inputs, outputs, and measures) – into the Enterprise Data Warehouse.
·       Manage data traceability to source, data-related business rules and decisions, data quality, data cleansing, data reconciliation, and competitor data – providing storage for all enterprise data (customer profiles, products, offers, revenues, etc.).
·       Receive updates through night-time replication or a physical backup process at regular intervals.
·       Provide data access to business intelligence and other systems for analysis, report generation, and use.

(v) Business Intelligence

It uses the enterprise data to provide the various analyses and reports, covering customer retention, acquisition of new customers in response to offers, and SLAs. It also generates the right, optimized plans and bolt-ons for customers.

The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the Enterprise Level:

·       It will perform pattern analysis and report problems.
·       It will perform data analysis – statistical analysis, data profiling, affinity analysis of data, customer-segment-wise usage patterns on offers, products, and services, and revenue generation against services and customer segments.
·       It will perform performance (business, system, and forecast) analysis, churn propensity, response time, and SLA analysis.
·       It will support online and offline analysis, with report drill-down capability.
·       It will collect, store, and report various SLA data.
·       It will provide the necessary intelligence for marketing and for working on campaigns, etc., with cost-benefit analysis and predictions.
·       It will advise on customer promotions with additional services, based on the loyalty and credit history of the customer.
·       It will interface with the Enterprise Data Management system for the data needed to run reports and analysis tasks, and with campaign schedules, based on historical success evidence.

(vi) Stakeholder and External Relations Management

It manages the enterprise's relationships with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, the local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.

(vii) Enterprise Resource Planning

It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and to manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform, enterprise-wide system environment.

The key roles and responsibilities for the Enterprise System are given below:

·       It will handle responsibilities such as core accounting, financial, and management reporting.
·       It will interface with CRM to capture customer accounts and details.
·       It will interface with billing to capture billing revenue and other financial data.
·       It will be responsible for executing the dunning process; billing will send the required feed to ERP for execution of dunning.
·       It will interface with CRM and Billing through batch interfaces.

Enterprise management systems act as horizontals in the enterprise and typically interact with all major telecom systems. For example, an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchanges.
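As a small illustration of one such batch hand-off – billing sending a dunning feed to ERP, as mentioned above – the sketch below shows one possible shape for a feed entry; the record fields and the validation rule are assumptions for illustration, not a defined interface format.

// Illustrative only: a hypothetical shape for the dunning feed that billing
// might pass to ERP over a batch interface; the fields are assumptions, not a real format.
import java.math.BigDecimal;
import java.time.LocalDate;

record DunningFeedEntry(String accountId,
                        String invoiceId,
                        BigDecimal amountOutstanding,
                        LocalDate dueDate,
                        int daysOverdue) { }

final class DunningFeedValidator {
    // ERP-side sanity check before the entry enters the dunning workflow.
    boolean isActionable(DunningFeedEntry entry) {
        return entry.amountOutstanding().signum() > 0 && entry.daysOverdue() > 0;
    }
}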
6. External Interfaces/Touch Points

The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a Service Provider to other parties can be achieved by a variety of mechanisms, including:

·       Exchange of emails or faxes
·       Call Centers
·       Web Portals
·       Business-to-Business (B2B) automated transactions

These applications provide an Internet-technology-driven interface that lets external parties undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points.

Typical characteristics of these touch points are

·       Pre-integrated self-service systems, including a stand-alone web framework or an integration front end with a portal engine
·       A self-service layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment
·       Portlet-driven connectivity exposing data and service interoperability through a portal engine or web application

These touch points mostly interact with the CRM systems for requests, inquiries, and responses.

7. Middleware

This component is primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution.

·       Act as an integration framework, covering interfaces in both directions.
·       Provide a web service framework with a service registry.
·       Support an SOA framework with an SOA service registry.
·       Handle data transformation, translation, and mapping of data points on each interface from/to Middleware and the other components.
·       Receive data from the caller and activate and/or forward the data to the recipient system in XML format.
·       Use standard XML for data exchange.
·       Provide the response back to the service/call initiator.
·       Provide tracking until the response is complete.
·       Keep a store of transitional data against each call/transaction.
·       Interface through Middleware to obtain any information that is possible and allowed from the existing systems to the enterprise systems, e.g., customer profile and customer history.
·       Provide the data in a common, unified format to SOA calls across systems, and follow the Enterprise Architecture directives.
·       Provide an audit trail for all transactions handled by the component.
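To illustrate the XML-based exchange and call tracking described above, here is a minimal Java sketch of wrapping a payload in a common envelope with a correlation identifier; the envelope structure and element names are assumptions for illustration, not a defined enterprise schema.

// Illustrative only: a hypothetical common XML envelope a middleware layer might
// wrap around payloads it routes between systems; the element names are assumptions.
import java.time.Instant;
import java.util.UUID;

final class MessageEnvelope {

    // Wrap a payload with routing and tracking metadata so the middleware can
    // correlate the eventual response with the original call.
    static String wrap(String sourceSystem, String targetSystem, String payloadXml) {
        String transactionId = UUID.randomUUID().toString();
        return """
               <Envelope>
                 <Header>
                   <TransactionId>%s</TransactionId>
                   <Source>%s</Source>
                   <Target>%s</Target>
                   <Timestamp>%s</Timestamp>
                 </Header>
                 <Body>%s</Body>
               </Envelope>
               """.formatted(transactionId, sourceSystem, targetSystem, Instant.now(), payloadXml);
    }
}

In practice the transaction identifier would also be persisted with the transitional data kept for each call, so that tracking and the audit trail mentioned above can be produced.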
8. Network Elements

The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes features, functions, and capabilities that are provided by means of such a facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.

Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.

Network elements are invoked whenever subscribers use their telecom devices. These elements generate usage data and pass it on to downstream systems, like the mediation and billing systems, for rating and billing. They also integrate with provisioning systems for order/service fulfillment.

9. 3rd Party Applications

3rd party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the government.

Depending on applicability and the type of functionality provided, 3rd party applications are integrated with different telecom systems such as CRM, provisioning, and billing.

10. Service Delivery Platform

A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concept of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.

SOA Reference Architecture

The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It is about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.

In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization's business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:

·       The business to specify processes as orchestrations of reusable services
·       Technology-agnostic business design, with technology hidden behind service interfaces
·       A contract-like interaction between business and IT, based on service SLAs
·       Accountability and governance, better aligned to business services
·       Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change
·       Reduced pressure to replace legacy applications, and an extended lifetime for them, through encapsulation in services
·       A Cloud Computing paradigm, using web services technologies, that makes service outsourcing possible on an on-demand, utility-like, pay-per-usage basis

The following section presents the Reference Architecture as a logical view of the Telecom Solution. New custom-built applications need to align with this logical architecture in the long run to achieve EA benefits.

Packaged applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and interact with other applications as service consumers.

COTS applications need to expose services through wrappers, such as adapters, to utilize existing resources and at the same time achieve Enterprise Architecture goals and objectives.
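As a small illustration of the wrapper/adapter approach just described, the sketch below exposes a hypothetical legacy billing lookup behind a reusable service interface; the interface, class names, and legacy call are assumptions, not any specific vendor's API.

// Illustrative only: a hypothetical adapter exposing a legacy/COTS billing
// function as a reusable service interface; the legacy API and names are assumptions.
import java.math.BigDecimal;

// Service contract that SOA consumers would depend on.
interface AccountBalanceService {
    BigDecimal currentBalance(String accountId);
}

// Stand-in for a proprietary COTS client that cannot be changed.
final class LegacyBillingClient {
    double fetchBalanceAmount(String legacyAccountKey) {
        return 42.50; // placeholder value for illustration
    }
}

// Adapter (wrapper) translating between the service contract and the legacy API.
final class LegacyBillingAdapter implements AccountBalanceService {
    private final LegacyBillingClient legacy = new LegacyBillingClient();

    @Override
    public BigDecimal currentBalance(String accountId) {
        // A real wrapper would also map identifiers, handle faults,
        // and translate data formats as directed by the integration layer.
        return BigDecimal.valueOf(legacy.fetchBalanceAmount(accountId));
    }
}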
The following are the various layers for an enterprise-level deployment of SOA. This diagram captures the abstract view of the Enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers; however, there is no strict rule that top layers should not directly communicate with bottom layers.

The diagram below represents the important logical pieces that would result from the overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1.          Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing, Revenue Management, and Fulfillment, and the Enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.

ERP holds the data for Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc.

CRM holds the data related to Order, Sales and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.

Content Management handles Enterprise Search and Query. The Billing application consists of the following components:

·       Collections Management, Customer Billing Management, Invoicing, Real-Time Rating, Discounting, and Applying of Charges

Enterprise databases will hold both the application and service data, whether structured or unstructured.

MDM – Master data consists mainly of Customer, Order, Product, and Service data.

2.          Enterprise Component Layer:

This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies, such as application servers, to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction so that the complex access logic is hidden from the other service layers. It is a basic service layer, which exposes application functionalities and data as reusable services. The three types of Application Access Services are:

·       Application Access Service: This service layer exposes application-level functionalities as reusable services for BSS-to-BSS and BSS-to-OSS integration. It is enabled using disparate technologies such as web services, integration servers, and adapters.

·       Data Access Service: This service layer exposes application data as reusable reference data services. This is done via direct interaction with application data, and it provides federated queries.

·       Network Access Service: This service layer exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.
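As an illustration of the Data Access Service idea above, the sketch below federates two stubbed data views into a single reusable customer-profile response; the interface, class names, and fields are hypothetical, and the lookups are hard-coded only to keep the sketch self-contained.

// Illustrative only: a hypothetical data access service that hides the underlying
// applications behind a reusable, federated customer-profile query; all names are assumptions.
import java.util.Map;

// Reusable contract exposed to other service layers.
interface CustomerProfileService {
    Map<String, String> profile(String customerId);
}

final class FederatedCustomerProfileService implements CustomerProfileService {

    // In a real deployment these lookups would go to the CRM and Billing data stores;
    // here they are stubbed so the example runs on its own.
    @Override
    public Map<String, String> profile(String customerId) {
        Map<String, String> crmView = Map.of("name", "A. Subscriber", "segment", "Consumer");
        Map<String, String> billingView = Map.of("billCycle", "Monthly", "balance", "0.00");

        // Federate the two views into a single reference-data response.
        Map<String, String> merged = new java.util.HashMap<>(crmView);
        merged.putAll(billingView);
        merged.put("customerId", customerId);
        return merged;
    }
}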
Common Services encompass services for managing structured, semi-structured, and unstructured data, such as information services, portal services, interaction services, infrastructure services, security services, etc.

3.          Integration Layer:

This consists of service infrastructure components like the service bus, a service gateway for partner integration, the service registry, the service repository, and the BPEL processor. The service bus carries the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all the messaging qualities of service, such as reliability, scalability, and availability. The service registry holds all service contracts (WSDL) and helps developers locate or discover services at design time or runtime.

·       The BPEL processor is useful for orchestrating services to compose a complex business scenario or process.
·       Workflow and business rules management are also required to support manual triggering of certain activities within a business process, based on the rules set up and on the state machine information.

The application, data, and service mediation layers typically form the overall composite application development framework, or SOA Framework.

4.          Business Process Layer: This is typically the intermediate services layer and represents shared Business Process Services. At the Enterprise Level, these services come from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.

5.          Access Layer: This layer consists of the portals for the Enterprise and provides a single view of enterprise information management and dashboard services.

6.          Channel Layer: This consists of the various devices and applications that form part of the extended enterprise, and the browsers through which users access the applications.

7.          Client Layer: This designates the different types of users accessing the enterprise applications. The type of user is typically an important factor in determining the level of access to applications.

8.          Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring involve all aspects of SOA – services, SLAs, and other QoS lifecycle processes – for both applications and services, surrounding SOA governance.

9.          EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices:

EA Governance is important in terms of providing the overall direction to SOA implementation within the enterprise.
This involves board-level involvement, in addition to that of business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through well-tuned IT processes in accordance with COBIT (Control Objectives for Information and related Technology).

Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up new roles suited to the SOA journey.

Conclusions

Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference architectures provide best practices and approaches in a vendor-independent way of dealing with technology and standards. They model the abstract architectural elements for an enterprise independently of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way toward meeting these challenges. The use of an SOA-based architecture for communication with each of the external systems, like Billing and CRM, in the OSS/BSS landscape makes the architecture loosely coupled and more flexible: any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easy, with better performance in terms of lead time, quality, and cost. And since the architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.
