Search Results

Search found 11042 results on 442 pages for 'side'.


  • CentOS: AJAX/jQuery not working

    - by Australiya
    In a nutshell, I have an unmanaged VPS. At one time it had Ubuntu 10.10 server on it; then I reinstalled it with CentOS 6 and updated it to CentOS 6.2. Now the problem is that the AJAX/jQuery shoutbox has ceased working (I assume it uses one of the two to inject itself into a div and then refresh when new messages are posted; I'm not sure, I didn't write these), and the plug-board script now shows me a lime green blank page. No changes to the source code have been made, and the files are in the locations they expect to be in. I have Apache2, MySQL 5, PHP 5, and I did install the php-xml libraries. What am I missing? It's gotta be server side, because the scripts themselves are fine; if I move them to a different server they work just fine. I'm not getting any errors related to this in the error_log file. Thanks in advance! Edit: If you want, you can look at the plugboard at kazeshini.net/plugboard and there's an installation of the chatbox at silverlotus.kazeshini.net/yshout/example. I know nothing about scripts and debugging, so better that someone who knows what they're looking for takes a look.
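
    One difference between Ubuntu and CentOS that often bites scripts like these is SELinux, which CentOS ships enforcing by default. The commands below are a diagnostic sketch rather than anything from the post itself, assuming the stock CentOS log locations:

    ```sh
    # Check whether SELinux is enforcing (CentOS defaults to enforcing;
    # Ubuntu does not ship with it enabled)
    getenforce

    # Temporarily switch to permissive mode, then retest the shoutbox;
    # if it starts working, the culprit is SELinux policy, not the scripts
    setenforce 0

    # Watch the Apache and audit logs while reproducing the problem
    tail -f /var/log/httpd/error_log /var/log/audit/audit.log
    ```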

    Read the article

  • Where does Firefox store certificates and how to delete one?

    - by majid4466
    Hi all, the root cause of my problem is not known to me; whatever it is, I experience frequent DNS failures. When it happens I cannot browse to my Gmail inbox. I use two DNS settings: one is the public DNS server offered by OpenDNS, and the other is Google's free DNS server. When this happens I switch from the active setting to the other one and the problem goes away. But there is a side effect to this: when Gmail fails to load, after switching the DNS I receive an error saying the security certificate the site uses is only valid for OpenDNS. This is my wild guess at what is going on: 1. OpenDNS fails to resolve mail.google.com to its IP. 2. My ISP sends me a page showing search results for 'mail.google.com'. 3. Since I have received some sort of page instead of a timeout, the browser mistakenly binds the certificate it has cached for 'mail.google.com' to the new domain. This search page is not served by https, so no exception is thrown by the wrong binding. 4. After switching the DNS, the domain is correctly resolved to the Gmail server's IP, and since this is on https the handshake is triggered. 5. Now, because of the wrong binding, which passed quietly as no handshake was involved, I receive the error saying the certificate used by 'mail.google.com' is only good for OpenDNS. I don't know much about DNS, and less about https and the process of establishing a secure connection. How correct is my explanation? How can I delete the wrong association and/or the certificate? Thanks for listening. P.S. The problem goes away by itself, but sometimes it takes several hours before Gmail works again.
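
    On the title question: Firefox of this era keeps its certificate store in cert8.db (and per-site exceptions in cert_override.txt) inside the profile directory, and individual entries can be removed via Preferences → Advanced → Encryption → View Certificates. To test the hijacking guess, a quick way to see which certificate a name actually serves is openssl; a minimal sketch, assuming openssl is installed:

    ```sh
    # Print the subject and issuer of whatever certificate is served for
    # mail.google.com; if the lookup is being intercepted, the subject
    # will name the interceptor (e.g. OpenDNS) rather than Google
    openssl s_client -connect mail.google.com:443 -servername mail.google.com </dev/null 2>/dev/null |
      openssl x509 -noout -subject -issuer
    ```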

    Read the article

  • Set up a layer 2 VLAN between 2 data centres

    - by user41679
    Hello, our data centre provider operates 2 sites, and we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 VLAN between the 2 sites over a 20Gbit connection, and that they'd just give me an ethernet cable at each end to connect the locations. At the current site we have Cisco 2960 48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls through which we connect to our internet provider. My question is: what would I need to do to connect the 2 sites? Could I just plug the ethernet cables they provide into the Cisco switches, and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect both through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the 2 sites, and each site would have its own internal DNS name like dc1.xx.com. Sorry if I'm being vague or haven't included enough information; I've a fairly good knowledge of hardware, but we're down a netops guy at the moment and I'd like to get both sites online ASAP! Thanks in advance!
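
    Since the provider hands over a plain layer 2 path, the usual approach is to treat the inter-site port like any other switch-to-switch link: an access port in the VLAN that carries the 192.168.x.x subnet, or a trunk if several VLANs must cross. A rough sketch for the 2960s follows; the interface number and VLAN 10 are placeholders, not values from the post.

    ```
    ! Hypothetical config for the port facing the provider's cable,
    ! carrying a single VLAN between the sites
    interface GigabitEthernet0/48
     description Inter-site link (provider layer 2 VLAN)
     switchport mode access
     switchport access vlan 10

    ! To carry several VLANs across the same link instead, trunk it:
    ! interface GigabitEthernet0/48
    !  switchport mode trunk
    !  switchport trunk allowed vlan 10,20
    ```

    With both sites in one broadcast domain the existing firewalls can stay where they are; giving the second site its own subnet and routing between sites through the firewalls is the alternative if broadcast traffic over the link becomes a concern.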

    Read the article

  • Deploying and publishing my first ASP.NET MVC 3 web application

    - by john G
    I want to deploy and publish my first ASP.NET MVC 3 web application at my client's site (the client is a small office with 2-4 employees that need to access the application). Currently I have finished developing my web application using the free Microsoft Visual Web Developer 2010 Express and the free SQL Server 2008 R2 Express database. My concern is that the free SQL Express database I am currently using has a limitation of 10 GB, I think, so I want to buy SQL Server for Small Business to remove the database limitations. The questions I need help with are: 1. If I use SQL Server for Small Business, will my web application have other limitations in production that I am unaware of? 2. Is SQL Server for Small Business the right choice, bearing in mind that the system will be used by 2-4 clients only? 3. Approximately how much will the SQL Server database cost in US dollars? 4. Is there any other software I need to buy to be able to deploy and publish the application on an intranet and the internet? Appreciate any help. Best regards

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in the make.conf file so that the hosts only pull down binary packages. All works well from that side. I use eix to check on software versions for upgrades, but have hit a problem where eix will see an available version ahead of what is available on the binhost. Using glibc as an example: ietpl [VE] / # emerge -s glibc Searching... [ Results for search key : glibc ] [ Applications found : 1 ] * sys-libs/glibc Latest version available: 2.14.1-r3 Latest version installed: 2.14.1-r3 Homepage: Description: GNU libc6 (also called glibc2) C library License: LGPL-2 Then eix reports a higher version available: ietpl [VE] / # export LASTVERSION='{last}<version>{}' ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc sys-libs glibc [2.14.1-r3] [2.15-r2] What I'm after is for eix to report the latest version available as 2.14.1-r3, like emerge. I've a feeling this is possible, since without any formatting eix returns Available versions: (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s correctly tagging the latest unmasked binary package with {tbz2}. I would have thought that the binary flag (--binary: Match packages with *.tbz2 files) would do it, but that returns no matches.

    Read the article

  • How to Find Out Who Made an ISO Disk?

    - by Qwertyfshag
    If a file is saved using Microsoft Word or some other type of program, you can right-click on the file to find the properties, which will indicate the computer that created the file. Is there any way to find out who created an ISO disk image on a CD or DVD? I assume that there should be no metadata on the disk, because an ISO disk image should be an exact duplicate of the original. Is my assumption correct? To illustrate with an example, let's say you found a CD at a cafe or something. You decide to look at the CD with your computer. You find out that it is an "Ubuntu live CD" that was obviously created from an ISO file. Is there any way to find out who burned the CD? Or, on the flip side, let's say you were the one that burned the "Ubuntu live CD" and you lost it. Would somebody be able to know that it was you who made the CD? Can they get any info about the maker?
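
    The assumption is mostly right: an ISO 9660 image carries a primary volume descriptor with a handful of text fields (volume ID, plus application, preparer and publisher strings set by the mastering software), but nothing that identifies a person or machine the way office-document properties do. A quick way to dump those fields, as a sketch assuming the isoinfo tool from genisoimage/cdrtools is installed:

    ```sh
    # Dump the primary volume descriptor of a disc in the drive
    # (or of an image file via: isoinfo -d -i path/to/image.iso);
    # the Application, Preparer and Publisher ids are the closest
    # thing to "who made this" that the format records
    isoinfo -d -i /dev/cdrom
    ```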

    Read the article

  • Cannot open /dev/rfcomm1 : Host is down

    - by srj0408
    I am working on a Raspberry Pi and on Bluetooth. I am using an old Raspberry Pi kernel, as the new one has some bugs that were not resolved with respect to the bluez daemon. At present my kernel version is 3.6.11. I am using a USB Bluetooth dongle and my sole purpose is to auto-connect the Bluetooth dongle whenever it is in range. For that I think I have to run a script in the background on the RPi that will keep checking for the presence of the USB Bluetooth dongle. I started from the very scratch. I installed the bluez daemon using apt-get install bluetooth bluez-utils blueman, and then I used hciconfig, which tells me that my Bluetooth USB dongle is working fine. But when I did hcitool scan, it gave me no device in range even though my serial Bluetooth device was on. I wasn't able to find any device in the vicinity. Also, when I unplugged and plugged the USB dongle again, I was able to scan the serial device, but when I repeat the process I find myself back in the earlier condition of not finding any device. I found another useful link, but that needs the address of the Bluetooth device to be connected. I want to automate this using hcitool scan, storing the output to a file and then comparing it with already paired devices and their names. For that I need to figure out why hcitool scan is sometimes working and sometimes not. Can someone help me figure out why this is happening? Is there a problem on the hardware side, i.e. a buggy Bluetooth dongle, or is there some problem in the bluez utils? Edit 1: As of now, hcitool scan is giving me my remote device address, but I am still getting the same issue: Host is down, '/dev/rfcomm1'. I really have no idea what to do next.
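
    For the automation part, a minimal polling sketch along the lines described in the post; the address and RFCOMM channel are placeholders, and hcitool/rfcomm come from the bluez utilities already installed:

    ```sh
    #!/bin/sh
    # Poll until the known device shows up in a scan, then bind rfcomm1.
    # 00:11:22:33:44:55 and channel 1 are placeholders for the real values.
    ADDR="00:11:22:33:44:55"
    while true; do
        if hcitool scan 2>/dev/null | grep -qi "$ADDR"; then
            rfcomm connect /dev/rfcomm1 "$ADDR" 1 && break
        fi
        sleep 10
    done
    ```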

    Read the article

  • Silverlight 4 + WCF RIA - Data Service Design Best Practices

    - by Chadd Nervig
    Hey all. I realize this is a rather long question, but I'd really appreciate any help from anyone experienced with RIA services. Thanks! I'm working on a Silverlight 4 app that views data from the server. I'm relatively inexperienced with RIA Services, so have been working through the tasks of getting the data I need down to the client, but every new piece I add to the puzzle seems to be more and more problematic. I feel like I'm missing some basic concepts here, and it seems like I'm just 'hacking' pieces on, in time-consuming ways, each one breaking the previous ones as I try to add them. I'd love to get the feedback of developers experienced with RIA services, to figure out the intended way to do what I'm trying to do. Let me lay out what I'm trying to do: First, the data. The source of this data is a variety of sources, primarily created by a shared library which reads data from our database, and exposes it as POCOs (Plain Old CLR Objects). I'm creating my own POCOs to represent the different types of data I need to pass between server and client. DataA - This app is for viewing a certain type of data, lets call DataA, in near-realtime. Every 3 minutes, the client should pull data down from the server, of all the new DataA since the last time it requested data. DataB - Users can view the DataA objects in the app, and may select one of them from the list, which displays additional details about that DataA. I'm bringing these extra details down from the server as DataB. DataC - One of the things that DataB contains is a history of a couple important values over time. I'm calling each data point of this history a DataC object, and each DataB object contains many DataCs. The Data Model - On the server side, I have a single DomainService: [EnableClientAccess] public class MyDomainService : DomainService { public IEnumerable<DataA> GetDataA(DateTime? startDate) { /*Pieces together the DataAs that have been created since startDate, and returns them*/ } public DataB GetDataB(int dataAID) { /*Looks up the extended info for that dataAID, constructs a new DataB with that DataA's data, plus the extended info (with multiple DataCs in a List<DataC> property on the DataB), and returns it*/ } //Not exactly sure why these are here, but I think it //wouldn't compile without them for some reason? The data //is entirely read-only, so I don't need to update. public void UpdateDataA(DataA dataA) { throw new NotSupportedException(); } public void UpdateDataB(DataB dataB) { throw new NotSupportedException(); } } The classes for DataA/B/C look like this: [KnownType(typeof(DataB))] public partial class DataA { [Key] [DataMember] public int DataAID { get; set; } [DataMember] public decimal MyDecimalA { get; set; } [DataMember] public string MyStringA { get; set; } [DataMember] public DataTime MyDateTimeA { get; set; } } public partial class DataB : DataA { [Key] [DataMember] public int DataAID { get; set; } [DataMember] public decimal MyDecimalB { get; set; } [DataMember] public string MyStringB { get; set; } [Include] //I don't know which of these, if any, I need? [Composition] [Association("DataAToC","DataAID","DataAID")] public List<DataC> DataCs { get; set; } } public partial class DataC { [Key] [DataMember] public int DataAID { get; set; } [Key] [DataMember] public DateTime Timestamp { get; set; } [DataMember] public decimal MyHistoricDecimal { get; set; } } I guess a big question I have here is... Should I be using Entities instead of POCOs? 
Are my classes constructed correctly to be able to pass the data down correctly? Should I be using Invoke methods instead of Query (Get) methods on the DomainService? On the client side, I'm having a number of issues. Surprisingly, one of my biggest ones has been threading. I didn't expect there to be so many threading issues with MyDomainContext. What I've learned is that you only seem to be able to create MyDomainContextObjects on the UI thread, all of the queries you can make are done asynchronously only, and that if you try to fake doing it synchronously by blocking the calling thread until the LoadOperation finishes, you have to do so on a background thread, since it uses the UI thread to make the query. So here's what I've got so far. The app should display a stream of the DataA objects, spreading each 3min chunk of them over the next 3min (so they end up displayed 3min after the occurred, looking like a continuous stream, but only have to be downloaded in 3min bursts). To do this, the main form initializes, creates a private MyDomainContext, and starts up a background worker, which continuously loops in a while(true). On each loop, it checks if it has any DataAs left over to display. If so, it displays that Data, and Thread.Sleep()s until the next DataA is scheduled to be displayed. If it's out of data, it queries for more, using the following methods: public DataA[] GetDataAs(DateTime? startDate) { _loadOperationGetDataACompletion = new AutoResetEvent(false); LoadOperation<DataA> loadOperationGetDataA = null; loadOperationGetDataA = _context.Load(_context.GetDataAQuery(startDate), System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false); loadOperationGetDataA.Completed += new EventHandler(loadOperationGetDataA_Completed); _loadOperationGetDataACompletion.WaitOne(); List<DataA> dataAs = new List<DataA>(); foreach (var dataA in loadOperationGetDataA.Entities) dataAs.Add(dataA); return dataAs.ToArray(); } private static AutoResetEvent _loadOperationGetDataACompletion; private static void loadOperationGetDataA_Completed(object sender, EventArgs e) { _loadOperationGetDataACompletion.Set(); } Seems kind of clunky trying to force it into being synchronous, but since this already is on a background thread, I think this is OK? So far, everything actually works, as much of a hack as it seems like it may be. It's important to note that if I try to run that code on the UI thread, it locks, because it waits on the WaitOne() forever, locking the thread, so it can't make the Load request to the server. So once the data is displayed, users can click on one as it goes by to fill a details pane with the full DataB data about that object. To do that, I have the the details pane user control subscribing to a selection event I have setup, which gets fired when the selection changes (on the UI thread). I use a similar technique there, to get the DataB object: void SelectionService_SelectedDataAChanged(object sender, EventArgs e) { DataA dataA = /*Get the selected DataA*/; MyDomainContext context = new MyDomainContext(); var loadOperationGetDataB = context.Load(context.GetDataBQuery(dataA.DataAID), System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false); loadOperationGetDataB.Completed += new EventHandler(loadOperationGetDataB_Completed); } private void loadOperationGetDataB_Completed(object sender, EventArgs e) { this.DataContext = ((LoadOperation<DataB>)sender).Entities.SingleOrDefault(); } Again, it seems kinda hacky, but it works... 
except on the DataB that it loads, the DataCs list is empty. I've tried all kinds of things there, and I don't see what I'm doing wrong to allow the DataCs to come down with the DataB. I'm about ready to make a 3rd query for the DataCs, but that's screaming even more hackiness to me. It really feels like I'm fighting against the grain here, like I'm doing this in an entirely unintended way. If anyone could offer any assistance, and point out what I'm doing wrong here, I'd very much appreciate it! Thanks!

    Read the article

  • How to send audio data from Java Applet to Rails controller

    - by cooldude
    Hi, I have to send the audio data in byte array obtain by recording from java applet at the client side to rails server at the controller in order to save. So, what encoding parameters at the applet side be used and in what form the audio data be converted like String or byte array so that rails correctly recieve data and then I can save that data at the rails in the file. As currently the audio file made by rails controller is not playing. It is the following ERROR : LAVF_header: av_open_input_stream() failed while playing with the mplayer. Here is the Java Code: package networksocket; import java.util.logging.Level; import java.util.logging.Logger; import javax.swing.JApplet; import java.net.*; import java.io.*; import java.awt.event.*; import java.awt.*; import java.sql.*; import javax.swing.*; import javax.swing.border.*; import java.awt.*; import java.util.Properties; import javax.swing.plaf.basic.BasicSplitPaneUI.BasicHorizontalLayoutManager; import sun.awt.HorizBagLayout; import sun.awt.VerticalBagLayout; import sun.misc.BASE64Encoder; /** * * @author mukand */ public class Urlconnection extends JApplet implements ActionListener { /** * Initialization method that will be called after the applet is loaded * into the browser. */ public BufferedInputStream in; public BufferedOutputStream out; public String line; public FileOutputStream file; public int bytesread; public int toread=1024; byte b[]= new byte[toread]; public String f="FINISH"; public String match; public File fileopen; public JTextArea jTextArea; public Button refreshButton; public HttpURLConnection urlConn; public URL url; OutputStreamWriter wr; BufferedReader rd; @Override public void init() { // TODO start asynchronous download of heavy resources //textField= new TextField("START"); //getContentPane().add(textField); JPanel p = new JPanel(); jTextArea= new JTextArea(1500,1500); p.setLayout(new GridLayout(1,1, 1,1)); p.add(new JLabel("Server Details")); p.add(jTextArea); Container content = getContentPane(); content.setLayout(new GridBagLayout()); // Used to center the panel content.add(p); jTextArea.setLineWrap(true); refreshButton = new java.awt.Button("Refresh"); refreshButton.reshape(287,49,71,23); refreshButton.setFont(new Font("Dialog", Font.PLAIN, 12)); refreshButton.addActionListener(this); add(refreshButton); Properties properties = System.getProperties(); properties.put("http.proxyHost", "netmon.iitb.ac.in"); properties.put("http.proxyPort", "80"); } @Override public void actionPerformed(ActionEvent e) { try { url = new URL("http://localhost:3000/audio/audiorecieve"); urlConn = (HttpURLConnection)url.openConnection(); //String login = "mukandagarwal:rammstein$"; //String encodedLogin = new BASE64Encoder().encodeBuffer(login.getBytes()); //urlConn.setRequestProperty("Proxy-Authorization",login); urlConn.setRequestMethod("POST"); // urlConn.setRequestProperty("Content-Type", //"application/octet-stream"); //urlConn.setRequestProperty("Content-Type","audio/mpeg");//"application/x-www- form-urlencoded"); //urlConn.setRequestProperty("Content-Type","application/x-www- form-urlencoded"); //urlConn.setRequestProperty("Content-Length", "" + // Integer.toString(urlParameters.getBytes().length)); urlConn.setRequestProperty("Content-Language", "UTF-8"); urlConn.setDoOutput(true); urlConn.setDoInput(true); byte bread[]=new byte[2048]; int iread; char c; String data=URLEncoder.encode("key1", "UTF-8")+ "="; //String data="key1="; FileInputStream fileread= new 
FileInputStream("//home//mukand//Hellion.ogg");//Dogs.mp3");//Desktop//mausam1.mp3"); while((iread=fileread.read(bread))!=-1) { //data+=(new String()); /*for(int i=0;i<iread;i++) { //c=(char)bread[i]; System.out.println(bread[i]); }*/ data+= URLEncoder.encode(new String(bread,iread), "UTF-8");//new String(new String(bread));// // data+=new String(bread,iread); } //urlConn.setRequestProperty("Content-Length",Integer.toString(data.getBytes().length)); System.out.println(data); //data+=URLEncoder.encode("mukand", "UTF-8"); //data += "&" + URLEncoder.encode("key2", "UTF-8") + "=" + URLEncoder.encode("value2", "UTF-8"); //data="key1="; wr = new OutputStreamWriter(urlConn.getOutputStream());//urlConn.getOutputStream(); //if((iread=fileread.read(bread))!=-1) // wr.write(bread,0,iread); wr.write(data); wr.flush(); fileread.close(); jTextArea.append("Send"); // Get the response rd = new BufferedReader(new InputStreamReader(urlConn.getInputStream())); while ((line = rd.readLine()) != null) { jTextArea.append(line); } wr.close(); rd.close(); //jTextArea.append("click"); } catch (MalformedURLException ex) { Logger.getLogger(Urlconnection.class.getName()).log(Level.SEVERE, null, ex); } catch (IOException ex) { Logger.getLogger(Urlconnection.class.getName()).log(Level.SEVERE, null, ex); } } @Override public void start() { } @Override public void stop() { } @Override public void destroy() { } // TODO overwrite start(), stop() and destroy() methods } Here is the Rails controller function for recieving: def audiorecieve puts "///////////////////////////////////////******RECIEVED*******////" puts params[:key1]#+" "+params[:key2] data=params[:key1] #request.env('RAW_POST_DATA') file=File.new("audiodata.ogg", 'w') file.write(data) file.flush file.close puts "////**************DONE***********//////////////////////" end Please reply quickly
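
    One likely reason the saved file will not play: the applet turns the raw Ogg bytes into a String and URL-encodes them, and the controller then writes the result in text mode, so the binary stream is mangled twice. Below is a sketch of posting the bytes untouched (the URL and path are the ones from the post); the matching change on the Rails side would be reading request.raw_post and writing it with File.open(..., 'wb').

    ```java
    import java.io.*;
    import java.net.*;

    // Minimal sketch: POST the file as a raw octet stream so no bytes
    // are altered by string conversion or URL encoding.
    public class RawAudioPost {
        public static void main(String[] args) throws IOException {
            URL url = new URL("http://localhost:3000/audio/audiorecieve");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/octet-stream");

            OutputStream out = conn.getOutputStream();
            FileInputStream in = new FileInputStream("/home/mukand/Hellion.ogg");
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // copy the file byte-for-byte
            }
            in.close();
            out.close();

            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }
    ```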

    Read the article

  • What pseudo-operators exist in Perl 5?

    - by Chas. Owens
    I am currently documenting all of Perl 5's operators (see the perlopref GitHub project) and I have decided to include Perl 5's pseudo-operators as well. To me, a pseudo-operator in Perl is anything that looks like an operator, but is really more than one operator or some other piece of syntax. I have documented the four I am familiar with already:

        ()=   the countof operator
        =()=  the goatse/countof operator
        ~~    the scalar context operator
        }{    the Eskimo-kiss operator

    What other names exist for these pseudo-operators, and do you know of any pseudo-operators I have missed?

    =head1 Pseudo-operators

    There are idioms in Perl 5 that appear to be operators, but are really a combination of several operators or pieces of syntax. These pseudo-operators have the precedence of the constituent parts.

    =head2 ()= X

    =head3 Description

    This pseudo-operator is the list assignment operator (aka the countof operator). It is made up of two items C<()>, and C<=>. In scalar context it returns the number of items in the list X. In list context it returns an empty list. It is useful when you have something that returns a list and you want to know the number of items in that list and don't care about the list's contents. It is needed because the comma operator returns the last item in the sequence rather than the number of items in the sequence when it is placed in scalar context.

    It works because the assignment operator returns the number of items available to be assigned when its left hand side has list context. In the following example there are five values in the list being assigned to the list C<($x, $y, $z)>, so C<$count> is assigned C<5>.

        my $count = my ($x, $y, $z) = qw/a b c d e/;

    The empty list (the C<()> part of the pseudo-operator) triggers this behavior.

    =head3 Example

        sub f { return qw/a b c d e/ }

        my $count = ()= f();              #$count is now 5

        my $string = "cat cat dog cat";
        my $cats = ()= $string =~ /cat/g; #$cats is now 3

        print scalar( ()= f() ), "\n";    #prints "5\n"

    =head3 See also

    L</X = Y> and L</X =()= Y>

    =head2 X =()= Y

    This pseudo-operator is often called the goatse operator for reasons better left unexamined; it is also called the list assignment or countof operator. It is made up of three items C<=>, C<()>, and C<=>. When X is a scalar variable, the number of items in the list Y is returned. If X is an array or a hash it returns an empty list. It is useful when you have something that returns a list and you want to know the number of items in that list and don't care about the list's contents. It is needed because the comma operator returns the last item in the sequence rather than the number of items in the sequence when it is placed in scalar context.

    It works because the assignment operator returns the number of items available to be assigned when its left hand side has list context. In the following example there are five values in the list being assigned to the list C<($x, $y, $z)>, so C<$count> is assigned C<5>.

        my $count = my ($x, $y, $z) = qw/a b c d e/;

    The empty list (the C<()> part of the pseudo-operator) triggers this behavior.

    =head3 Example

        sub f { return qw/a b c d e/ }

        my $count =()= f();              #$count is now 5

        my $string = "cat cat dog cat";
        my $cats =()= $string =~ /cat/g; #$cats is now 3

    =head3 See also

    L</=> and L</()=>

    =head2 ~~X

    =head3 Description

    This pseudo-operator is named the scalar context operator. It is made up of two bitwise negation operators. It provides scalar context to the expression X. It works because the first bitwise negation operator provides scalar context to X and performs a bitwise negation of the result; since the result of two bitwise negations is the original item, the value of the original expression is preserved.

    With the addition of the Smart match operator, this pseudo-operator is even more confusing. The C<scalar> function is much easier to understand and you are encouraged to use it instead.

    =head3 Example

        my @a = qw/a b c d/;

        print ~~@a, "\n"; #prints 4

    =head3 See also

    L</~X>, L</X ~~ Y>, and L<perlfunc/scalar>

    =head2 X }{ Y

    =head3 Description

    This pseudo-operator is called the Eskimo-kiss operator because it looks like two faces touching noses. It is made up of a closing brace and an opening brace. It is used when using C<perl> as a command-line program with the C<-n> or C<-p> options. It has the effect of running X inside of the loop created by C<-n> or C<-p> and running Y at the end of the program. It works because the closing brace closes the loop created by C<-n> or C<-p> and the opening brace creates a new bare block that is closed by the loop's original ending. You can see this behavior by using the L<B::Deparse> module. Here is the command C<perl -ne 'print $_;'> deparsed:

        LINE: while (defined($_ = <ARGV>)) {
            print $_;
        }

    Notice how the original code was wrapped with the C<while> loop. Here is the deparsing of C<perl -ne '$count++ if /foo/; }{ print "$count\n"'>:

        LINE: while (defined($_ = <ARGV>)) {
            ++$count if /foo/;
        }
        {
            print "$count\n";
        }

    Notice how the C<while> loop is closed by the closing brace we added and the opening brace starts a new bare block that is closed by the closing brace that was originally intended to close the C<while> loop.

    =head3 Example

        # count unique lines in the file FOO
        perl -nle '$seen{$_}++ }{ print "$_ => $seen{$_}" for keys %seen' FOO

        # sum all of the lines until the user types control-d
        perl -nle '$sum += $_ }{ print $sum'

    =head3 See also

    L<perlrun> and L<perlsyn>

    =cut

    Read the article

  • Why is this class re-initialized every time?

    - by pinnacler
    I have 4 files and the code 'works' as expected. I try to clean everything up, place code into functions, etc... and everything looks fine... and it doesn't work. Can somebody please explain why MatLab is so quirky... or am I just stupid? Normally, I type terminator = simulation(100,20,0,0,0,1); terminator.animate(); and it should produce a map of trees with the terminator walking around in a forest. Everything rotates to his perspective. When I break it into functions... everything ceases to work. I really only changed a few lines of code, shown in comments. Code that works: classdef simulation properties landmarks robot end methods function obj = simulation(mapSize, trees, x,y,heading,velocity) obj.landmarks = landmarks(mapSize, trees); obj.robot = robot(x,y,heading,velocity); end function animate(obj) %Setup Plots fig=figure; xlabel('meters'), ylabel('meters') set(fig, 'name', 'Phil''s AWESOME 80''s Robot Simulator') xymax = obj.landmarks.mapSize*3; xymin = -(obj.landmarks.mapSize*3); l=scatter([0],[0],'bo'); axis([xymin xymax xymin xymax]); obj.landmarks.apparentPositions %Simulation Loop THIS WAS ORGANIZED for n = 1:720, %Calculate and Set Heading/Location obj.robot.headingChange = navigate(n); %Update Position obj.robot.heading = obj.robot.heading + obj.robot.headingChange; obj.landmarks.heading = obj.robot.heading; y = cosd(obj.robot.heading); x = sind(obj.robot.heading); obj.robot.x = obj.robot.x + (x*obj.robot.velocity); obj.robot.y = obj.robot.y + (y*obj.robot.velocity); obj.landmarks.x = obj.robot.x; obj.landmarks.y = obj.robot.y; %Animate set(l,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2)); rectangle('Position',[-2,-2,4,4]); drawnow end end end end ----------- classdef landmarks properties fixedPositions %# positions in a fixed coordinate system. [ x, y ] mapSize = 10; %Map Size. Value is side of square x=0; y=0; heading=0; headingChange=0; end properties (Dependent) apparentPositions end methods function obj = landmarks(mapSize, numberOfTrees) obj.mapSize = mapSize; obj.fixedPositions = obj.mapSize * rand([numberOfTrees, 2]) .* sign(rand([numberOfTrees, 2]) - 0.5); end function apparent = get.apparentPositions(obj) %-STILL ROTATES AROUND ORIGINAL ORIGIN currentPosition = [obj.x ; obj.y]; apparent = bsxfun(@minus,(obj.fixedPositions)',currentPosition)'; apparent = ([cosd(obj.heading) -sind(obj.heading) ; sind(obj.heading) cosd(obj.heading)] * (apparent)')'; end end end ---------- classdef robot properties x y heading velocity headingChange end methods function obj = robot(x,y,heading,velocity) obj.x = x; obj.y = y; obj.heading = heading; obj.velocity = velocity; end end end ---------- function headingChange = navigate(n) %steeringChange = 5 * rand(1) * sign(rand(1) - 0.5); Most chaotic shit %Draw an S if n <270 headingChange=1; elseif n<540 headingChange=-1; elseif n<720 headingChange=1; else headingChange=1; end end Code that does not work... 
classdef simulation properties landmarks robot end methods function obj = simulation(mapSize, trees, x,y,heading,velocity) obj.landmarks = landmarks(mapSize, trees); obj.robot = robot(x,y,heading,velocity); end function animate(obj) %Setup Plots fig=figure; xlabel('meters'), ylabel('meters') set(fig, 'name', 'Phil''s AWESOME 80''s Robot Simulator') xymax = obj.landmarks.mapSize*3; xymin = -(obj.landmarks.mapSize*3); l=scatter([0],[0],'bo'); axis([xymin xymax xymin xymax]); obj.landmarks.apparentPositions %Simulation Loop for n = 1:720, %Calculate and Set Heading/Location %Update Position headingChange = navigate(n); obj.robot.updatePosition(headingChange); obj.landmarks.updatePerspective(obj.robot.heading, obj.robot.x, obj.robot.y); %Animate set(l,'XData',obj.landmarks.apparentPositions(:,1),'YData',obj.landmarks.apparentPositions(:,2)); rectangle('Position',[-2,-2,4,4]); drawnow end end end end ----------------- classdef landmarks properties fixedPositions; %# positions in a fixed coordinate system. [ x, y ] mapSize; %Map Size. Value is side of square x; y; heading; headingChange; end properties (Dependent) apparentPositions end methods function obj = createLandmarks(mapSize, numberOfTrees) obj.mapSize = mapSize; obj.fixedPositions = obj.mapSize * rand([numberOfTrees, 2]) .* sign(rand([numberOfTrees, 2]) - 0.5); end function apparent = get.apparentPositions(obj) %-STILL ROTATES AROUND ORIGINAL ORIGIN currentPosition = [obj.x ; obj.y]; apparent = bsxfun(@minus,(obj.fixedPositions)',currentPosition)'; apparent = ([cosd(obj.heading) -sind(obj.heading) ; sind(obj.heading) cosd(obj.heading)] * (apparent)')'; end function updatePerspective(obj,tempHeading,tempX,tempY) obj.heading = tempHeading; obj.x = tempX; obj.y = tempY; end end end ----------------- classdef robot properties x y heading velocity end methods function obj = robot(x,y,heading,velocity) obj.x = x; obj.y = y; obj.heading = heading; obj.velocity = velocity; end function updatePosition(obj,headingChange) obj.heading = obj.heading + headingChange; tempy = cosd(obj.heading); tempx = sind(obj.heading); obj.x = obj.x + (tempx*obj.velocity); obj.y = obj.y + (tempy*obj.velocity); end end end The navigate function is the same... I would appreciate any help as to why things aren't working. All I did was take the code from the first section from under comment: %Simulation Loop THIS WAS ORGANIZED and break it into 2 functions. One in robot and one in landmarks. Is a new instance created every time because it's constantly printing the same heading for this line int he robot class obj.heading = obj.heading + headingChange;
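
    Two things stand out in the refactored version. First, robot and landmarks are value classes, so a call like obj.robot.updatePosition(headingChange) mutates a temporary copy and silently discards it; mutating methods on value classes must return the object and be reassigned, or the classes must derive from handle. Second, the new landmarks "constructor" is named createLandmarks rather than landmarks, so it never runs and the properties stay empty. A sketch of the handle-class fix for robot (the same change would apply to landmarks):

    ```matlab
    % Deriving from handle gives reference semantics: methods modify the
    % object in place and callers see the change without reassignment.
    classdef robot < handle
        properties
            x
            y
            heading
            velocity
        end
        methods
            function obj = robot(x, y, heading, velocity)
                obj.x = x; obj.y = y;
                obj.heading = heading; obj.velocity = velocity;
            end
            function updatePosition(obj, headingChange)
                obj.heading = obj.heading + headingChange;
                obj.x = obj.x + sind(obj.heading) * obj.velocity;
                obj.y = obj.y + cosd(obj.heading) * obj.velocity;
            end
        end
    end
    ```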

    Read the article

  • Block Hover Effect - Why doesn't it work correctly in FF3.6?

    - by Brian Ojeda
    Why doesn't following code work correctly in FireFox 3.6? I have tested in IE7, IE8, and Chrome with out any issues. Issue: The first block hover link (the table's 3rd row) doesn't apply the same style/effect as the following below it. Notes: I am trying to create my own table framework. This project is something I am doing to learn more about CSS. Before I started, I thought I knew a lot about CSS. However, to my surprise I was wrong. Who knew? Moving on... As side note, I do not want to take the time to support IE6. So, if you see a problem related IE6, please don't waste your time telling. One another side note, the following style script and HTML listed when this question is strip-down/bare-bone of the complete CSS/HTML. It should be enough to assist me. CSS: /* Main Properties */ .ojtable{display:block;clear:both; margin-left:auto;margin-right:auto; margin-top:0px; width:650px;} .ojtable-row, .ojtable-head {display:block;clear:both;position:relative; margin-left:0px;margin-right:0px;padding:0px;} .col-1, .col-2, .col-3, .col-4, .col-5, .col-6, .col-7, .col-8, .col-9, .col-10, .col-11, .col-12, .col-13, .col-1-b1, .col-2-b1, .col-3-b1, .col-4-b1, .col-5-b1, .col-6-b1, .col-7-b1, .col-8-b1, .col-9-b1, .col-10-b1, .col-11-b1, .col-12-b1, .col-13-b1 {display:block;float:left;position:relative; margin-left:0px;margin-right:0px;padding:0px 2px;} /* Border */ .border-b1{border:solid #000000; border-width:0 0 1px 0;} .border-ltr{border:solid #000000; border-width:1px 1px 0 1px;} /* Header */ .ojtable-row{width:100%;} .ojtable-head{width:100%;} /* No Border*/ .col-2{width:96px;} /* Border: 1px */ .col-2-b1{width:95px;} .col-7-b1{width:345px;} /*--- Clear Floated Elements ---*/ /* Credit: http://sonspring.com/journal/clearing-floats */ .clear { clear: both; display: block; overflow: hidden; visibility: hidden; width: 0; height: 0; } /* Credit: http://perishablepress.com/press/2008/02/05/ lessons-learned-concerning-the-clearfix-css-hack/ */ .clearfix:after { visibility: hidden; display: block; font-size: 0; content: " "; clear: both; height: 0; } .clearfix { display:inline-block; } /* start commented backslash hack \*/ * html .clearfix { height: 1%; } .clearfix { display: block; } /* close commented backslash hack */ /*--- Hover Effect for the Tables ---*/ a {text-decoration:none;} * .ojtable a .ojtable-row{width:650px; display:block; text-decoration:none;} * html .ojtable a .ojtable-row {width:650px;}/* Hover Fix for IE */ .ojtable a:hover .ojtable-row{background:#AAAAAA; cursor:pointer;} HTML: <div class="ojtable border-ltr clearfix"> <div class="ojtable-row border-b1 clearfix"> <div class="col-13">Newest Blogs</div> </div> <div class="ojtable-row border-b1 clearfix"> <div class="col-7-b1 border-r1">Name</div> <div class="col-4-b1 border-r1">Creater's Name</div> <div class="col-2">Dated Created</div> </div> <a href="#"><div class="ojtable-row border-b1 clearfix"> <div class="col-7-b1 border-r1">Why jQuery?</div> <div class="col-4-b1 border-r1">Gramcracker</div> <div class="col-2">Mar 11 2010</div> </div></a> <a href="#"><div class="ojtable-row border-b1 clearfix"> <div class="col-7-b1 border-r1">Thank You For Your Help</div> <div class="col-4-b1 border-r1">O'Hater</div> <div class="col-2">Nov 2 2009</div> </div></a> <a href="#"><div class="ojtable-row border-b1 clearfix"> <div class="col-7-b1 border-r1">Click Me! 
Hahaha!</div> <div class="col-4-b1 border-r1">Brian Ojeda</div> <div class="col-2">Nov 29 2008</div> </div></a> <a href="#"><div class="ojtable-row border-b1 clearfix"> <div class="col-7-b1 border-r1">Moment of Zen</div> <div class="col-4-b1 border-r1">Jedi</div> <div class="col-2">Mar 11 2010</div> </div></a> <a href="#"><div class="ojtable-row border-b1 clearfix"> <div class="col-7-b1 border-r1"></div> <div class="col-4-b1 border-r1">SGT OJ</div> <div class="col-2">Mar 11 2010</div> </div></a> </div> <!-- End of Table --> PS: Thank you for assistant, if you do choose to help.

    Read the article

  • Customized listfield with image displaying from a url

    - by arunabha
    I am displaying a customized list field with text on the right side and image on the left side.The image comes from a URL dynamically.Initially i am placing a blank image on the left of the list field,then call URLBitmapField class's setURL method,which actually does the processing and places the processed image on top of the blank image.The image gets displayed on the list field,but to see that processed image i need to press any key or click on the list field items.I want the processed image to be displayed automatically in the list field after the processing.Can anyone tell me where i am getting wrong? import java.util.Vector; import net.rim.device.api.system.Bitmap; import net.rim.device.api.system.Display; import net.rim.device.api.ui.ContextMenu; import net.rim.device.api.ui.DrawStyle; import net.rim.device.api.ui.Field; import net.rim.device.api.ui.Font; import net.rim.device.api.ui.Graphics; import net.rim.device.api.ui.Manager; import net.rim.device.api.ui.MenuItem; import net.rim.device.api.ui.UiApplication; import net.rim.device.api.ui.component.BitmapField; import net.rim.device.api.ui.component.Dialog; import net.rim.device.api.ui.component.LabelField; import net.rim.device.api.ui.component.ListField; import net.rim.device.api.ui.component.ListFieldCallback; import net.rim.device.api.ui.component.NullField; import net.rim.device.api.ui.container.FullScreen; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.ui.container.VerticalFieldManager; import net.rim.device.api.util.Arrays; import net.rim.device.api.ui.component.ListField; public class TaskListField extends UiApplication { // statics // ------------------------------------------------------------------ public static void main(String[] args) { TaskListField theApp = new TaskListField(); theApp.enterEventDispatcher(); } public TaskListField() { pushScreen(new TaskList()); } } class TaskList extends MainScreen implements ListFieldCallback { private Vector rows; private Bitmap p1; private Bitmap p2; private Bitmap p3; String Task; ListField listnew = new ListField(); private VerticalFieldManager metadataVFM; TableRowManager row; public TaskList() { super(); URLBitmapField artistImgField; listnew.setRowHeight(80); listnew.setCallback(this); rows = new Vector(); for (int x = 0; x <3; x++) { row = new TableRowManager(); artistImgField = new URLBitmapField(Bitmap .getBitmapResource("res/images/bg.jpg")); row.add(artistImgField); String photoURL = "someimagefrmurl.jpg"; Log.info(photoURL); // strip white spaces in the url, which is causing the // images to not display properly for (int i = 0; i < photoURL.length(); i++) { if (photoURL.charAt(i) == ' ') { photoURL = photoURL.substring(0, i) + "%20" + photoURL.substring(i + 1, photoURL.length()); } } Log.info("Processed URL: " + photoURL); artistImgField.setURL(photoURL); LabelField task = new LabelField("Display"); row.add(task); LabelField task1 = new LabelField( "Now Playing" + String.valueOf(x)); Font myFont = Font.getDefault().derive(Font.PLAIN, 12); task1.setFont(myFont); row.add(task1); rows.addElement(row); } listnew.setSize(rows.size()); this.add(listnew); //listnew.invalidate(); } // ListFieldCallback Implementation public void drawListRow(ListField listField, Graphics g, int index, int y, int width) { TableRowManager rowManager = (TableRowManager) rows.elementAt(index); rowManager.drawRow(g, 0, y, width, listnew.getRowHeight()); } protected void drawFocus(Graphics graphics, boolean on) { } private class TableRowManager extends Manager { 
public TableRowManager() { super(0); } // Causes the fields within this row manager to be layed out then // painted. public void drawRow(Graphics g, int x, int y, int width, int height) { // Arrange the cell fields within this row manager. layout(width, height); // Place this row manager within its enclosing list. setPosition(x, y); // Apply a translating/clipping transformation to the graphics // context so that this row paints in the right area. g.pushRegion(getExtent()); // Paint this manager's controlled fields. subpaint(g); g.setColor(0x00CACACA); g.drawLine(0, 0, getPreferredWidth(), 0); // Restore the graphics context. g.popContext(); } // Arrages this manager's controlled fields from left to right within // the enclosing table's columns. protected void sublayout(int width, int height) { // set the size and position of each field. int fontHeight = Font.getDefault().getHeight(); int preferredWidth = getPreferredWidth(); // start with the Bitmap Field of the priority icon Field field = getField(0); layoutChild(field, 146,80); setPositionChild(field, 0, 0); // set the task name label field field = getField(1); layoutChild(field, preferredWidth - 16, fontHeight + 1); setPositionChild(field, 149, 3); // set the list name label field field = getField(2); layoutChild(field, 150, fontHeight + 1); setPositionChild(field, 149, fontHeight + 6); setExtent(360, 480); } // The preferred width of a row is defined by the list renderer. public int getPreferredWidth() { return listnew.getWidth(); } // The preferred height of a row is the "row height" as defined in the // enclosing list. public int getPreferredHeight() { return listnew.getRowHeight(); } } public Object get(ListField listField, int index) { // TODO Auto-generated method stub return null; } public int getPreferredWidth(ListField listField) { return 0; } public int indexOfList(ListField listField, String prefix, int start) { // TODO Auto-generated method stub return 0; } }

    Read the article

  • How to know if the client has terminated in sockets

    - by shadyabhi
    Suppose, I have a connected socket after writing this code.. if ((sd = accept(socket_d, (struct sockaddr *)&client_addr, &alen)) < 0) { perror("accept failed\n"); exit(1); } How can I know at the server side that client has exited. My whole program actually does the following.. Accepts a connection from client Starts a new thread that reads messages from that particular client and then broadcast this message to all the connected clients. If you want to see the whole code... In this whole code. I am also struggling with one more problem that whenever I kill a client with Ctrl+C, my server terminates abruptly.. It would be nice if anyone could suggest what the problem is.. #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <string.h> #include <signal.h> #include <errno.h> #include <pthread.h> /*CONSTANTS*/ #define DEFAULT_PORT 10000 #define LISTEN_QUEUE_LIMIT 6 #define TOTAL_CLIENTS 10 #define CHAR_BUFFER 256 /*GLOBAL VARIABLE*/ int current_client = 0; int connected_clients[TOTAL_CLIENTS]; extern int errno; void *client_handler(void * socket_d); int main(int argc, char *argv[]) { struct sockaddr_in server_addr;/* structure to hold server's address*/ int socket_d; /* listening socket descriptor */ int port; /* protocol port number */ int option_value; /* needed for setsockopt */ pthread_t tid[TOTAL_CLIENTS]; port = (argc > 1)?atoi(argv[1]):DEFAULT_PORT; /* Socket Server address structure */ memset((char *)&server_addr, 0, sizeof(server_addr)); server_addr.sin_family = AF_INET; /* set family to Internet */ server_addr.sin_addr.s_addr = INADDR_ANY; /* set the local IP address */ server_addr.sin_port = htons((u_short)port); /* Set port */ /* Create socket */ if ( (socket_d = socket(PF_INET, SOCK_STREAM, 0)) < 0) { fprintf(stderr, "socket creation failed\n"); exit(1); } /* Make listening socket's port reusable */ if (setsockopt(socket_d, SOL_SOCKET, SO_REUSEADDR, (char *)&option_value, sizeof(option_value)) < 0) { fprintf(stderr, "setsockopt failure\n"); exit(1); } /* Bind a local address to the socket */ if (bind(socket_d, (struct sockaddr *)&server_addr, sizeof(server_addr)) < 0) { fprintf(stderr, "bind failed\n"); exit(1); } /* Specify size of request queue */ if (listen(socket_d, LISTEN_QUEUE_LIMIT) < 0) { fprintf(stderr, "listen failed\n"); exit(1); } memset(connected_clients,0,sizeof(int)*TOTAL_CLIENTS); for (;;) { struct sockaddr_in client_addr; /* structure to hold client's address*/ int alen = sizeof(client_addr); /* length of address */ int sd; /* connected socket descriptor */ if ((sd = accept(socket_d, (struct sockaddr *)&client_addr, &alen)) < 0) { perror("accept failed\n"); exit(1); } else printf("\n I got a connection from (%s , %d)\n",inet_ntoa(client_addr.sin_addr),ntohs(client_addr.sin_port)); if (pthread_create(&tid[current_client],NULL,(void *)client_handler,(void *)sd) != 0) { perror("pthread_create error"); continue; } connected_clients[current_client]=sd; current_client++; /*Incrementing Client number*/ } return 0; } void *client_handler(void *connected_socket) { int sd; sd = (int)connected_socket; for ( ; ; ) { ssize_t n; char buffer[CHAR_BUFFER]; for ( ; ; ) { if (n = read(sd, buffer, sizeof(char)*CHAR_BUFFER) == -1) { perror("Error reading from client"); pthread_exit(1); } int i=0; for (i=0;i<current_client;i++) { if (write(connected_clients[i],buffer,sizeof(char)*CHAR_BUFFER) == -1) perror("Error sending messages to a client while 
multicasting"); } } } } My client side is this (Maye be irrelevant while answering my question) #include <stdio.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <string.h> #include <stdlib.h> void error(char *msg) { perror(msg); exit(0); } void *listen_for_message(void * fd) { int sockfd = (int)fd; int n; char buffer[256]; bzero(buffer,256); printf("YOUR MESSAGE: "); fflush(stdout); while (1) { n = read(sockfd,buffer,256); if (n < 0) error("ERROR reading from socket"); if (n == 0) pthread_exit(1); printf("\nMESSAGE BROADCAST: %sYOUR MESSAGE: ",buffer); fflush(stdout); } } int main(int argc, char *argv[]) { int sockfd, portno, n; struct sockaddr_in serv_addr; struct hostent *server; pthread_t read_message; char buffer[256]; if (argc < 3) { fprintf(stderr,"usage %s hostname port\n", argv[0]); exit(0); } portno = atoi(argv[2]); sockfd = socket(AF_INET, SOCK_STREAM, 0); if (sockfd < 0) error("ERROR opening socket"); server = gethostbyname(argv[1]); if (server == NULL) { fprintf(stderr,"ERROR, no such host\n"); exit(0); } bzero((char *) &serv_addr, sizeof(serv_addr)); serv_addr.sin_family = AF_INET; bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length); serv_addr.sin_port = htons(portno); if (connect(sockfd,&serv_addr,sizeof(serv_addr)) < 0) error("ERROR connecting"); bzero(buffer,256); if (pthread_create(&read_message,NULL,(void *)listen_for_message,(void *)sockfd) !=0 ) { perror("error creating thread"); } while (1) { fgets(buffer,255,stdin); n = write(sockfd,buffer,256); if (n < 0) error("ERROR writing to socket"); bzero(buffer,256); } return 0; }

    Read the article

  • IE7 doesn't render part of the page until the window is resized or I switch between tabs

    - by BlackMael
    I have a problem with IE7. I have a fixed layout keeping the header and a side panel fixed on the page, leaving only the "main content" area, which can happily scroll its content. This layout works perfectly fine in IE6 and IE8, but sometimes one page may start "hiding" the content that should be showing in the "main content" area. The page finishes loading just fine. For a split second IE7 will render the main content just fine, and then it will immediately hide it from view... somewhere. It would also seem that it only experiences this problem when there is enough content to force the "main content" area to scroll. Resizing the window or switching to another open tab and back again will cause IE7 to show the page as it was intended. Note the same problem does occur with IE8 in compatibility mode, but the page is rendered correctly in IE8 mode. If need be I can attach the basic CSS styling I use, but I first want to see if this is a known issue with IE7. Does IE7 have issues with positioned layout and overflow scrolling, such that it sometimes forgets to finish rendering the page correctly until some window redraw event forces it to? Please remember, this exact same layout is used across multiple pages in the site, as it is set up in a master page. It is just (in this case) one page that is experiencing this problem. Other pages with the exact same layout do render correctly, even if the main content is full enough to also scroll. UPDATE: A related question which doesn't have an answer at this point. LATE UPDATE: Adding the example master page and CSS below. Please note this layout is the same for all the pages in the application. All other pages happily render correctly in IE7. Just one page, using the exact same layout, has issues where it sometimes hides the content in the work-space div.
The master page <%@ Master Language="VB" CodeFile="MasterPage.master.vb" Inherits="shared_templates_MasterPage" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> <link rel="Stylesheet" type="text/css" href="~/common/yui/2.7.0/build/reset-fonts/reset-fonts.css" runat="server" /> <link rel="Stylesheet" type="text/css" href="~/shared/css/layout.css" runat="server" /> <asp:ContentPlaceHolder ID="head" runat="server" /> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server" /> <div id="app-header"> </div> <div id="side-panel"> </div> <div id="work-space"> <asp:ContentPlaceHolder ID="WorkSpaceContentPlaceHolder" runat="server" /> </div> <div id="status-bar"> <asp:ContentPlaceHolder ID="StatusBarContentPlaceHolder" runat="server" /> </div> </form> </body> </html> The layout.css html { overflow: hidden; } body { overflow: hidden; padding: 0; margin: 0; width: 100%; height: 100%; background-color: white; } body, table, td, th, select, textarea, input { font-family: Tahoma, Arial, Sans-Serif; font-size: 9pt; } p { padding-left: 1em; margin-bottom: 1em; } #app-header { position: absolute; top: 0; left: 0; width: 100%; height: 80px; background-color: #dcdcdc; border-bottom: solid 4px #000; } #side-panel { position: absolute; top: 84px; left: 0px; bottom: 0px; overflow: auto; padding: 0; margin: 0; width: 227px; background-color: #AABCCA; border-right: solid 1px black; background-repeat: repeat-x; padding-top: 5px; } #work-space { position: absolute; top: 84px; left: 232px; right: 0px; padding: 0; margin: 0; bottom: 22px; overflow: auto; background-color: White; } #status-bar { position: absolute; height: 20px; left: 228px; right: 0px; padding: 0; margin: 0; bottom: 0px; border-top: solid 1px #c0c0c0; background-color: #f0f0f0; } The Default.aspx <%@ Page Title="Test" Language="VB" MasterPageFile="~/shared/templates/MasterPage.master" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" %> <asp:Content ID="WorkspaceContent" ContentPlaceHolderID="WorkSpaceContentPlaceHolder" Runat="Server"> Workspace <asp:ListView ID="DemoListView" runat="server" DataSourceID="DemoObjectDataSource" ItemPlaceholderID="DemoPlaceHolder"> <LayoutTemplate> <table style="border: 1px solid #a0a0a0; width: 600px"> <colgroup> <col width="80" /> <col /> <col width="80" /> <col width="120" /> </colgroup> <tbody> <asp:PlaceHolder ID="DemoPlaceHolder" runat="server" /> </tbody> </table> </LayoutTemplate> <ItemTemplate> <tr> <th><%#Eval("ID")%></th> <td><%#Eval("Name")%></td> <td><%#Eval("Size")%></td> <td><%#Eval("CreatedOn", "{0:yyyy-MM-dd HH:mm:ss}")%></td> </tr> </ItemTemplate> </asp:ListView> <asp:ObjectDataSource ID="DemoObjectDataSource" runat="server" OldValuesParameterFormatString="original_{0}" SelectMethod="GetData" TypeName="DemoLogic"> <SelectParameters> <asp:Parameter Name="path" Type="String" /> </SelectParameters> </asp:ObjectDataSource> </asp:Content> <asp:Content ID="StatusContent" ContentPlaceHolderID="StatusBarContentPlaceHolder" Runat="Server"> Ready OK. </asp:Content>
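
    This looks like IE7's hasLayout problem: content inside absolutely positioned, overflow: auto containers is notorious for not repainting until a resize forces a reflow. A commonly tried workaround, as a sketch rather than a guaranteed fix, is to force layout on the scrolling pane:

    ```css
    /* Force hasLayout on the scrolling pane so IE7 repaints it without
       needing a window resize; zoom is nonstandard but is ignored by
       other browsers at runtime. */
    #work-space {
        zoom: 1;
    }
    ```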

    Read the article

  • Using Handlebars.js issue

    - by Roland
    I'm having a small issue when I'm compiling a template with Handlebars.js . I have a JSON text file which contains an big array with objects : Source ; and I'm using XMLHTTPRequest to get it and then parse it so I can use it when compiling the template. So far the template has the following structure : <div class="product-listing-wrapper"> <div class="product-listing"> <div class="left-side-content"> <div class="thumb-wrapper"> <img src="{{ThumbnailUrl}}"> </div> <div class="google-maps-wrapper"> <div class="google-coordonates-wrapper"> <div class="google-coordonates"> <p>{{LatLon.Lat}}</p> <p>{{LatLon.Lon}}</p> </div> </div> <div class="google-maps-button"> <a class="google-maps" href="#" data-latitude="{{LatLon.Lat}}" data-longitude="{{LatLon.Lon}}">Google Maps</a> </div> </div> </div> <div class="right-side-content"></div> </div> And the following block of code would be the way I'm handling the JS part : $(document).ready(function() { /* Default Javascript Options ~a javascript object which contains all the variables that will be passed to the cluster class */ var default_cluster_options = { animations : ['flash', 'bounce', 'shake', 'tada', 'swing', 'wobble', 'wiggle', 'pulse', 'flip', 'flipInX', 'flipOutX', 'flipInY', 'flipOutY', 'fadeIn', 'fadeInUp', 'fadeInDown', 'fadeInLeft', 'fadeInRight', 'fadeInUpBig', 'fadeInDownBig', 'fadeInLeftBig', 'fadeInRightBig', 'fadeOut', 'fadeOutUp', 'fadeOutDown', 'fadeOutLeft', 'fadeOutRight', 'fadeOutUpBig', 'fadeOutDownBig', 'fadeOutLeftBig', 'fadeOutRightBig', 'bounceIn', 'bounceInUp', 'bounceInDown', 'bounceInLeft', 'bounceInRight', 'bounceOut', 'bounceOutUp', 'bounceOutDown', 'bounceOutLeft', 'bounceOutRight', 'rotateIn', 'rotateInDownLeft', 'rotateInDownRight', 'rotateInUpLeft', 'rotateInUpRight', 'rotateOut', 'rotateOutDownLeft', 'rotateOutDownRight', 'rotateOutUpLeft', 'rotateOutUpRight', 'lightSpeedIn', 'lightSpeedOut', 'hinge', 'rollIn', 'rollOut'], json_data_url : 'data.json', template_data_url : 'template.php', base_maps_api_url : 'https://maps.googleapis.com/maps/api/js?sensor=false', cluser_wrapper_id : '#content-wrapper', maps_wrapper_class : '.google-maps', }; /* Cluster ~main class, handles all javascript operations */ var Cluster = function(environment, cluster_options) { var self = this; this.options = $.extend({}, default_cluster_options, cluster_options); this.environment = environment; this.animations = this.options.animations; this.json_data_url = this.options.json_data_url; this.template_data_url = this.options.template_data_url; this.base_maps_api_url = this.options.base_maps_api_url; this.cluser_wrapper_id = this.options.cluser_wrapper_id; this.maps_wrapper_class = this.options.maps_wrapper_class; this.test_environment_mode(this.environment); this.initiate_environment(); this.test_xmlhttprequest_availability(); this.initiate_gmaps_lib_load(self.base_maps_api_url); this.initiate_data_processing(); }; /* Test Environment Mode ~adds a modernizr test which looks wheater the cluster class is initiated in development or not */ Cluster.prototype.test_environment_mode = function(environment) { var self = this; return Modernizr.addTest('test_environment', function() { return (typeof environment !== 'undefined' && environment !== null && environment === "Development") ? 
true : false; }); }; /* Test XMLHTTPRequest Availability ~adds a modernizr test which checks whether the XMLHttpRequest class is available in the browser, the exception being IE */ Cluster.prototype.test_xmlhttprequest_availability = function() { return Modernizr.addTest('test_xmlhttprequest', function() { return (typeof window.XMLHttpRequest === 'undefined' || window.XMLHttpRequest === null) ? true : false; }); }; /* Initiate Environment ~depending on what the modernizr test returns it puts LESS in the development mode or not */ Cluster.prototype.initiate_environment = function() { return (Modernizr.test_environment) ? (less.env = "development", less.watch()) : true; }; Cluster.prototype.initiate_gmaps_lib_load = function(lib_url) { return Modernizr.load(lib_url); }; /* Initiate XHR Request ~prototype function that creates an xmlhttprequest for processing json data from a separate JSON text file */ Cluster.prototype.initiate_xhr_request = function(url, mime_type) { var request, data; var self = this; (Modernizr.test_xmlhttprequest) ? request = new ActiveXObject('Microsoft.XMLHTTP') : request = new XMLHttpRequest(); request.onreadystatechange = function() { if(request.readyState == 4 && request.status == 200) { data = request.responseText; } }; request.open("GET", url, false); request.overrideMimeType(mime_type); request.send(); return data; }; Cluster.prototype.initiate_google_maps_action = function() { var self = this; return $(this.maps_wrapper_class).each(function(index, element) { return $(element).on('click', function(ev) { var html = $('<div id="map-canvas" class="map-canvas"></div>'); var latitude = $(element).attr('data-latitude'); var longitude = $(element).attr('data-longitude'); log("LAT : " + latitude); log("LON : " + longitude); $.lightbox(html, { "width": 900, "height": 250, "onOpen" : function() { } }); ev.preventDefault(); }); }); }; Cluster.prototype.initiate_data_processing = function() { var self = this; var json_data = JSON.parse(self.initiate_xhr_request(self.json_data_url, 'application/json; charset=ISO-8859-1')); var source_data = self.initiate_xhr_request(self.template_data_url, 'text/html'); var template = Handlebars.compile(source_data); for(var i = 0; i < json_data.length; i++ ) { var result = template(json_data[i]); $(result).appendTo(self.cluser_wrapper_id); } self.initiate_google_maps_action(); }; /* Cluster ~initiate the cluster class */ var cluster = new Cluster("Development"); }); My problem is that I don't think I'm iterating over the JSON object correctly, or I'm using the template the wrong way: if you check this link: http://rolandgroza.com/labs/valtech/ you will see some numbers there (which represent latitude and longitude), but they are all the same, while even a brief look at the JSON object shows that each number is different. So what am I doing wrong that makes the same number repeat? Or what should I do to fix it? I should mention that I've just started working with templates, so I have little knowledge of them.
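For comparison, here is a minimal sketch of the same render using Handlebars' built-in {{#each}} block helper, compiling the template once instead of invoking it per element. The file names mirror the question's defaults, and the jQuery-based loading is an assumption for brevity, not part of the original code:

    // Fetch the JSON array and the template, then render everything in one pass.
    $.when($.getJSON('data.json'), $.get('template.php', null, null, 'html'))
        .done(function (jsonArgs, templateArgs) {
            var products = jsonArgs[0]; // the parsed array of objects
            // {{#each}} scopes each iteration to one object, so {{LatLon.Lat}}
            // resolves per item instead of repeating a single value.
            var template = Handlebars.compile(
                '{{#each products}}' + templateArgs[0] + '{{/each}}'
            );
            $(template({ products: products })).appendTo('#content-wrapper');
        });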

    Read the article

  • change an absolutely positioned webpage into a centered one

    - by Jonathan
    So I have this template design that is currently absolutely positioned, but I'm trying to make it centered in any widescreen browser. I've tried setting the left and right margins of my container to auto, but it is still aligned with the left side. Css .JosephSettin_png { position: absolute; left:0px; top:0px; width:216px; height:40px; background: url("JosephSettin.png") no-repeat; } .home_png { position: absolute; left:472px; top:16px; width:48px; height:16px; } .discography_png { position: absolute; left:528px; top:16px; width:80px; height:24px; } .purchase_png { position: absolute; left:608px; top:16px; width:88px; height:24px; } .about_png { position: absolute; left:696px; top:16px; width:48px; height:24px; } .contact_png { position: absolute; left:744px; top:16px; width:56px; height:24px; } .main__pic_png { position: absolute; left:0px; top:56px; width:264px; height:264px; background: url("main_pic.png") no-repeat; } .footer__lines_png { position: absolute; left:0px; top:512px; width:800px; height:24px; background: url("footer_lines.png") no-repeat; } .info__heading_png { position: absolute; left:32px; top:360px; width:216px; height:32px; background: url("info_heading.png") no-repeat; } .info__pic3_png { position: absolute; left:265px; top:360px; width:159px; height:112px; background: url("info_pic3.png") no-repeat; } .info__pic2_png { position: absolute; left:432px; top:360px; width:176px; height:112px; background: url("info_pic2.png") no-repeat; } .info__pic1_png { position: absolute; left:616px; top:360px; width:177px; height:112px; background: url("info_pic1.png") no-repeat; } .info__pane_png { position: absolute; left:0px; top:345px; width:800px; height:144px; background: url("info_pane.png") no-repeat; } body { text-align: center; background-color:maroon; } #wrapper { width: 800px; margin-left: auto; margin-right: auto; text-align: left; } #a { text-decoration: none; color:white; font-weight:bold; } .style1 { font-weight: bold; color: #FFFFFF; } html <body> <center> <div id="wrapper"> <div class="JosephSettin_png"> </div> <div class="home_png"> <a href="home.html" style="color:yellow">Home</a></div> <div class="discography_png"> <a href="discography.html">Discography</a></div> <div class="purchase_png"><a href="store.html"><span class="style1">Store</span></a></div> <div class="about_png"><a href="about.html">About</a></div> <div class="contact_png"><a href="contact.html"><span class="style1"></span>Contact</a></div> <div class="ad_png"> </div> <div class="main__pic_png"> </div> <div class="welcome__header_png"> </div> <div class="welcome__text_png"> </div> <div class="footer__lines_png"> </div> <div class="footer__text_png"> </div> <div class="info__pane_png"></div> <div class="info__heading_png"> </div> <div class="info__text_png"> </div> <div class="info__pic3_png"> </div> <div class="info__pic2_png"> </div> <div class="info__pic1_png"> </div> <div class="info__pic3_png"> </div> </div> </center> </body> I know the container I create works if all my div classes aren't absolutely positioned. Do I have to change the positioning, or did I make another error?
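One likely fix, sketched with the question's own selectors: an absolutely positioned element is placed relative to its nearest positioned ancestor, so making the centered wrapper position: relative keeps every left/top offset above but measures it from the wrapper instead of the viewport:

    /* Sketch: turn the centered wrapper into the positioning context. */
    #wrapper {
        position: relative;   /* children's left/top now measure from here */
        width: 800px;
        margin: 0 auto;       /* centers the wrapper itself */
        text-align: left;
    }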

    Read the article

  • SINGLE SIGN ON SECURITY THREAT! FACEBOOK access_token broadcast in the open/clear

    - by MOKANA
    Subsequent to my posting there was a remark made that this was not really a question, but I thought I did indeed postulate one. So that there is no ambiguity, here is the question with a lead-in: Since there is no data sent from Facebook during the Canvas Load process that is not at some point divulged, including the access_token, session and other data that could uniquely identify a user, does anyone see any way, other than adding one more layer, i.e., a password, sent over the wire via HTTPS along with the access_token, that will ensure unique, untampered-with security for the user? Using Wireshark I captured the local broadcast while loading my Canvas Application page. I was hugely surprised to see the access_token broadcast in the open, viewable for anyone to see. This access_token is appended to any HTTPS call to the Facebook OpenGraph API. Using Facebook as a single-click log-on has now raised huge concerns for me. It is stored in a session object in memory, and the cookie is cleared upon app termination; after reviewing the FB.Init calls I saw a lot of HTTPS calls, so I assumed the access_token was always encrypted. But last night I saw in the status bar a call that was a plain HTTP call including the App ID, so I felt I should sniff the Application Canvas load sequence. Today I did sniff the broadcast, and in the attached image you can see that there are HTTP calls with the access_token being broadcast in the open and clear for anyone to gain access to. Am I missing something? Is what I am seeing, and my interpretation of it, really correct? If anyone can sniff and get the access_token, they can theoretically make calls to the Graph API via HTTPS, even though the callback would still need to be the site established in Facebook's application setup. But what is truly a security threat is anyone using the access_token for access to their own site. I do not see the value of a single sign-on via Facebook if the only thing that was established as secure was the access_token - because from what I can see it clearly is not secure. Access tokens that never expire do not change. Access tokens are different for every user, so access to another site could be held tight to just a single user, but compromising even a single user's data is unacceptable. http://www.creatingstory.com/images/InTheOpen.png Went back and did more research on this: FINDINGS: Went back and re-ran the canvas application to verify that it was not my code doing the broadcasting. In this call: HTTP GET /connect.php/en_US/js/CacheData HTTP/1.1 the USER ID is clearly visible in the cookie. So USER IDs are fully visible, but they already are: anyone can go to pretty much anyone's page, hover over the image, and see the USER ID. So no big threat. APP_IDs are also easily obtainable - but . . . http://www.creatingstory.com/images/InTheOpen2.png The above file clearly shows the FULL ACCESS TOKEN in the open via a Facebook-initiated call. Am I wrong? TELL ME I AM WRONG, because I want to be wrong about this. I have since reset my app secret, so I am showing a real sniff of the Canvas Page being loaded. Additional data 02/20/2011: @ifaour - I appreciate the time you took to compile your response. I am pretty familiar with the OAuth process and have a pretty solid understanding of the signed_request unpacking and utilization of the access_token. 
I perform a substantial amount of my processing on the server, and my Facebook server-side flows are all complete and function without any flaw that I know of. The application secret is secure, never passed to the front-end application, and is also changed regularly. I am being as fanatical about security as I can be, knowing there is so much I don’t know that could come back and bite me. Two huge access_token issues: The issues concern the possible utilization of the access_token from the USER AGENT (browser). During the FB.INIT() process of the Facebook JavaScript SDK, a cookie is created as well as an object in memory called a session object. This object, along with the cookie, contains the access_token, the session, a secret, and the uid and status of the connection. The session object is structured such that it supports both the new OAuth and the legacy flows. With OAuth, the access_token and status are pretty much all that is used in the session object. The first issue is that the access_token is used to make HTTPS calls to the GRAPH API. If you had the access_token, you could do this from any browser: https://graph.facebook.com/220439?access_token=... and it will return a ton of information about the user. So anyone with the access token can gain access to a Facebook account. You can also make additional calls for any info the user has granted access to via the application tied to the access_token. At first I thought that a call into the GRAPH had to have a callback to the URL established in the App Setup, but I tested it as mentioned below and it returns info right into the browser. Adding that callback requirement would be a good idea, I think; it would tighten things up a bit. The second issue is the utilization of some unique, private, secured data that identifies the user to the third-party database; i.e., in my case, I would use a single sign-on to populate user information into my database using this unique secured data item (i.e., the access_token, which contains the APP ID, the USER ID, and a sequence hashed with the secret). None of this is a problem on the server side. You get a signed_request, you unpack it with the secret, make HTTPS calls, and get HTTPS responses back. When a user has information entered via the USER AGENT (browser) that must be stored via a POST, this unique secured data element would be sent via HTTPS such that it is validated prior to database insertion. However, if there is NO secured piece of unique data supplied via the single sign-on process, then there is no way to guard against unauthorized access. The access_token is the one piece of data that is utilized by Facebook to make the HTTPS calls into the GRAPH API. It is considered unique in regard to BOTH the USER and the APPLICATION, and is initially secured via the signed_request packaging. If, however, it is subsequently transmitted in the clear, and I can sniff the wire and obtain the access_token, then I can pretend to be the application and gain the information the user has authorized the application to see. I tried the above example from a Safari and an IE browser and it returned all of my information to me in the browser. In conclusion, the access_token is part of the signed_request and that is how the application initially obtains it. 
After OAuth authentication and authorization, i.e., once the USER has logged into Facebook and then runs your app, the access_token is stored as mentioned above, and I have sniffed it and seen it stored in a cookie that is transmitted over the wire. That leaves NO UNIQUE, SECURED, IDENTIFIABLE piece of information that can be used to support interaction with the database; in other words, unless one more piece of secure data were sent along with the access_token to my database, i.e., a password, I would not be able to discern whether a call is legitimate. Luckily I utilized secure AJAX via POST, and the call has to come from the same domain, but I am sure there is a way to hijack that. I am totally open to any ideas on this topic on how to uniquely identify my USERS other than adding another layer (a password) to this single sign-on process - or if someone would just show me that I read and analyzed my data incorrectly and that the access_token is always secure over the wire. Mahalo nui loa in advance.
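For reference, here is a minimal server-side sketch of the signed_request verification flow mentioned above (Python; the helper name and shape are illustrative, not Facebook's SDK). The point of this flow is that the app secret never crosses the wire, which is what makes the check stronger than trusting a bare access_token:

    import base64
    import hashlib
    import hmac
    import json

    def parse_signed_request(signed_request, app_secret):
        """Decode and verify a Facebook signed_request (HMAC-SHA256)."""
        encoded_sig, payload = signed_request.split('.', 1)

        def b64decode(s):
            # Facebook uses URL-safe base64 without padding.
            return base64.urlsafe_b64decode(s + '=' * (-len(s) % 4))

        data = json.loads(b64decode(payload).decode('utf-8'))
        if data.get('algorithm', '').upper() != 'HMAC-SHA256':
            raise ValueError('Unexpected signing algorithm')

        expected = hmac.new(app_secret.encode(), payload.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(b64decode(encoded_sig), expected):
            raise ValueError('Invalid signature')
        return data  # contains user_id, oauth_token, etc. once authorized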

    Read the article

  • Numpy zero rank array indexing/broadcasting

    - by Lemming
    I'm trying to write a function that supports broadcasting and is fast at the same time. However, numpy's zero-rank arrays are causing trouble as usual. I couldn't find anything useful on google, or by searching here. So, I'm asking you. How should I implement broadcasting efficiently and handle zero-rank arrays at the same time? This whole post became larger than anticipated, sorry. Details: To clarify what I'm talking about I'll give a simple example: Say I want to implement a Heaviside step-function. I.e. a function that acts on the real axis, which is 0 on the negative side, 1 on the positive side, and from case to case either 0, 0.5, or 1 at the point 0. Implementation Masking The most efficient way I found so far is the following. It uses boolean arrays as masks to assign the correct values to the corresponding slots in the output vector. from numpy import * def step_mask(x, limit=+1): """Heaviside step-function. y = 0 if x < 0 y = 1 if x > 0 See below for x == 0. Arguments: x Evaluate the function at these points. limit Which limit at x == 0? limit > 0: y = 1 limit == 0: y = 0.5 limit < 0: y = 0 Return: The values corresponding to x. """ b = broadcast(x, limit) out = zeros(b.shape) out[x>0] = 1 mask = (limit > 0) & (x == 0) out[mask] = 1 mask = (limit == 0) & (x == 0) out[mask] = 0.5 mask = (limit < 0) & (x == 0) out[mask] = 0 return out List Comprehension The following-the-numpy-docs way is to use a list comprehension on the flat iterator of the broadcast object. However, list comprehensions become absolutely unreadable for such complicated functions. def step_comprehension(x, limit=+1): b = broadcast(x, limit) out = empty(b.shape) out.flat = [ ( 1 if x_ > 0 else ( 0 if x_ < 0 else ( 1 if l_ > 0 else ( 0.5 if l_ ==0 else ( 0 ))))) for x_, l_ in b ] return out For Loop And finally, the most naive way is a for loop. It's probably the most readable option. However, Python for-loops are anything but fast. And hence, a really bad idea in numerics. def step_for(x, limit=+1): b = broadcast(x, limit) out = empty(b.shape) for i, (x_, l_) in enumerate(b): if x_ > 0: out[i] = 1 elif x_ < 0: out[i] = 0 elif l_ > 0: out[i] = 1 elif l_ < 0: out[i] = 0 else: out[i] = 0.5 return out Test First of all a brief test to see if the output is correct. >>> x = array([-1, -0.1, 0, 0.1, 1]) >>> step_mask(x, +1) array([ 0., 0., 1., 1., 1.]) >>> step_mask(x, 0) array([ 0. , 0. , 0.5, 1. , 1. ]) >>> step_mask(x, -1) array([ 0., 0., 0., 1., 1.]) It is correct, and the other two functions give the same output. Performance How about efficiency? These are the timings: In [45]: xl = linspace(-2, 2, 500001) In [46]: %timeit step_mask(xl) 10 loops, best of 3: 19.5 ms per loop In [47]: %timeit step_comprehension(xl) 1 loops, best of 3: 1.17 s per loop In [48]: %timeit step_for(xl) 1 loops, best of 3: 1.15 s per loop The masked version performs best as expected. However, I'm surprised that the comprehension is on the same level as the for loop. Zero Rank Arrays But, 0-rank arrays pose a problem. Sometimes you want to use a function scalar input. And preferably not have to worry about wrapping all scalars in at least 1-D arrays. >>> step_mask(1) Traceback (most recent call last): File "<ipython-input-50-91c06aa4487b>", line 1, in <module> step_mask(1) File "script.py", line 22, in step_mask out[x>0] = 1 IndexError: 0-d arrays can't be indexed. 
>>> step_for(1) Traceback (most recent call last): File "<ipython-input-51-4e0de4fcb197>", line 1, in <module> step_for(1) File "script.py", line 55, in step_for out[i] = 1 IndexError: 0-d arrays can't be indexed. >>> step_comprehension(1) array(1.0) Only the list comprehension can handle 0-rank arrays. The other two versions would need special-case handling for 0-rank arrays. Numpy gets a bit messy when you want to use the same code for arrays and scalars. However, I really like to have functions that work on input as arbitrary as possible. Who knows which parameters I'll want to iterate over at some point. Question: What is the best way to implement a function like the one above? Is there a way to avoid "if scalar, then ..." special cases? I'm not looking for a built-in Heaviside. It's just a simplified example. In my code the above pattern appears in many places to make parameter iteration as simple as possible without littering the client code with for loops or comprehensions. Furthermore, I'm aware of Cython, or weave & Co., or implementing directly in C. However, the performance of the masked version above is sufficient for the moment. And for the moment I would like to keep things as simple as possible.
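For what it is worth, one more variant that sidesteps the indexing problem entirely: nested np.where calls broadcast x against limit and accept 0-rank arrays or plain scalars, because no boolean indexing into the output is involved. A sketch, following the same conventions as the functions above (performance not benchmarked here):

    import numpy as np

    def step_where(x, limit=+1):
        """Heaviside step-function via nested np.where; scalar-safe."""
        x = np.asarray(x, dtype=float)
        limit = np.asarray(limit)
        # Value at exactly x == 0, chosen by the sign of `limit`.
        at_zero = np.where(limit > 0, 1.0, np.where(limit < 0, 0.0, 0.5))
        return np.where(x > 0, 1.0, np.where(x < 0, 0.0, at_zero))

    # 0-rank input no longer raises:
    # >>> step_where(1)      -> array(1.0)
    # >>> step_where(0, 0)   -> array(0.5)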

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle-supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost-effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here are the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier).  Higher-level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call each. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services.  Now every time someone pushes source code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site to Windows Azure and New Relic will automatically start monitoring the application. Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning allows you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The multiple messaging stores also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today (a short sketch of creating one programmatically appears just before the summary below). Billing: New Billing Alert Service Today’s Windows Azure update introduces a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert, first sign up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available: The alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
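To complement the portal checkbox shown in the Service Bus section above, here is a minimal sketch of creating a partitioned queue programmatically. It assumes the Microsoft.ServiceBus client library from the Windows Azure SDK NuGet package; the queue name and connection string are placeholders:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class CreatePartitionedQueue
    {
        static void Main()
        {
            // Placeholder connection string; use your own namespace's credentials.
            var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;...";
            var manager = NamespaceManager.CreateFromConnectionString(connectionString);

            // EnablePartitioning spreads the queue across multiple message
            // brokers and stores for higher throughput and availability.
            var description = new QueueDescription("orders") { EnablePartitioning = true };
            if (!manager.QueueExists(description.Path))
                manager.CreateQueue(description);
        }
    }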
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world. Introduction A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types. This note is a thought experiment to extend the JVM’s performance model in support of value types. The demonstration has two phases.  Initially the extension can simply use design patterns, within the current bytecode architecture, and in today’s Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both.  We will look at a few possibilities for these new features. An Axiom of Value In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries: A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value. The equals and hashCode operations are strictly component-wise. If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality. A value type can be viewed in terms of what it doesn’t do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.  
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type. Representations of Value Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables: /*Complex x = Complex.valueOf(1.0, 2.0):*/ double x_re = 1.0, x_im = 2.0; These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers: double distance(/*Complex x:*/ double x_re, double x_im,         /*Complex y:*/ double y_re, double y_im) {     /*Complex z = x.minus(y):*/     double z_re = x_re - y_re, z_im = x_im - y_im;     /*return z.abs():*/     return Math.sqrt(z_re*z_re + z_im*z_im); } A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ‘array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and pose some problems for boxed values. Any ‘real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types. 
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract. The non-static fields of a value class must be non-public and final, and all its constructors must be private.

Under the above rules, a standard interface could be helpful to define value types like Complex. Here is an example:

@ValueSafe
public interface ValueType extends java.io.Serializable {
    // All methods listed here must get redefined.
    // Definitions must be value-safe, which means
    // they may depend on component values only.
    List<? extends Object> asList();
    int hashCode();
    boolean equals(@ValueSafe Object c);
    String toString();
}

//@ValueSafe inherited from supertype:
public final class Complex implements ValueType { …

The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system. It could appear as an element type or parameter bound, for facilities which are designed to work on value types only. More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types.

Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods. (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.) Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following:
- The type of E is a value-safe type.
- E names a field, parameter, or local variable whose declaration is marked @ValueSafe.
- E is a call to a method whose declaration is marked @ValueSafe.
- E is an assignment to a value-safe variable, field reference, or array reference.
- E is a cast to a value-safe type from a value-safe expression.
- E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe.

Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation. In particular, it cannot be synchronized on, nor can it be compared with the "==" operator, not even with a null or with another value-safe type.

In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation. Thus, the prime axiom of value types will be satisfied: no two value-type values will be distinguishable as long as their component values are equal.
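The rules above take the @ValueSafe annotation itself as a given. Its declaration is not spelled out in this note, but based on the positions it is applied to, a minimal sketch might look like this (the retention policy is my assumption; runtime retention would let the JVM help with dynamic enforcement):

import java.lang.annotation.*;

// Hypothetical marker annotation; targets mirror the positions used above:
// types, fields, parameters, local variables, and methods.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER,
         ElementType.LOCAL_VARIABLE, ElementType.METHOD})
public @interface ValueSafe { }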
More Code

To illustrate these rules, here are some usage examples for Complex:

Complex pi = Complex.valueOf(Math.PI, 0);
Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0;
ValueType vtype = pi;
@SuppressWarnings("value-unsafe")
Object obj = pi;
@ValueSafe Object obj2 = pi;
obj2 = new Object();  // ok
List<Complex> clist = new ArrayList<Complex>();
clist.add(pi);  // (ok assuming List.add param is @ValueSafe)
List<ValueType> vlist = new ArrayList<ValueType>();
vlist.add(pi);  // (ok)
List<Object> olist = new ArrayList<Object>();
olist.add(pi);  // warning: "value-unsafe"
boolean z = pi.equals(zero);
boolean z1 = (pi == zero);  // error: reference comparison on value type
boolean z2 = (pi == null);  // error: reference comparison on value type
boolean z3 = (pi == obj2);  // error: reference comparison on value type
synchronized (pi) { }  // error: synch of value, unpredictable result
synchronized (obj2) { }  // unpredictable result
Complex qq = pi;
qq = null;  // possible NPE; warning: "null-unsafe"
qq = (Complex) obj;  // warning: "null-unsafe"
qq = Complex.cast(obj);  // OK
@SuppressWarnings("null-unsafe")
Complex empty = null;  // possible NPE
qq = empty;  // possible NPE (null pollution)

The Payoffs

It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters.

Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect such programs automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note: to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.

Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.

There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.

Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around. Here are a couple of first approaches:

public interface java.util.List<@ValueSafe T> extends Collection<T> { …

public interface java.util.List<T extends Object|ValueType> extends Collection<T> { …

(The second approach would require disjunctive types, in which value-safety is "contagious" from the constituent types.)

With more transformations, the return value types of methods can also be unboxed. This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title "Tuples in the VM". But for starters, the JVM can apply this transformation under the covers, to internally compiled methods. This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with the low-level structure types favored by modern vector and graphics processors. The lack of multiple return values has a strong distorting effect on many Java APIs.

Even if the JVM fails to unbox a value, there is still potential benefit to the value type. Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands. When copying JVM objects, it is extremely helpful to know when an object's identity is important or not. If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible. Proxies must be managed carefully, and this can be expensive. On the other hand, value types are exactly those types which a JVM can "copy and forget" with no downside.

Array types are crucial to bulk data interfaces. (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.) Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxed arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems. They require the deepest transformations, relative to today's JVM. There is an impedance mismatch between value-type arrays and Java's covariant array typing, so compromises will need to be struck with existing Java semantics. It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains.

It may be sufficient to extend the "value-safe" concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form. Such value-safe arrays would not be convertible to Object[] arrays. Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
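To make the promised transformation concrete, here is roughly what a hand-unboxed Complex might look like (a sketch under my own naming; the JVM or compiler would perform this rewriting internally rather than expose a class like this):

// Instance methods on Complex become static methods over the components,
// and an array of Complex is faked by a double[] of twice the length.
final class ComplexOps {
    static double plusRe(double aRe, double aIm, double bRe, double bIm) {
        return aRe + bRe;   // real component of a.plus(b)
    }
    static double plusIm(double aRe, double aIm, double bRe, double bIm) {
        return aIm + bIm;   // imaginary component of a.plus(b)
    }
    static double abs(double re, double im) {
        return Math.sqrt(re*re + im*im);
    }
    // Element i of a fake Complex[] lives at slots 2*i (re) and 2*i+1 (im).
    static double absAt(double[] packed, int i) {
        return abs(packed[2*i], packed[2*i + 1]);
    }
}

The stable performance model asked for above is precisely the promise that using Complex costs no more than writing ComplexOps-style code by hand.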
Implicit Method Definitions

The example of class Complex above may be unattractively complex. I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code. On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated.

Java has a rule for implicitly defining a class's constructor, if it defines no constructors explicitly. Likewise, there are rules for providing default access modifiers for interface members. Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types. Here's an example of a "highly implicit" definition of a complex number type:

public class Complex implements ValueType {  // implicitly final
    public double re, im;  // implicitly public final
    //implicit methods are defined elementwise from the fields:
    //  toString, asList, equals(2), hashCode, valueOf, cast
    //optionally, explicit methods (plus, abs, etc.) would go here
}

In other words, with the right defaults, a simple value type definition can be a one-liner. The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>.

Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly:

public @ValueType class Complex { … // implicitly final, implements ValueType

(But to me it seems better to communicate the "magic" via an interface, even if it is rooted in an annotation.)

Implicitly Defined Value Types

So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer. A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or a Cartesian point. The name and the methods convey the intended meaning.

But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage? Perhaps we are writing a method (like "divideAndRemainder") which naturally returns a pair of numbers instead of a single number. Wrapping the pair of numbers in a nominal type (like "QuotientAndRemainder") makes as little sense as wrapping a single return value in a nominal type (like "Quotient"). What we need here are structural value types, commonly known as tuples.

For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows:

public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {
    double $1, $2;
}

Here the component names are fixed and all the required methods are defined implicitly. The supertype is an abstract class which has suitable shared declarations. The name itself mentions a JVM-style method parameter descriptor, which may be "cracked" to determine the number and types of the component fields. (A hand-written approximation of such a type is sketched below.)
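Until the JVM can spin such types itself, the closest available approximation is an ordinary hand-written pair class. The following sketch (the name DD and its shape are my invention, standing in for the proposed $DD) shows the divideAndRemainder use case:

public final class DD {
    public final double $1, $2;
    private DD(double a, double b) { $1 = a; $2 = b; }
    public static DD of(double a, double b) { return new DD(a, b); }

    // A method which naturally returns a pair of numbers, with no
    // nominal "QuotientAndRemainder" baggage.
    public static DD divideAndRemainder(double num, double den) {
        double q = Math.floor(num / den);
        return of(q, num - q * den);
    }

    public boolean equals(Object x) {
        return x instanceof DD && $1 == ((DD) x).$1 && $2 == ((DD) x).$2;
    }
    public int hashCode() {
        return 31 * Double.valueOf($1).hashCode() + Double.valueOf($2).hashCode();
    }
    public String toString() { return "(" + $1 + ", " + $2 + ")"; }
}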
The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it. The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types. (Specifics of naming and name mangling need some tasteful engineering.)

Tuples also seem to demand, even more than nominal types, some support from the language. (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.) At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or "destructuring bind") for taking tuples apart, at least when they are parameter lists. Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection. That is a task for another day.

Other Use Cases

Besides complex numbers and simple tuples there are many use cases for value types. Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses.

Other types have a variable-length 'tail' of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include 'header' information.

Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct 'header' fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array; a sketch of this layout follows.
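Here is a minimal sketch of that header-plus-tail design (the class SmallIntSeq and its two-element header are invented for illustration; String's real layout is of course different):

import java.util.Arrays;

// A variable-length sequence whose first two elements are direct
// 'header' fields; longer sequences keep the rest in a 'tail' array
// that is shared and never mutated after construction.
public final class SmallIntSeq {
    private final int size;
    private final int h0, h1;   // header: first two elements, stored inline
    private final int[] tail;   // null in the common short case

    private SmallIntSeq(int size, int h0, int h1, int[] tail) {
        this.size = size; this.h0 = h0; this.h1 = h1; this.tail = tail;
    }
    public static SmallIntSeq valueOf(int... values) {
        int n = values.length;
        int[] tail = n > 2 ? Arrays.copyOfRange(values, 2, n) : null;
        return new SmallIntSeq(n, n > 0 ? values[0] : 0, n > 1 ? values[1] : 0, tail);
    }
    public int size() { return size; }
    public int get(int i) {
        if (i < 0 || i >= size) throw new IndexOutOfBoundsException();
        return i == 0 ? h0 : i == 1 ? h1 : tail[i - 2];
    }
}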
Field types and their order must be a globally visible part of the API. The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components that appear as parameters, return types, and array elements. This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations. A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile. Perhaps constant pool references to value types need to declare the field order as assumed by each API user.

This brings up the delicate status of private fields in a value type. It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack. If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction? Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).

My initial thought was to make the fields always public, which would make the security problem moot. But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes. I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value. Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.

Similar to String, we could build an efficient multi-precision decimal type along these lines:

public final class DecimalValue implements ValueType {
    // small significand and scale packed into one word; see valueOf below
    protected final long header;
    // large significands spill into a heap object; null when unused
    protected final BigInteger digits;
    private DecimalValue(long header, BigInteger digits) {
        this.header = header; this.digits = digits;
    }
    public static DecimalValue valueOf(int value, int scale) {
        assert(scale >= 0);
        // pack the 32-bit significand into the high word, the scale into the low word
        return new DecimalValue(((long)value << 32) + scale, null);
    }
    public static DecimalValue valueOf(long value, int scale) {
        if (value == (int) value)
            return valueOf((int)value, scale);
        return new DecimalValue(-scale, BigInteger.valueOf(value));
    }
}

Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed.

(Note the tension between encapsulation and unboxing in this case. It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)

Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues. (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.) Getting the full benefit of unboxing and arrays will require some new JVM magic.

Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits. For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types. This is probably only worthwhile if the unboxed arrays can be packed with such values.

More Daydreams

A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant. An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes. More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
@ValueSafe
public interface ValueType extends java.io.Serializable,
        Unsynchronizable, Interchangeable { …

public class Complex implements ValueType {
    // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
    …

It seems possible that Integer and the other wrapper types could be retrofitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retrofit String, since the prominent operation String.intern is often used with value-unsafe code.

We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:

complex pi = complex.valueOf(Math.PI, 0);
Complex boxPi = pi;  // convert to boxed
myList.add(boxPi);
complex z = myList.get(0);  // unbox

Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.

As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure? These considerations must wait for another day and another note.

More Work

It seems to me that a good sequence for introducing such value types would be as follows:
- Add the value-safety restrictions to an experimental version of javac.
- Code some sample applications with value types, including Complex and DecimalValue.
- Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
- Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:
- Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
- Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
  Ensure the feasibility of the performance model for the sample applications.
- Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

That's enough musing from me for now. Back to work!

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig, so… let's get down and dirty and start setting up the Test Rig.

What Components of Azure will I be using for building the Test Rig in the Cloud?

To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.

Azure Connect

The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec-protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. With this you can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect. A very useful video explains everything you wanted to know about Windows Azure Connect.

Azure Compute

Windows Azure Compute provides developers a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute.

Each Windows Azure compute instance represents a virtual server:

Virtual Machine Size   CPU Cores   Memory     Cost Per Hour
Extra Small            Shared      768 MB     $0.04
Small                  1           1.75 GB    $0.12
Medium                 2           3.50 GB    $0.24
Large                  4           7.00 GB    $0.48
Extra Large            8           14.00 GB   $0.96

You might want to review the Windows Azure Pricing FAQ.

Let's Get Started building the Test Rig…

Configuration   Machine Role                        Comments
VM – 1          Domain Controller for Playpit.com   On Premise
VM – 2          TFS, Test Controller                On Premise
VM – 3          Test Agent                          Cloud

In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.

I. Let's start building VM – 3: The Test Agent

Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud Template. Choose the Worker Role for reasons explained in the earlier post.

The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure SDK) on your local machine.
We will only be making changes to the Windows Azure project. Open ServiceDefinition.csdef. Ensure that the vmsize is Small (remember the cost chart above). Import the "Connect" module; I am importing the Connect module because I need to join the Worker role VM to the Playpit domain.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Go to ServiceConfiguration.Cloud.cscfg and note that settings with the key 'Microsoft.WindowsAzure.Plugins.Connect.%%%%' have been added to the configuration file. This is because you decided to import the Connect module. See the config below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Let's go step by step and understand all the highlighted parameters and where you can find the values for them.

osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The advantage of using osFamily="2" is that you get PowerShell 2.0 rather than PowerShell 1.0. In PowerShell 2.0 you can simply use "powershell -ExecutionPolicy Unrestricted ./myscript.ps1" and it will work, while in PowerShell 1.0 you will have to change the registry key by including the following in your command file before you can execute any PowerShell: "reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f". The other reason you might want to move to osFamily 2 is if you want IIS 7.5.

ActivationToken – To enable communication between the on-premise machine and the Windows Azure Worker role VM, both need to have the same token. Log on to the Windows Azure Management Portal, click on Connect, then click on Get Activation Token; this should give you the activation token. Copy the activation token to the clipboard and paste it into the configuration file.
Note – Later in the blog I'll be showing you how to install Connect on the on-premise machine.

EnableDomainJoin – Set the value to true; of course we want to join the Windows Azure worker role VM to the domain.

DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the 'service manager' and 'Active Directory Users and Computers'. Also, I created a new domain OU, namely 'CloudInstances', so that all my cloud instances joined to the domain show up there; this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I'll be covering that when I come to certificates and encryption in the coming section.

Now once you have filled all this information in, the configuration file should look something like the below:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Next we will be enabling the Remote Desktop module in the ServiceDefinition.csdef. We could make the changes manually or allow a beautiful wizard to help us; I prefer the second option. So right-click on the Windows Azure project and choose Publish.

Now once you get the publish wizard, if you haven't already done so you will be asked to import your Windows Azure subscription; this is simply the MSDN subscription activation key XML. Once you have done that, click Next to go to the Settings page and check 'Enable Remote Desktop for all roles'.

As soon as you do that you get another pop-up asking you for the details of the user that you will be logging in with (make sure you enter a reasonable expiry date; you do not want the user account to expire today). Notice the 'more information' tag at the bottom; click that to get access to the certificate section. See the screen shot below.
From the drop-down select the option to create a new certificate. In the pop-up window enter the friendly name for your certificate; in my case I entered 'WAC – Test Rig' and clicked OK. This will create a new certificate for you. Click on the View button to see the certificate details. Do you see the Thumbprint? This is the value that will go in the config file (very important). Now click on the 'Copy to File' button to copy the certificate; we will need to import the certificate into the Windows Azure Management Portal later, so make sure you save it in a safe location.

Click Finish and enter the details of the user you would like to create with permissions for remote desktop access. Once you have entered the details on the 'Remote desktop configuration' screen, click OK. From the Publish Windows Azure Wizard screen press Cancel (Cancel because we don't want to publish the role just yet) and Yes (because we want to save all the changes in the config file).

Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="Connect" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Now go to the ServiceConfiguration.Cloud.cscfg file and you will see a whole bunch of "Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%" settings added for you.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WorkerRole1">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword"
               value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd
3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae
ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS
AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg
vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM
Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Okay, let's look at them one at a time.

Enabled – Yes, we would like to enable Remote Access.

AccountUsername – This is the user name you entered while you were on the publish Windows Azure role screen, as detailed above.

AccountEncryptedPassword – Try and decode that: the certificate is used to encrypt the password you specified for the user account. Remember earlier I said, either use the instructions or wait and I'll be showing you encryption. Now, the user account I am using for RDP has the same password as my domain password, so I can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.

AccountExpiration – This is the expiration as you specified in the wizard earlier; make sure your account does not expire today.

RemoteForwarder – Check out the documentation; below is how I understand it:
-- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances.
-- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.

Certificate – Remember the certificate thumbprint from the wizard: the on-premise machine and the Windows Azure role machine that need to speak to each other must have the same thumbprint. More on that when we install the Windows Azure Connect endpoints on the on-premise machine.

As I said earlier, in this blog post I'll be showing you the manual process, so I won't be scripting any startup tasks to install the test agent or register the test agent with the TFS Server. I'll be showing you all that cool stuff in the next blog post. That's because it's important to understand the manual side of it; it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it.

Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server.

II. Windows Azure Connect: Setting up Connect on VM – 2, i.e. TFS & Test Controller

Glad you made it so far! Now, to enable communication between the on-premise TFS/Test Controller and the Azure-ed Test Agent, we need to enable the Connect endpoints.
We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be enabled on the on-premise machines. Let's have a look at how we can do this.

- Log on to VM – 2 running the TFS Server and Test Controller.
- Log on to the Windows Azure Management Portal and click on Virtual Network.
- Click on Virtual Network; if you already have a subscription you should see the screen shot below, if not, you will be asked to complete the subscription first.
- Click on Install Local Endpoints from the top left of the panel and you get a URL with a token id appended to it. Remember the token I showed you earlier? In theory the token you get here should match the token you added to the Test Agent config file.
- Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop-up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray.
- Right-click the Azure Connect icon, choose Diagnostics, and refer to this link for diagnostic detail terminology.

NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray. A bit of binging with Google revealed that the Azure Connect icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer here on MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)

- Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint).
- Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy', you will notice that the disconnected status of the icon changes to ready for connection.

III. Importing the Certificate into the Windows Azure Management Portal

Before deploying, you need to import the certificate you created in Step I into the Windows Azure Management Portal.

- Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates, as shown in the screen shot below.
- Browse to the location where you saved the certificate earlier (remember… refer to Step I in case you forgot).
- Now you should be able to see the imported certificate here; make sure the thumbprint of the certificate matches the one you inserted in the config files.

IV. Publish the Windows Azure Worker Role aka Test Agent

Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right-click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role.

Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc, and in the pop-up enter the machine name) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is: because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure. You will be able to log in with the user name and password you specified in the config file for the keys 'RemoteAccess.AccountUsername' and 'RemoteAccess.AccountEncryptedPassword' (just enter the password unencrypted). Fix it, or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move to the next step.

So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain, and that the Connect service is running successfully. If yes, give yourself a pat on the back: you are 80% mission accomplished!

Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the 'Connect to' section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; with this you will enable communication between the test controller and test agents, and between test agents themselves. Click Save.

Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.

V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller

Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the ISO and navigate to where you have extracted it. In my case, I have extracted the ISO to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double-click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.

Once you have successfully installed the Test Agent software you will need to configure the test agent. Right-click the Test Agent configuration tool and run it as a different user, i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (UAC might block something otherwise).

In the run options, you can select 'service'; you do not need to run the agent as interactive unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that no policies or restrictions block you.

Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permissions issue or a firewall blocking communication.

And now the moment of truth! Go to VM – 2, open up Visual Studio, and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured here.

VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig

I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.

Blog posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate

Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate

Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig: create a new test settings file, change the test execution method to 'Remote Execution', and select the test controller you have configured the Worker Role Test Agent against, in our case VM – 2. So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.

Review and What's Next?

A quick recap of the benefits of running the Test Rig in the cloud, and what I will be covering in the next blog post. And I would love to hear your feedback!

Advantages
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from the Azure flexibility: destroy Test Agents when not in use; it takes < 25 minutes to spin up a new Test Agent.
- Most importantly, test network latency (network latency and speed of connection are two different things – usually network latency is very hard to test). By placing the Test Agents in Microsoft data centres around the globe, one can actually test the lag in transferring the bytes, not because of a slow connection but because the page has been requested from the other side of the globe.

Next Steps
The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software, and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated.

Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • IIS 7.5, Tomcat 7 - ISAPI redirector - Fail Over - sticky sessions

    - by Jose Matias
    I have two instances of Tomcat 7.0.8 running on the same machine (Tomcat7A and Tomcat7B) and IIS 7.5 acting as a front-end load balancer with isapi-redirector 1.2.31, running on Windows 2008 R2. When I disconnect the instance which is handling a request, I can see a new instance being assigned with the same session id, but then the user is redirected to the login page.

server.xml configuration file:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="Tomcat7A">
  <Realm className="org.apache.catalina.realm.LockOutRealm">
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
  </Realm>
  <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
      <Manager className="org.apache.catalina.ha.session.DeltaManager"
               expireSessionsOnShutdown="false"
               notifyListenersOnReplication="true"/>
      <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.8" bind="7.3.1.22" port="45564"
                    frequency="500" dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto" port="4200" autoBind="100"
                  selectorTimeout="5000" maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
          <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
      </Channel>
      <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
             filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt"/>
      <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
      <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
      <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
    </Cluster>
    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log."
           suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/>
  </Host>
</Engine>

worker_mount_file=C:\tomcat\iis\conf\uriworkermap_prod.properties
worker.list = balancer,status
worker.Tomcat7B.host = 7.3.1.22
worker.Tomcat7B.type = ajp13
worker.Tomcat7B.port = 8010
worker.Tomcat7B.lbfactor = 10
worker.Tomcat7A.host = 7.3.1.22
worker.Tomcat7A.type = ajp13
worker.Tomcat7A.port = 8009
worker.Tomcat7A.lbfactor = 10
worker.balancer.type = lb
worker.balancer.sticky_session = 1
worker.balancer.balance_workers = Tomcat7B, Tomcat7A
worker.status.type = status

isapi_redirect log:

[debug] wc_get_worker_for_name::jk_worker.c (116): found a worker balancer
[debug] HttpExtensionProc::jk_isapi_plugin.c (2188): got a worker for name balancer
[debug] service::jk_lb_worker.c (1118): service sticky_session=1 id='89569C584CC4F58740D649C4BE655D36.Tomcat7B'
[debug] get_most_suitable_worker::jk_lb_worker.c (946): searching worker for partial sessionid 89569C584CC4F58740D649C4BE655D36.Tomcat7B
[debug] get_most_suitable_worker::jk_lb_worker.c (954): searching worker for session route Tomcat7B
[debug] get_most_suitable_worker::jk_lb_worker.c (968): found worker Tomcat7B (Tomcat7B) for route Tomcat7B and partial sessionid 89569C584CC4F58740D649C4BE655D36.Tomcat7B
[debug] service::jk_lb_worker.c (1161): service worker=Tomcat7B route=Tomcat7B
[debug] ajp_get_endpoint::jk_ajp_common.c (3096): acquired connection pool slot=0 after 0 retries
[debug] ajp_marshal_into_msgb::jk_ajp_common.c (605): ajp marshaling done
[debug] ajp_service::jk_ajp_common.c (2379): processing Tomcat7B with 2 retries
[debug] jk_shutdown_socket::jk_connect.c (726): About to shutdown socket 820 [7.3.1.22:24482 -> 7.3.1.22:8010]
[debug] jk_shutdown_socket::jk_connect.c (797): shutting down the read side of socket 820 [7.3.1.22:24482 -> 7.3.1.22:8010]
[debug] jk_shutdown_socket::jk_connect.c (808): Shutdown socket 820 [7.3.1.22:24482 -> 7.3.1.22:8010] and read 0 lingering bytes in 0 sec.
[debug] ajp_send_request::jk_ajp_common.c (1496): (Tomcat7B) failed sending request, socket 820 is not connected any more (errno=-10000)
[debug] ajp_next_connection::jk_ajp_common.c (823): (Tomcat7B) Will try pooled connection socket 896 from slot 1
[debug] jk_shutdown_socket::jk_connect.c (726): About to shutdown socket 896 [7.3.1.22:24488 -> 7.3.1.22:8010]
[debug] jk_shutdown_socket::jk_connect.c (797): shutting down the read side of socket 896 [7.3.1.22:24488 -> 7.3.1.22:8010]
[debug] jk_shutdown_socket::jk_connect.c (808): Shutdown socket 896 [7.3.1.22:24488 -> 7.3.1.22:8010] and read 0 lingering bytes in 0 sec.
[debug] ajp_send_request::jk_ajp_common.c (1496): (Tomcat7B) failed sending request, socket 896 is not connected any more (errno=-10000)
[info] ajp_send_request::jk_ajp_common.c (1567): (Tomcat7B) all endpoints are disconnected, detected by connect check (2), cping (0), send (0)
[debug] jk_open_socket::jk_connect.c (484): socket TCP_NODELAY set to On
[debug] jk_open_socket::jk_connect.c (608): trying to connect socket 896 to 7.3.1.22:8010
[info] jk_open_socket::jk_connect.c (626): connect to 7.3.1.22:8010 failed (errno=61)
[info] ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (7.3.1.22:8010) (errno=61)
[error] ajp_send_request::jk_ajp_common.c (1578): (Tomcat7B) connecting to backend failed.
Tomcat is probably not started or is listening on the wrong port (errno=61)
[info] ajp_service::jk_ajp_common.c (2543): (Tomcat7B) sending request to tomcat failed (recoverable), because of error during request sending (attempt=1)
[debug] ajp_service::jk_ajp_common.c (2400): retry 1, sleeping for 100 ms before retrying
[debug] ajp_send_request::jk_ajp_common.c (1572): (Tomcat7B) all endpoints are disconnected.
[debug] jk_open_socket::jk_connect.c (484): socket TCP_NODELAY set to On
[debug] jk_open_socket::jk_connect.c (608): trying to connect socket 896 to 7.3.1.22:8010
[info] jk_open_socket::jk_connect.c (626): connect to 7.3.1.22:8010 failed (errno=61)
[info] ajp_connect_to_endpoint::jk_ajp_common.c (959): Failed opening socket to (7.3.1.22:8010) (errno=61)
[error] ajp_send_request::jk_ajp_common.c (1578): (Tomcat7B) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=61)
[info] ajp_service::jk_ajp_common.c (2543): (Tomcat7B) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
[error] ajp_service::jk_ajp_common.c (2562): (Tomcat7B) connecting to tomcat failed.
[debug] ajp_reset_endpoint::jk_ajp_common.c (757): (Tomcat7B) resetting endpoint with socket -1 (socket shutdown)
[debug] ajp_done::jk_ajp_common.c (3013): recycling connection pool slot=0 for worker Tomcat7B
[debug] service::jk_lb_worker.c (1374): worker Tomcat7B escalating local error to global error
[info] service::jk_lb_worker.c (1388): service failed, worker Tomcat7B is in error state
[debug] service::jk_lb_worker.c (1399): recoverable error... will try to recover on other worker
[debug] get_most_suitable_worker::jk_lb_worker.c (946): searching worker for partial sessionid 89569C584CC4F58740D649C4BE655D36.Tomcat7B
[debug] get_most_suitable_worker::jk_lb_worker.c (954): searching worker for session route Tomcat7B
[debug] get_most_suitable_worker::jk_lb_worker.c (1001): found best worker Tomcat7A (Tomcat7A) using method 'Request'
[debug] service::jk_lb_worker.c (1161): service worker=Tomcat7A route=Tomcat7B
[debug] ajp_get_endpoint::jk_ajp_common.c (3096): acquired connection pool slot=0 after 0 retries
[debug] ajp_marshal_into_msgb::jk_ajp_common.c (605): ajp marshaling done
[debug] ajp_service::jk_ajp_common.c (2379): processing Tomcat7A with 2 retries
[debug] ajp_send_request::jk_ajp_common.c (1572): (Tomcat7A) all endpoints are disconnected.
[debug] jk_open_socket::jk_connect.c (484): socket TCP_NODELAY set to On
[debug] jk_open_socket::jk_connect.c (608): trying to connect socket 896 to 7.3.1.22:8009
[debug] jk_open_socket::jk_connect.c (634): socket 896 [7.3.1.22:24496 -> 7.3.1.22:8009] connected
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): sending to ajp13 pos=4 len=615 max=8192
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0000 .4.c....HTTP/1.1
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0010 .../Accounter/pr
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0020 intFrameSet.jhtm
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0030 l...::1...::1...
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0040 localhost..P....
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0050 ...Keep-Alive...
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0060 ..0....rimage/jp [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0070 eg,.image/gif,.i [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0080 mage/pjpeg,.appl [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0090 ication/x-ms-app [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00a0 lication,.applic [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00b0 ation/xaml+xml,. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00c0 application/x-ms [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00d0 -xbap,.*/*...Acc [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00e0 ept-Encoding...g [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00f0 zip,.deflate...A [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0100 ccept-Language.. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0110 .nb-NO....]Usern [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0120 ame=NA_jose.mati [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0130 as_AT_addenergy. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0140 no;.JSESSIONID=8 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0150 9569C584CC4F5874 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0160 0D649C4BE655D36. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0170 Tomcat7B.....loc [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0180 alhost.....http: [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0190 //localhost/Acco [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01a0 unter/NemsAccoun [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01b0 ter.jhtml....uMo [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01c0 zilla/4.0.(compa [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01d0 tible;.MSIE.8.0; [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01e0 .Windows.NT.6.1; [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01f0 .WOW64;.Trident/ [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0200 4.0;.SLCC2;..NET [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0210 .CLR.2.0.50727;. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0220 .NET4.0C;..NET4. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0230 0E)............F [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0240 rameName=Reports [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0250 _CS_EUETS....Tom [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0260 cat7B........... 
[debug] ajp_send_request::jk_ajp_common.c (1632): (Tomcat7A) request body to send 0 - request body to resend 0 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): received from ajp13 pos=0 len=238 max=8192 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0000 .....Moved.Tempo [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0010 rarily......OJSE [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0020 SSIONID=6A2507A4 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0030 626F698EC74A733C [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0040 DBA7D9FE.Tomcat7 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0050 A;.Path=/Account [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0060 er;.HttpOnly...P [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0070 ragma...no-cache [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0080 ...Cache-Control [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0090 ...no-cache....& [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00a0 http://localhost [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00b0 /Accounter/login [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00c0 .jhtml.....text/ [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00d0 html;charset=ISO [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 00e0 -8859-1.....0... [debug] ajp_unmarshal_response::jk_ajp_common.c (660): status = 302 [debug] ajp_unmarshal_response::jk_ajp_common.c (667): Number of headers is = 6 [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[0] [Set-Cookie] = [JSESSIONID=6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A; Path=/Accounter; HttpOnly] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[1] [Pragma] = [no-cache] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[2] [Cache-Control] = [no-cache] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[3] [Location] = [http://localhost/Accounter/login.jhtml] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[4] [Content-Type] = [text/html;charset=ISO-8859-1] [debug] ajp_unmarshal_response::jk_ajp_common.c (723): Header[5] [Content-Length] = [0] [debug] start_response::jk_isapi_plugin.c (963): Starting response for URI '/Accounter/printFrameSet.jhtml' (protocol HTTP/1.1) [debug] start_response::jk_isapi_plugin.c (1063): Not using Keep-Alive [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): received from ajp13 pos=0 len=2 max=8192 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0000 ................ 
[debug] ajp_process_callback::jk_ajp_common.c (1943): AJP13 protocol: Reuse is OK [debug] ajp_reset_endpoint::jk_ajp_common.c (757): (Tomcat7A) resetting endpoint with socket 896 [debug] ajp_done::jk_ajp_common.c (3013): recycling connection pool slot=0 for worker Tomcat7A [debug] HttpExtensionProc::jk_isapi_plugin.c (2211): service() returned OK [debug] HttpFilterProc::jk_isapi_plugin.c (1851): Filter started [debug] map_uri_to_worker_ext::jk_uri_worker_map.c (1036): Attempting to map URI '/localhost/Accounter/login.jhtml' from 8 maps [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/servlet/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/ws/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/nems*.pdf=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.service=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.jhtml=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.json=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/jkmanager=status' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/servlet/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/ws/*=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/nems*.pdf=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.service=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (850): Attempting to map context URI '/Accounter/*.jhtml=balancer' source 'uriworkermap' [debug] find_match::jk_uri_worker_map.c (863): Found a wildchar match '/Accounter/*.jhtml=balancer' [debug] HttpFilterProc::jk_isapi_plugin.c (1938): check if [/Accounter/login.jhtml] points to the web-inf directory [debug] HttpFilterProc::jk_isapi_plugin.c (1954): [/Accounter/login.jhtml] is a servlet url - should redirect to balancer [debug] HttpFilterProc::jk_isapi_plugin.c (1994): fowarding escaped URI [/Accounter/login.jhtml] [debug] init_ws_service::jk_isapi_plugin.c (2982): Reading extension header HTTP_TOMCATWORKER0000000180000000: balancer [debug] init_ws_service::jk_isapi_plugin.c (2983): Reading extension header HTTP_TOMCATWORKERIDX0000000180000000: 5 [debug] init_ws_service::jk_isapi_plugin.c (2984): Reading extension header HTTP_TOMCATURI0000000180000000: /Accounter/login.jhtml [debug] init_ws_service::jk_isapi_plugin.c (2985): Reading extension header HTTP_TOMCATQUERY0000000180000000: (null) [debug] init_ws_service::jk_isapi_plugin.c (3040): Applying service extensions [debug] init_ws_service::jk_isapi_plugin.c (3298): Service protocol=HTTP/1.1 method=GET host=::1 addr=::1 name=localhost port=80 auth= user= uri=/Accounter/login.jhtml [debug] init_ws_service::jk_isapi_plugin.c (3310): Service request headers=9 attributes=0 chunked=no content-length=0 available=0 [debug] wc_get_worker_for_name::jk_worker.c (116): found a worker balancer [debug] HttpExtensionProc::jk_isapi_plugin.c (2188): got a worker for name balancer [debug] 
service::jk_lb_worker.c (1118): service sticky_session=1 id='6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A' [debug] get_most_suitable_worker::jk_lb_worker.c (946): searching worker for partial sessionid 6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A [debug] get_most_suitable_worker::jk_lb_worker.c (954): searching worker for session route Tomcat7A [debug] get_most_suitable_worker::jk_lb_worker.c (968): found worker Tomcat7A (Tomcat7A) for route Tomcat7A and partial sessionid 6A2507A4626F698EC74A733CDBA7D9FE.Tomcat7A [debug] service::jk_lb_worker.c (1161): service worker=Tomcat7A route=Tomcat7A [debug] ajp_get_endpoint::jk_ajp_common.c (3096): acquired connection pool slot=0 after 0 retries [debug] ajp_marshal_into_msgb::jk_ajp_common.c (605): ajp marshaling done [debug] ajp_service::jk_ajp_common.c (2379): processing Tomcat7A with 2 retries [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): sending to ajp13 pos=4 len=577 max=8192 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0000 .4.=....HTTP/1.1 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0010 .../Accounter/lo [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0020 gin.jhtml...::1. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0030 ..::1...localhos [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0040 t..P.......Keep- [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0050 Alive.....0....r [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0060 image/jpeg,.imag [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0070 e/gif,.image/pjp [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0080 eg,.application/ [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0090 x-ms-application [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00a0 ,.application/xa [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00b0 ml+xml,.applicat [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00c0 ion/x-ms-xbap,.* [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00d0 /*...Accept-Enco [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00e0 ding...gzip,.def [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 00f0 late...Accept-La [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0100 nguage...nb-NO.. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0110 ..]Username=NA_j [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0120 ose.matias_AT_ad [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0130 denergy.no;.JSES [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0140 SIONID=6A2507A46 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0150 26F698EC74A733CD [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0160 BA7D9FE.Tomcat7A [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0170 .....localhost.. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0180 ...http://localh [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0190 ost/Accounter/Ne [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01a0 msAccounter.jhtm [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01b0 l....uMozilla/4. 
[debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01c0 0.(compatible;.M [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01d0 SIE.8.0;.Windows [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01f0 Trident/4.0;.SLC [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 01e0 .NT.6.1;.WOW64;. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0200 C2;..NET.CLR.2.0 [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0210 .50727;..NET4.0C [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0220 ;..NET4.0E)..... [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0230 .......Tomcat7A. [debug] ajp_connection_tcp_send_message::jk_ajp_common.c (1145): 0240 ................ [debug] ajp_send_request::jk_ajp_common.c (1621): (Tomcat7A) Statistics about invalid connections: connect check (0), cping (0), send (0) [debug] ajp_send_request::jk_ajp_common.c (1632): (Tomcat7A) request body to send 0 - request body to resend 0 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): received from ajp13 pos=0 len=135 max=8192 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0000 .....OK.....Prag [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0010 ma...no-cache... [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0020 Expires...Thu,.0 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0030 1.Jan.1970.00:00 [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0040 :00.GMT...Cache- [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0050 Control...no-cac [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0060 he...Cache-Contr [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0070 ol...no-store... [debug] ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 0080 ..2995..........
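What the trace shows: both pooled AJP connections to Tomcat7B (port 8010) were already dead, fresh connect attempts to that port failed outright (errno=61), so jk put Tomcat7B in an error state and retried the request on Tomcat7A while still carrying route Tomcat7B. Tomcat7A has no copy of session 89569C58...Tomcat7B, so the application answered with a 302 to /Accounter/login.jhtml and issued a new JSESSIONID ending in .Tomcat7A: the user lost the session. A minimal workers.properties sketch that detects dead backends earlier, assuming mod_jk/isapi_redirect 1.2.27 or newer; the timeout values are illustrative, not tuned:

worker.Tomcat7B.ping_mode = A                 # CPing/CPong probe on connect, before requests, and at intervals
worker.Tomcat7B.ping_timeout = 10000          # ms to wait for the CPong reply
worker.Tomcat7B.connection_pool_timeout = 60  # drop pooled sockets idle longer than 60 s
worker.Tomcat7A.ping_mode = A
worker.Tomcat7A.ping_timeout = 10000
worker.Tomcat7A.connection_pool_timeout = 60
worker.maintain = 60                          # run jk's connection maintenance every 60 s

If connection_pool_timeout is set, the matching AJP connector in each Tomcat's server.xml should carry the same value as connectionTimeout in milliseconds (60000 here), so both sides agree on when an idle socket dies. Note that failover can only be seamless if the two Tomcats replicate sessions; without a cluster, any failover ends at the login page exactly as in this trace.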

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
Recently I designed and configured a 4-node cluster for a webapp that does a lot of file handling. The cluster is split into two main roles, webserver and storage, and each role is replicated to a second server using DRBD in active/passive mode. The webservers NFS-mount the data directory of the storage server, and the storage server also runs a webserver to serve files to browser clients. On the storage servers I created a GFS2 filesystem to hold the data, layered on top of DRBD. I chose GFS2 mainly for its advertised performance and because the volume has to be quite large. Since we entered production I have been facing two problems that I think are deeply connected. First, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operation. Analyzing the logs, I found that NFS stops answering for a while and outputs the following lines:

Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

In this case the hang lasted 16 seconds, but sometimes it takes one or two minutes to resume normal operation. My first guess was that this was happening due to heavy load on the NFS mount, and that increasing RPCNFSDCOUNT to a higher value would make it stable.
I've increased it several times and, after a while, the messages started appearing less often. The value is now at 32. After investigating further, I came across a different hang, although the NFS messages still appear in the logs. Sometimes the GFS2 filesystem simply hangs, which causes both NFS and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operation. This hang leaves no trace on the client side (no "nfs: server ... not responding" messages either), and on the storage side the log system appears to be empty, even though rsyslogd is running. The nodes connect through a 10Gbps non-dedicated connection, but I don't think that is the issue, because the GFS2 hang is confirmed even when connecting directly to the active storage server. I've been trying to solve this for a while now, and I tried different NFS configuration options before I found out that the GFS2 filesystem is also hanging. The NFS share is exported as:

/srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

And the NFS client mounts it with:

mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

After some tests, these were the settings that yielded the best performance for the cluster. I am desperate to find a solution, as the cluster is already in production; I need to make sure these hangs won't happen again, and I don't know for sure what or how I should be benchmarking. What I can tell is that this happens under heavy load: I tested the cluster earlier and these problems were not happening at all. Please tell me if you need configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different filesystem, but I need some solid pointers on whether that will solve the problem, as the volume size is extremely large at this point. The servers are hosted by a third-party enterprise and I don't have physical access to them. Best regards.

EDIT 1: The servers are physical servers and their specs are:

Webservers:
Intel Bi Xeon E5606 2x4 2.13GHz
24GB DDR3
Intel SSD 320 2 x 120GB RAID 1

Storage:
Intel i5 3550 3.3GHz
16GB DDR3
12 x 2TB SATA

Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps link; please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect to each other and to allow a graceful failover. NFS therefore runs over a public connection and not on any private network (it was on a private network before the upgrade, where the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. As far as I know, the hosting provider isn't blocking or limiting the connections between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). I hope this helps in figuring out the problem.
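Before raising RPCNFSDCOUNT any further, it is worth confirming that the nfsd thread pool is actually saturating. On EL6-era kernels the nfsd statistics expose this directly; a quick check (a sketch, with the field layout of the 2.6.32 kernel listed below):

# run on the active storage server
grep th /proc/net/rpc/nfsd
# e.g.  th 32 1204 ...
#       first field:  threads configured (32)
#       second field: times *all* threads were busy at once since boot;
#                     if it grows quickly under load, the pool is still
#                     too small -- if it stays flat, look below NFS
watch -d 'grep th /proc/net/rpc/nfsd'   # watch the counter under load

If the counter stays flat while the hangs keep happening, the stalls are more likely beneath NFS, in GFS2 or DRBD, than in the NFS server itself.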
EDIT 2: Relevant software versions:

CentOS, kernel 2.6.32-279.9.1.el6.x86_64
nfs-utils-1.2.3-26.el6.x86_64
nfs-utils-lib-1.1.5-4.el6.x86_64
gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
drbd84-utils-8.4.2-1.el6.elrepo.x86_64

DRBD configuration on the storage servers:

#/etc/drbd.d/storage.res
resource storage {
  protocol C;
  on <server1 fqdn> {
    device /dev/drbd0;
    disk /dev/vg_storage/LV_replicated;
    address <server1 ip>:7788;
    meta-disk internal;
  }
  on <server2 fqdn> {
    device /dev/drbd0;
    disk /dev/vg_storage/LV_replicated;
    address <server2 ip>:7788;
    meta-disk internal;
  }
}

NFS configuration on the storage servers:

#/etc/sysconfig/nfs
RPCNFSDCOUNT=32
STATD_PORT=10002
STATD_OUTGOING_PORT=10003
MOUNTD_PORT=10004
RQUOTAD_PORT=10005
LOCKD_UDPPORT=30001
LOCKD_TCPPORT=30001

(Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

GFS2 configuration:

# gfs2_tool gettune <mountpoint>
incore_log_blocks = 1024
log_flush_secs = 60
quota_warn_period = 10
quota_quantum = 60
max_readahead = 262144
complain_secs = 10
statfs_slow = 0
quota_simul_sync = 64
statfs_quantum = 30
quota_scale = 1.0000 (1, 1)
new_files_jdata = 0

Storage network environment:

eth0      Link encap:Ethernet  HWaddr <mac address>
          inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
          inet6 addr: <ip address> Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

eth0:0    Link encap:Ethernet  HWaddr <mac address>
          inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

The IP addresses are statically assigned with the following network configurations:

DEVICE="eth0"
BOOTPROTO="static"
HWADDR=<mac address>
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=<ip address>
NETMASK=<net mask>

and

DEVICE="eth0:0"
BOOTPROTO="static"
HWADDR=<mac address>
IPADDR=<ip failover>
NETMASK=<net mask>
ONBOOT="yes"
BROADCAST=<bcast address>

Hosts file, to allow a graceful NFS failover in conjunction with the NFS option fsid=25 set on both storage servers:

#/etc/hosts
<storage ip failover address> active.storage.vlan
<webserver ip failover address> active.service.vlan

As you can see, packet errors are down to zero. I have also run ping for a long time without any packet loss. The MTU is the standard 1500; since there is no VLAN for now, this is the MTU used for traffic between the servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key reason I think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details, please tell me.

EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming. Nevertheless, I have been monitoring the server closely, and the following error came up in dmesg. The source of the problem is clearly GFS2: it is waiting for a lock and ends up starving after a while.

INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article
