Search Results

Search found 785 results on 32 pages for 'warren van rooyen'.


  • C# PredicateBuilder Entities: The parameter 'f' was not bound in the specified LINQ to Entities query

    - by Neothor
    I needed to build a dynamic filter, and I wanted to keep using entities. For that reason I wanted to use the PredicateBuilder from Albahari. I created the following code:

        var invoerDatums = PredicateBuilder.True<OnderzoeksVragen>();
        var inner = PredicateBuilder.False<OnderzoeksVragen>();
        foreach (var filter in set.RapportInvoerFilter.ToList())
        {
            if (filter.IsDate)
            {
                var date = DateTime.Parse(filter.Waarde);
                invoerDatums = invoerDatums.Or(o => o.Van >= date && o.Tot <= date);
            }
            else
            {
                string temp = filter.Waarde;
                inner = inner.Or(o => o.OnderzoekType == temp);
            }
        }
        invoerDatums = invoerDatums.And(inner);
        var onderzoeksVragen = entities.OnderzoeksVragen
            .AsExpandable()
            .Where(invoerDatums)
            .ToList();

    When I ran the code there was only one filter, which wasn't a date filter, so only the inner predicate was filled. When the predicate was executed I got the following error: "The parameter 'f' was not bound in the specified LINQ to Entities query expression." While searching for an answer I found the following page, but that fix is already implemented in LINQKit. Has anyone else experienced this error, and do you know how to solve it?
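
    One possible workaround (a hedged sketch, not a verified fix; it assumes LINQKit's Expand() extension method, which inlines invoked expression trees before LINQ to Entities sees them) is to expand the composed predicates explicitly:

        // Hedged sketch: expanding the composed predicate removes LINQKit's
        // Invoke placeholders, which are what surface as the unbound parameter.
        invoerDatums = invoerDatums.And(inner.Expand());

        var onderzoeksVragen = entities.OnderzoeksVragen
            .AsExpandable()
            .Where(invoerDatums.Expand())
            .ToList();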

    Read the article

  • Time flies like an arrow demo in WinForms

    - by Benjol
    Looking at the Reactive Extensions for JavaScript demo on Jeff Van Gogh's blog, I thought I'd give it a try in C#/WinForms, but it doesn't seem to work so well. I just threw this into the constructor of a form (with the Rx framework installed and referenced):

        Observable.Context = SynchronizationContext.Current;
        var mousemove = Observable.FromEvent<MouseEventArgs>(this, "MouseMove");
        var message = "Time flies like an arrow".ToCharArray();
        for (int i = 0; i < message.Length; i++)
        {
            var l = new Label()
            {
                Text = message[i].ToString(),
                AutoSize = true,
                TextAlign = ContentAlignment.MiddleCenter
            };
            int closure = i;
            mousemove
                .Delay(closure * 150)
                .Subscribe(e =>
                {
                    l.Left = e.EventArgs.X + closure * 15 + 10;
                    l.Top = e.EventArgs.Y;
                    //Debug.WriteLine(l.Text);
                });
            Controls.Add(l);
        }

    When I move the mouse, the letters seem to get moved in a random order, and if I uncomment the Debug line, I see multiple events for the same letter. Any ideas? I've tried Throttle, but it doesn't seem to make any difference. Am I just asking too much of WinForms to move all those labels around? (Cross-posted on the Rx forum.)
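
    One thing worth trying (a hedged sketch; it assumes the Rx build in use exposes the ObserveOn(SynchronizationContext) overload): Delay hands its notifications to a scheduler thread, so the Subscribe callbacks may run off the UI thread in arbitrary order. Marshalling them back explicitly keeps the label updates ordered:

        // Hypothetical variation of the loop body: push delayed notifications
        // back onto the WinForms synchronization context before touching Labels.
        mousemove
            .Delay(closure * 150)
            .ObserveOn(SynchronizationContext.Current)
            .Subscribe(e =>
            {
                l.Left = e.EventArgs.X + closure * 15 + 10;
                l.Top = e.EventArgs.Y;
            });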

    Read the article

  • Java complex validation in Dropwizard?

    - by miku
    I'd like to accept JSON on a REST endpoint and convert it to the correct type for immediate validation. The endpoint looks like this:

        @POST
        public Response createCar(@Valid Car car) {
            // persist to DB
            // ...
        }

    But there are many subclasses of Car, e.g. Van, SelfDrivingCar, RaceCar, etc. How could I accept the different JSON representations on the endpoint, while keeping the validation code in the Resource as concise as something like @Valid Car car? Again: I send in JSON like this (here, the representation of a subclass of Car, namely SelfDrivingCar):

        {
            "id" : "t1",                        // every Car has an Id
            "kind" : "selfdriving",             // every Car has a type-hint
            "max_speed" : "200 mph",            // some attribute
            "ai_provider" : "fastcarsai ltd."   // this is SelfDrivingCar-specific
        }

    and I'd like the validation machinery to look into the kind attribute, create an instance of the appropriate subclass (here e.g. SelfDrivingCar) and perform validation. I know I could create different endpoints for all kinds of cars, but that does not seem DRY. And I know that I could use a real Validator instead of the annotation and do it by hand, so I'm just asking if there's an elegant shortcut for this problem.
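
    Since Dropwizard resources use Jackson for (de)serialization, one plausible shortcut (a sketch; the subtype names other than "selfdriving" are assumptions) is Jackson's polymorphic type handling. It picks the concrete subclass from the kind property during binding, so the resource can keep declaring @Valid Car car:

        import com.fasterxml.jackson.annotation.JsonSubTypes;
        import com.fasterxml.jackson.annotation.JsonTypeInfo;

        // Jackson reads "kind" and instantiates the matching subclass before
        // validation runs, so @Valid sees a SelfDrivingCar, Van, etc.
        @JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
                      include = JsonTypeInfo.As.PROPERTY,
                      property = "kind")
        @JsonSubTypes({
            @JsonSubTypes.Type(value = SelfDrivingCar.class, name = "selfdriving"),
            @JsonSubTypes.Type(value = Van.class, name = "van"),
            @JsonSubTypes.Type(value = RaceCar.class, name = "race")
        })
        public abstract class Car {
            public String id;
        }

    Subclass-specific constraint annotations can then live on the subclasses themselves.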

    Read the article

  • To have an Integer pointing to 3 ordered lists in Java

    - by Masi
    Which data structure would you use in the place of X to have efficient merges, sorts and additions as described below?

    Possible solution #1: one HashMap to an X data structure. Having a HashMap pointing from fileID to some data structure linking word, wordCount and wordID may be a good solution. However, I have not found a way to implement it. I am not allowed to use Postgres or any similar tool to keep my data normalized. I want to have efficient merges, sorts and additions according to fileID, wordID or wordCount for the type below. I have the type Words, which has the field fileID that points to a list of words and to related pieces of information:

        The Type
        class Words
        ===================================
        fileID: int
        [list of words]      : ArrayList
        [list of wordCounts] : ArrayList
        [list of wordIDs]    : ArrayList

    Example of the data in Words:

        fileID   word   wordCount   wordID
        -- instance1 of Words --
        1        He     123         1111
        1        llo    321         2
        -- instance2 of Words --
        2        Van    213         666
        2        cou    777         932

    Example of a needed merge (on wordID = 2):

        fileID  wordID                      fileID  wordID
        1       2                           1       3
        2       2       ==(wordID=2)==>     1       2
        2       3                           2       2

    I cannot see any use for set operations such as intersections here, because order is needed. Having about three HashMaps makes sorting difficult:

        from word to wordID in a given fileID
        from wordID to fileID
        from wordID to wordCount in a given fileID
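
    For illustration, a minimal sketch of the Words type as described (the sortByWordID helper is hypothetical, showing one way to keep the three parallel lists aligned while sorting):

        import java.util.ArrayList;
        import java.util.Comparator;

        // The type from the question: three parallel lists owned by one fileID.
        class Words {
            int fileID;
            ArrayList<String>  words      = new ArrayList<>();
            ArrayList<Integer> wordCounts = new ArrayList<>();
            ArrayList<Integer> wordIDs    = new ArrayList<>();

            // Hypothetical helper: sort rows by wordID by permuting an index
            // list, so word/wordCount/wordID stay aligned across the lists.
            void sortByWordID() {
                ArrayList<Integer> idx = new ArrayList<>();
                for (int i = 0; i < wordIDs.size(); i++) idx.add(i);
                idx.sort(Comparator.comparing(wordIDs::get));
                ArrayList<String>  w  = new ArrayList<>();
                ArrayList<Integer> wc = new ArrayList<>();
                ArrayList<Integer> wi = new ArrayList<>();
                for (int i : idx) {
                    w.add(words.get(i));
                    wc.add(wordCounts.get(i));
                    wi.add(wordIDs.get(i));
                }
                words = w; wordCounts = wc; wordIDs = wi;
            }
        }

    A HashMap<Integer, Words> then gives the fileID-to-Words lookup from possible solution #1.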

    Read the article

  • PHP Form Sending Information to Limbo!

    - by drew
    I was told my client's quote form has not been generating very many emails. I have learned that although the form brings you to a confirmation page, the information never reaches the recipient. I have altered the code so it goes to my office email for testing purposes. If I post the code for the form elements below, would someone be able to spot what the problem might be? Thank you very much! The quote page is at http://autoglass-plus.com/quote.php

    First is the form itself:

        <form id="quoteForm" name="form" action="form/index.php" method="post">
        <fieldset>
          <p><strong>Contact Information:</strong><br /></p>
          <div><label for="firstname">First Name:<br /></label>
            <input type="text" size="30" name="firstname" class="txt" id="firstname" /></div>
          <div><label for="lastname">Last Name:<br /></label>
            <input type="text" size="30" name="lastname" class="txt" id="lastname" /></div>
          <div><label for="address">Address:<br /></label>
            <input type="text" size="30" name="address" class="txt" id="address" /></div>
          <div><label for="city">City:<br /></label>
            <input type="text" size="30" name="city" class="txt" id="city" /></div>
          <div><label for="state">State:<br /></label>
            <input type="text" size="30" name="state" class="txt" id="state" /></div>
          <div><label for="zip">Zip:<br /></label>
            <input type="text" size="30" name="zip" class="txt" id="zip" /></div>
          <div><label for="label">Phone:<br /></label>
            <input type="text" size="30" name="phone" class="txt" id="label" /></div>
          <div><label for="email">Email:<br /></label>
            <input type="text" size="30" name="email" class="txt" id="email" /></div>
          <p><br /><b>Insurace Information</b></p>
          <p><i>Auto Glass Plus in an Approved Insurance Vendor. Insurance claims require additional information that we will request when we contact you for your quote.</i></p>
          <br />
          <div>
            <input type="checkbox" name="insurance" value="yes" /> Check here if this is an insurance claim.<br />
            <label for="year">Insurance Provider:<br /></label>
            <input type="text" size="30" name="provider" class="txt" id="provider" />
          </div>
          <p><br /><b>Vehicle Information:</b><br /></p>
          <div><label for="year">Vehicle Year :<br /></label>
            <input type="text" size="30" name="year" class="txt" id="year" /></div>
          <div><label for="make">Make: </label><br />
            <input type="text" size="30" name="make" class="txt" id="make" /></div>
          <div><label for="model">Model:</label><br />
            <input type="text" size="30" name="model" class="txt" id="model" /></div>
          <div><label for="body">Body Type:<br /></label>
            <select name="body" id="body">
              <option>Select One</option>
              <option value="2 Door Hatchback">2 Door Hatchback</option>
              <option value="4 Door Hatchback">4 Door Hatchback</option>
              <option value="2 Door Sedan">2 Door Sedan</option>
              <option value="4 Door Sedan">4 Door Sedan</option>
              <option value="Station Wagon">Station Wagon</option>
              <option value="Van">Van</option>
              <option value="Sport Utility">Sport Utility</option>
              <option value="Pickup Truck">Pickup Truck</option>
              <option value="Other Truck">Other Truck</option>
              <option value="Recreational Vehicle">Recreational Vehicle</option>
              <option value="Other">Other</option>
            </select>
          </div>
          <p><b><br />Glass in Need of Repair:</b><br /></p>
          <div>
            <input type="checkbox" name="repairs" value="Windshield" /> Windshield<br />
            <input type="checkbox" name="repairs" value="Back Glass" /> Back Glass<br />
            <input type="checkbox" name="repairs" value="Driver&rsquo;s Side Window" /> Side Window*<br />
            <input type="checkbox" name="repairs" value="Chip Repair" /> Chip Repair<br />
            <input type="checkbox" name="repairs" value="Other" /> Other
          </div>
          <p><strong>*Important:</strong> For side glass, please indicate the specific window that needs replacement <i>(e.g. passenger side rear door or driver side vent glass)</i>, and any tinting color preference in the <strong>Describe Damage</strong> field.</p>
          <p><br /><b>Describe Damage</b></p>
          <div><textarea rows="6" name="damage" id="damage" cols="37" class="txt"></textarea></div>
          <input type="hidden" name="thanks" value="../thanks.php" />
          <input type="hidden" name="required_fields" value="firstname, lastname, email, phone" />
          <input type="hidden" name="html_template" value="testform.tpl.html" />
          <input type="hidden" name="mail_template" value="testmail.tpl.txt" />
          <div class="submit"><center>
            <input type="submit" value="Submit Form" name="Submit" id="Submit" />
          </center></div>
        </fieldset>
        </form>

    The form submits to a file named index.php inside the "form" folder:

        <?php
        $script_root = './';
        $referring_server = 'www.wmsgroup.com, wmsgroup.com, scripts';
        $allow_empty_referer = 'yes'; // (yes, no)
        $language = 'en'; // (see folder 'languages')
        $ip_banlist = '';
        $ip_address_count = '0';
        $ip_address_duration = '48';
        $show_limit_errors = 'yes'; // (yes, no)
        $log_messages = 'no'; // (yes, no)
        $text_wrap = '65';
        $show_error_messages = 'yes';
        $attachment = 'no'; // (yes, no)
        $attachment_files = 'jpg, gif, png, zip, txt, pdf, doc, ppt, tif, bmp, mdb, xls, txt';
        $attachment_size = 100000;
        $path['logfile'] = $script_root . 'logfile/logfile.txt';
        $path['upload'] = $script_root . 'upload/'; // chmod 777 upload
        $path['templates'] = $script_root . 'templates/';
        $file['default_html'] = 'testform.tpl.html';
        $file['default_mail'] = 'testmail.tpl.txt';

        /*****************************************************
        ** Add further words, text, variables and stuff
        ** that you want to appear in the templates here.
        ** The values are displayed in the HTML output and
        ** the e-mail.
        *****************************************************/
        $add_text = array(
            'txt_additional' => 'Additional', // {txt_additional}
            'txt_more' => 'More' // {txt_more}
        );

        /*****************************************************
        ** Do not edit below this line (end of settings)
        *****************************************************/

        /*****************************************************
        ** Send safety signal to included files
        *****************************************************/
        define('IN_SCRIPT', 'true');

        /*****************************************************
        ** Load formmail script code
        *****************************************************/
        include($script_root . 'inc/formmail.inc.php')
        ?>

    There are also formmail.inc.php, testform.tpl.html, testmail.tpl.txt and then the confirmation page, thanks.php. I want to know how these all work together and what the problem could be.
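
    A hedged note (the formmail.inc.php source isn't shown, so this is an assumption): FormMail-style scripts typically send via PHP's mail() and often drop submissions that fail the $referring_server check -- the config above lists wmsgroup.com, while this form posts from autoglass-plus.com, which seems worth a look. A minimal standalone test (hypothetical test.php) can separate server mail problems from form problems:

        <?php
        // If this message never arrives either, the server's mail setup is the
        // problem; if it does arrive, suspect the form config (e.g. the
        // referer check above).
        $ok = mail('you@example.com',                    // your test address
                   'FormMail delivery test',
                   'Test body sent ' . date('r'),
                   'From: forms@autoglass-plus.com');
        var_dump($ok);
        ?>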

    Read the article

  • A way to edit content by altering one file?

    - by Chris
    Hi, I have a contact CSS tab on the left side of my website. I have more than 30 pages, and I don't want to manually alter all those pages later when the data has changed. Does anyone know a solution so I only have to alter one file to have all pages edited? Perhaps in JavaScript? The code below is for the tab:

        <div class="slide-out-div">
          <a class="handle" href="http://link-for-non-js-users">Content</a>
          <h3>Onze contact gegevens</h3>
          <p>Adres: van Ostadestraat 55<br />
             Postcode: 8932 JZ<br />
             Plaats: Leeuwarden<br />
             Tel: 058 844 66 28<br />
             Mob: 0629594595 <br />
             E-mail: <a href="mailto:[email protected]">[email protected]</a><br /><br />
          </p>
          <p>Mocht u vragen hebben dan kunt u gerust bij ons terecht voor meer informatie.</p>
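
    One common single-file approach (a sketch, not the only option; it assumes jQuery is available and a hypothetical shared fragment contact-tab.html holding the markup above):

        <!-- On every page: an empty placeholder plus one shared fragment. -->
        <div id="contact-tab"></div>
        <script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
        <script>
          // Load the shared markup from one file; edits then happen in one place.
          $(function () {
            $('#contact-tab').load('/contact-tab.html');
          });
        </script>

    A server-side include (e.g. PHP's include 'contact-tab.php';) achieves the same thing without relying on JavaScript.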

    Read the article

  • Writing a report

    - by wvd
    Hello all, For some time now I've been investing more effort in profiling things better, really thinking about how to do a thing and why. Now I'm going to start a new project, about which I will be writing a report. The report will cover everything I wrote in the project and why, and I'll be investigating some things and doing particular research about them. I've seen some reports, such as one on game programming in Haskell using FRP. However, after reading several reports, they all seem to be built differently. I have a few questions about writing a report:

    1] What are the things I really should include, and what are the things I really shouldn't include?

    2] Is it useful to include graphs about different methods/approaches to the same problem, when you only included one in your project, to show WHY you didn't include the other methods? Or should I just explain the method/approach used in the project?

    3] Should I only be writing the report after I've completed the project, or should I also write pages about what I expect and how I'm going to build the software?

    Thanks, William van Doorn

    Read the article

  • PASS: Election Changes for 2011

    - by Bill Graziano
    Last year after the election, the PASS Board created an Election Review Committee.  This group was charged with reviewing our election procedures and making suggestions to improve the process.  You can read about the formation of the group and review some of the intermediate work on the site – especially in the forums.

    I was one of the members of the group along with Joe Webb (Chair), Lori Edwards, Brian Kelley, Wendy Pastrick, Andy Warren and Allen White.  This group worked from October to April on our election process.  Along the way we:

        Interviewed interested parties, including former NomCom members, Board candidates and anyone else that came forward.
        Held a session at the Summit to allow interested parties to discuss the issues.
        Had numerous conference calls and worked through the various topics.

    I can't thank these people enough for the work they did.  They invested a tremendous number of hours thinking, talking and writing about our elections.  I'm proud to say I was a member of this group and thoroughly enjoyed working with everyone (even if I did finally get tired of all the calls).

    The ERC delivered their recommendations to the PASS Board prior to our May Board meeting.  We reviewed those and made a few modifications.  I took their recommendations and rewrote them as procedures while incorporating those changes.  Their original recommendations as well as our final document are posted at the ERC documents page.  Please take a second and read them BEFORE we start the elections.  If you have any questions please post them in the forums on the ERC site. (My final document includes a change log at the end that I decided to leave in.  If you want to know which areas to pay special attention to, that's a good start.) Many of those recommendations were already posted in the forums or in the blogs of individual ERC members.  Hopefully nothing in the ERC document is too surprising.

    In this post I'm going to walk through some of the key changes and talk about what I remember from both ERC and Board discussions.  I'll pay a little extra attention to things the Board changed from the ERC.  I'd also encourage any of the Board or ERC members to blog their thoughts on this.

    The Nominating Committee will continue to exist.  Personally, I was curious to see what the non-Board ERC members would think about the NomCom.  There was broad agreement that a group to vet candidates had value to the organization.

    The NomCom will be composed of five members.  Two will be Board members and three will be from the membership at large.  The only requirement for the three community members is that you've volunteered in some way (and volunteering is defined very broadly).  We expect potential at-large NomCom members to participate in a forum on the PASS site to answer questions from the other PASS members.

    We're going to hold an election to determine the three community members.  It will be closer to voting for Summit sessions than voting for Board members.  That means there won't be multiple dedicated emails.  If you're at all paying attention it will be easy to participate.  Personally I wanted it easy for those that cared to participate but not overwhelming for those that didn't care.  I think this strikes a good balance.

    There's also a clause that in order to be considered a winner in this NomCom election, you must receive 10 votes.  This is something I suggested.  I have no idea how popular the NomCom election is going to be.  I just wanted a fallback so that if hardly anyone participated, some random person couldn't get in with one or two votes.  Any open slots will be filled by the NomCom chair (usually the PASS Immediate Past President).  My assumption is that they would probably take the next highest vote getters unless they were throwing flames in the forums or were clearly unqualified.  As a final check, the Board still approves the final NomCom.

    The NomCom is going to rank candidates instead of rating them.  This has interesting implications.  This was championed by another ERC member and I'm hoping they write something about it.  This will really force the NomCom to make decisions between candidates.  You can't just rate everyone a 3 and be done with it.  It may also make candidates appear further apart than they actually are.  I'm looking forward to talking with the NomCom after this election and getting their feedback on this.

    The PASS Board added an option to remove a candidate with a unanimous vote of the NomCom.  This was primarily put in place to handle people that lied on their application, had a criminal background, or presented some other unusual situation that we discovered.

    We list an explicit goal of three candidates per open slot. We also wanted an easy way to find the NomCom candidate rankings from the ballot.  Hopefully this will satisfy those that want a broad candidate pool and those that want the NomCom to identify the most qualified candidates.

    The primary spokesperson for the NomCom is the committee chair.  After the issues around the election last year we didn't have a good communication plan in place.  We should have, and that was a failure on the part of the Board.  If there is criticism of the election this year I hope that falls squarely on the Board.  The community members of the NomCom shouldn't be fielding complaints over the election process.  That said, the NomCom is ranking candidates and we are forcing them to rank some lower than others.  I'm sure you'll each find someone that you think should have been ranked differently.

    I also want to highlight one other change to the process that we started last year that isn't included in these documents.  I think the candidate forums on the PASS site were tremendously helpful last year in helping people to find out more about candidates.  That gives our members a way to ask hard questions of the candidates and publicly see their answers.

    This year we have two important groups to fill.  The first is the NomCom.  We need three people from our membership to step up and fill this role.  It won't be easy.  You will have to make subjective rankings of your fellow community members.  Your actions will be important in deciding who the future leaders of PASS will be.  There's a 50/50 chance that one of the people you interview will be the President of PASS someday.  This is not a responsibility to be taken lightly.

    The second is the slate of candidates.  If you've ever thought about running for the Board, this is the year.  We've never had nine candidates on the ballot before.  Your chances of making it through the NomCom are higher than in any previous year.  Unfortunately the more of you that run, the more of you that will lose in the election.  And hopefully that competition will mean more community involvement and better Board members for PASS.

    Is this the end of changes to the election process?  It isn't.  Every year that I've been on the Board the election process has changed.  Some years there have been small changes and some years there have been large changes.
After this election we’ll look at how the process worked and decide what steps to take – just like we do every year.

    Read the article

  • PASS: SQLRally Thoughts

    - by Bill Graziano
    The PASS Board recently decided that we wouldn't put another US-based SQLRally on the calendar until we had a chance to review the program. I wanted to provide some of my thinking around this. Keep in mind that this is the opinion of one Board member.

    The Board committed to complete two SQLRally events to determine if an event modeled between SQL Saturday and the Summit was viable. We've completed the two events and now it's time to step back and review the program.

    This is my seventh year on the PASS Board. Over that time people have asked me why PASS does certain things. Many, many times my answer has been "Because that's the way we did it last year". And I am tired of giving that answer. We need to take a step back and review the US-based SQLRally before we schedule another one. It would be irresponsible for me as a Board member to commit resources to this without validating that what we're doing makes sense for the organization and our members. I have no doubt that this was a great event for the attendees. We just need to validate it's the best use of our resources.

    Please keep in mind that we haven't cancelled the event. We've just said we need to review it before scheduling another one. My opinion is that some fairly serious changes are needed to the model before we consider it again – IF we do it again. I've come to that conclusion after speaking with the Dallas organizers, our HQ team, our Marketing team, other Board members (including one of the Orlando organizers), attendees in Orlando and Dallas, and visiting other similar events. I should point out that their views aren't unanimous on nearly any part of this event -- which is one of the reasons I want to take some time and think about this before continuing.

    I think it's helpful to look at the original goals of what we were trying to accomplish. Andy Warren wrote these up in August of 2010. My summary of these goals and some thoughts on each one is below. Many of these thoughts revolve around the growth of SQL Saturdays. In the two years since that document was written these events have grown significantly. The largest SQL Saturdays are now over 500 people, which means they are nearly the same size as our recent SQLRally. Our goals included:

        Geographic diversity. We wanted an event in an area of the country that was away from any given Summit location. I think that's still a valid goal. But we also have SQL Saturdays all over the country. What does SQLRally bring to this that SQL Saturday doesn't?

        Speaker growth. One of the stated goals was to build a "farm club" for speakers. This gives us a way for speakers to work up to speaking at Summit by speaking in front of larger crowds. What does SQLRally bring to this that the larger SQL Saturdays aren't providing? Pre-conference speakers is one obvious answer here.

        Lower price. On a per-day basis, SQLRally is roughly 1/4th the price of the Summit. We wanted a way for people to experience something Summit-like at a lower price point. The challenge is that we are very budget constrained at that lower price point.

        International Event Model. (I need to write more about this but I'm out of time. I'll cover it in the next installment.)

    There are a number of things I really like about SQLRally. I love the smaller conferences. They give me a chance to meet more people than at something the size of Summit. I like the two-day format. That gives you two evenings to be at social events with people. Seeing someone a second day is a great way to build a bond with that person. That's more difficult to do at a SQL Saturday.

    We also need to talk about the financial aspects of the event. Last year generated a small $17,000 profit on revenues of $200,000. Percentage-wise that's reasonable, but on an absolute basis it's not a huge amount in our budget. We think this year will lose between $30,000 and $50,000 and take roughly 1,000 hours of HQ time. We don't have detailed financials back yet but that's our best guess at this point. Part of that was driven by using a convention center instead of a hotel. Until we get detailed financials back we won't have the full picture around the financial impact.

    This event also takes time and mindshare from our Marketing team. This may sound like a small thing but please don't underestimate it. Our original vision for this was something that would take very little time from our Marketing team and just a few mentions in the Connector. It turned out to need more than that. And all those mentions and emails take up space we could use to talk about other events and other programs.

    Lastly, I want to talk about some of the things I'm thinking about. I don't think it's as simple as saying if we just fix "X" it all gets better.

        Is this that much better of an event than SQL Saturdays? What if we gave a few SQL Saturdays some extra resources? When SQL Saturdays were around 250 people that wasn't as viable. With some of those events over 500 we need to reconsider this.

        We need to get back to a hotel venue. That will help with cost and networking.

        Is this the best use of the 1,000 HQ hours that we invested in the event?

        Is our price-point correct? I'm leaning toward raising our price closer to Summit on a per-day basis. I think this will let us put on a higher quality event and alleviate much of the budget pressure.

        Should growing speakers be a focus? Having top-line pre-conference speakers helps market the event. It will also have an impact on pricing and overall profit. We should also ask if it actually does grow speakers. How many of these people will eventually register for Summit? Attend chapters? Is SQLRally a driver into PASS or is it something that chapters, etc. drive people to?

        Should we have one paid day and one free instead of two paid days? This is a very interesting model that is used by SQLBits in the UK. This gives you the two-day aspect as well as offering options for paid and free attendees. I'm very intrigued by this.

        Should we focus on a topic? Buried in the minutes is a discussion of whether PASS should have a Business Analytics conference separate from Summit. This is an interesting question to consider. Would making SQLRally focus on a particular topic make it more attractive? Would that even be a SQLRally? Can PASS effectively manage the two events? (FYI - Probably not.) Would it help differentiate it from Summit and SQL Saturday?

    These are all questions that I think should be asked and answered before we do this event again. And we can't do that if we don't take time to have the discussion. I wanted to get this published before I take off for a few days of vacation. When I get back I'd like to write more about why the international events are different and talk about where we go from here.

    Read the article

  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to set up HAProxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAProxy servers are running Ubuntu 12.04. The HAProxy version is 1.4.18. When I start HAProxy I get the following error:

        Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms.

    I am not really sure what the issue could be. I have verified the following:

        EC2 security group ports are open
        Pored over my config files looking for issues. I currently do not see any.
        Ensured that xinetd was installed
        Ensured that I am using the correct IP address of the MySQL server.

    Any help with this is greatly appreciated. Here are my current configs.

    Load balancer /etc/haproxy/haproxy.cfg:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            debug
            #quiet
            daemon

        defaults
            log global
            mode http
            option tcplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend pxc-front
            bind 0.0.0.0:3307
            mode tcp
            default_backend pxc-back

        frontend stats-front
            bind 0.0.0.0:22002
            mode http
            default_backend stats-back

        backend pxc-back
            mode tcp
            balance leastconn
            option httpchk
            server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3

        backend stats-back
            mode http
            balance roundrobin
            stats uri /haproxy/stats

    MySQL server /etc/xinetd.d/mysqlchk:

        # default: on
        # description: mysqlchk
        service mysqlchk
        {
            # this is a config for xinetd, place it in /etc/xinetd.d/
            disable         = no
            flags           = REUSE
            socket_type     = stream
            port            = 9200
            wait            = no
            user            = nobody
            server          = /usr/bin/clustercheck
            log_on_failure  += USERID
            #only_from      = 0.0.0.0/0
            # recommended to put the IPs that need
            # to connect exclusively (security purposes)
            per_source      = UNLIMITED
        }

    MySQL server /etc/services -- added the line:

        mysqlchk        9200/tcp        # mysqlchk

    MySQL server /usr/bin/clustercheck:

        #!/bin/bash
        #
        # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
        #
        # Author: Olaf van Zandwijk <[email protected]>
        # Documentation and download: https://github.com/olafz/percona-clustercheck
        #
        # Based on the original script from Unai Rodriguez
        #

        MYSQL_USERNAME="testuser"
        MYSQL_PASSWORD=""
        ERR_FILE="/dev/null"
        AVAILABLE_WHEN_DONOR=0

        #
        # Perform the query to check the wsrep_local_state
        #
        WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

        if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
        then
            # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
            /bin/echo -en "HTTP/1.1 200 OK\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
            /bin/echo -en "\r\n"
        else
            # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
            /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
            /bin/echo -en "\r\n"
        fi
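
    A quick way to isolate the failure (hedged suggestions; the IP and credentials are the ones from the question) is to test each layer of the health check by hand:

        # 1. Does the script itself work on the DB node?
        /usr/bin/clustercheck                 # expect "HTTP/1.1 200 OK"

        # 2. Is xinetd actually serving it on 9200? (restart after any config change)
        sudo service xinetd restart
        curl -i http://10.86.154.105:9200/    # run this from the HAProxy box

        # 3. Can the check user log in? (clustercheck uses testuser / empty password)
        mysql --user=testuser -e "SHOW STATUS LIKE 'wsrep_local_state';"

    A "Socket error" from HAProxy often just means nothing is listening on port 9200 -- for example, xinetd was not restarted after adding the service, or the mysqlchk entry in /etc/services does not match the xinetd config.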

    Read the article

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. From April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

    I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK. I don't recall exactly what version of GTK, though. Dominique Toupin from Ericsson presented something about user requirements for tracing -- mostly, though, about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work. See their slides.

    The interesting thing to me was of course the new version of uprobes without the underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give; except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, this "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel that interfaces with utrace and is controlled via the gdb remote protocol also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about gcc and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc and usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (needed because of the new gcc plugin architecture and the availability of the intermediate results of the compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.

    After that, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker. It doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code.

    Finally Tom Tromey presented on gdb and the Archer project. Archer is a development branch of gdb mostly done by Red Hat, where they are focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline.

    In general it was a good conference. I did miss most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered person a free Android phone. Yay.

    Read the article

  • It&rsquo;s A Team Sport: PASS Board Year 2, Q3

    - by Denise McInerney
    As I type this I’m on an airplane en route to my 12th PASS Summit. It’s been a very busy 3.5 months since my last post on my work as a Board member. Nearing the end of my 2-year term I am struck by how much has happened, and yet how fast the time has gone. But I’ll save the retrospective post for next time and today focus on what happened in Q3. In the last three months we made progress on several fronts, thanks to the contributions of many volunteers and HQ staff members. They deserve our appreciation for their dedication to delivering for the membership week after week. Virtual Chapters The Virtual Chapters continue to provide many PASS members with valuable free training. Between July and September of 2013 VCs hosted over 50 webinars with a total of 4300 attendees. This quarter also saw the launch of the Security & Global Russian VCs. Both are off to a strong start and I welcome these additions to the Virtual Chapter portfolio. At the beginning of 2012 we had 14 Virtual Chapters. Today we have 22. This growth has been exciting to see. It has also created a need to have more volunteers help manage the work of the VCs year-round. We have renewed focus on having Virtual Chapter Mentors work with the VC Leaders and other volunteers. I am grateful to volunteers Julie Koesmarno, Thomas LeBlanc and Marcus Bittencourt who join original VC Mentor Steve Simon on this team. Thank you for stepping up to help. Many improvements to the VC web sites have been rolling out over the past few weeks. Our marketing and IT teams have been busy working a new look-and-feel, features and a logo for each VC. They have given the VCs a fresh, professional look consistent with the rest of the PASS branding, and all VCs now have a logo that connects to PASS and the particular focus of the chapter. 24 Hours of PASS The Summit Preview edition  of 24HOP was held on July 31 and by all accounts was a success. Our first use of the GoToWebinar platform for this event went extremely well. Thanks to our speakers, moderators and sponsors for making this event possible. Special thanks to HQ staffers Vicki Van Damme and Jane Duffy for a smoothly run event. Coming up: the 24HOP Portuguese Edition will be held November 13-14, followed December 12-13 by the Spanish Edition. Thanks to the Portuguese- and Spanish-speaking community volunteers who are organizing these events. July Board Meeting The Board met July 18-19 in Kansas City. The first order of business was the election of the Executive Committee who will take office January 1. I was elected Vice President of Marketing and will join incoming President Thomas LaRock, incoming Executive Vice President of Finance Adam Jorgensen and Immediate Past President Bill Graziano on the Exec Co. I am honored that my fellow Board members elected me to this position and look forward to serving the organization in this role. Visit to PASS HQ In late September I traveled to Vancouver for my first visit to PASS HQ, where I joined Tom LaRock and Adam Jorgensen to make plans for 2014.  Our visit was just a few weeks before PASS Summit and coincided with the Board election, and the office was humming with activity. I saw first-hand the enthusiasm and dedication of everyone there. In each interaction I observed a focus on what is best for PASS and our members. Our partners at HQ are key to the organization’s success. This week at PASS Summit is a great opportunity for all of us to remember that, and say “thanks.” Next Up PASS Summit—of course! 
I’ll be around all week and look forward to connecting with many of our member over meals, at the Community Zone and between sessions. In the evenings you can find me at the Welcome Reception, Exhibitor’s Reception and Community Appreciation Party. And I will be at the Board Q&A session  Friday at 12:45 p.m. Transitions The newly elected Exec Co and Board members take office January 1, and the Virtual Chapter portfolio is transitioning to a new director. I’m thrilled that Jen Stirrup will be taking over. Jen has experience as a volunteer and co-leader of the Business Intelligence Virtual Chapter and was a key contributor to the BI VCs expansion to serving our members in the EMEA region. I’ll be working closely with Jen over the next couple of months to ensure a smooth transition.

    Read the article

  • An Alphabet of Eponymous Aphorisms, Programming Paradigms, Software Sayings, Annoying Alliteration

    - by Brian Schroer
    Malcolm Anderson blogged about "Einstein's Razor" yesterday, which reminded me of my favorite software development "law", the name of which I can never remember. It took much Wikipedia-ing to find it (Hofstadter's Law – see below), but along the way I compiled the following list:

    Amara's Law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

    Brooks' Law: Adding manpower to a late software project makes it later.

    Clarke's Third Law: Any sufficiently advanced technology is indistinguishable from magic.

    Law of Demeter: Each unit should only talk to its friends; don't talk to strangers.

    Einstein's Razor: "Make things as simple as possible, but not simpler" is the popular paraphrase, but what he actually said was "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience", an overly complicated quote which is an obvious violation of Einstein's Razor. (You can tell by looking at a picture of Einstein that the dude was hardly an expert on razors or other grooming apparati.)

    Finagle's Law of Dynamic Negatives: Anything that can go wrong, will—at the worst possible moment.
      - O'Toole's Corollary: The perversity of the Universe tends towards a maximum.

    Greenspun's Tenth Rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Morris's Corollary: "…including Common Lisp")

    Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

    Issawi's Omelet Analogy: One cannot make an omelet without breaking eggs - but it is amazing how many eggs one can break without making a decent omelet.

    Jackson's Rules of Optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet.

    Kaner's Caveat: A program which perfectly meets a lousy specification is a lousy program.

    Liskov Substitution Principle (paraphrased): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.

    Mason's Maxim: Since human beings themselves are not fully debugged yet, there will be bugs in your code no matter what you do.

    Nils-Peter Nelson's Nil I/O Rule: The fastest I/O is no I/O.

    Occam's Razor: The simplest explanation is usually the correct one.

    Parkinson's Law: Work expands so as to fill the time available for its completion.

    Quentin Tarantino's Pie Principle: "…you want to go home have a drink and go and eat pie and talk about it." (OK, he was talking about movies, not software, but I couldn't find a "Q" quote about software. And wouldn't it be cool to write a program so great that the users want to eat pie and talk about it?)

    Raymond's Rule: Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.

    Sowa's Law of Standards: Whenever a major organization develops a new system as an official standard for X, the primary result is the widespread adoption of some simpler system as a de facto standard for X.

    Turing's Tenet: We shall do a much better programming job, provided we approach the task with a full appreciation of its tremendous difficulty, provided that we respect the intrinsic limitations of the human mind and approach the task as very humble programmers.

    Udi Dahan's Race Condition Rule: If you think you have a race condition, you don't understand the domain well enough. These rules didn't exist in the age of paper, there is no reason for them to exist in the age of computers. When you have race conditions, go back to the business and find out the actual rules.

    Van Vleck's Kvetching: We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer -- and keep pouring our sewage into our water supply.

    Wheeler's Law: All problems in computer science can be solved by another level of indirection... except for the problem of too many layers of indirection. Wheeler also said "Compatibility means deliberately repeating other people's mistakes."

    The Wrong Road Rule of Mr. X (anonymous): No matter how far down the wrong road you've gone, turn back.

    Yourdon's Rule of Two Feet: If you think your management doesn't know what it's doing or that your organisation turns out low-quality software crap that embarrasses you, then leave.

    Zawinski's Law of Software Envelopment: Every program attempts to expand until it can read mail. Zawinski is also responsible for "Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems." He once commented about X Windows widget toolkits: "Using these toolkits is like trying to make a bookshelf out of mashed potatoes."

    Read the article

  • Stir Trek: Iron Man Edition Recap and Photos

    - by Brian Jackett
    If you’ve noticed my blogging activity has reduced in frequency and technical content lately it’s primarily due to all of the conferences I’ve been attending, speaking at, or planning in the past few months.  This past Friday myself and six other dedicated individuals put on Stir Trek: Iron Man Edition as the culmination of a few months of hard work.  For those unfamiliar, Stir Trek is a web developer conference that was founded last year as an event to showcase content from Microsoft’s MIX conference and end the day with a private showing of the then just-released Star Trek movie.  This year’s conference expanded from 2 to 4 content tracks and upped the number of tickets from 350 to 600.  Even more amazing was the fact that we had 592 people show up day of the event for the lowest drop-off percentage of any conference I’ve been to before.   Nerd Dinner and Swag Bags     The night before Stir Trek: Iron Man Edition we hosted a nerd dinner at the Polaris Shopping mall food court with about 30 in attendance.  Nerd dinners are a great time to meet others passionate about technology and socialize before the whirlwind of the conference hits.  After the nerd dinner 20+ volunteers headed to the conference location and helped us stuff swag bags.  This in and of itself was a monumental task of putting together 600 swag bags with numerous leaflets, sponsor items, and t-shirts.  A big thanks goes out to all who assisted us that night so that we could finish in just under 2 hours instead of taking all night.  My sleep schedule also thanks you. Morning of Stir Trek     After getting a decent amount of sleep I arrived at Marcus Crosswoods theater at 6am to begin setting up for the day.  Myself and Jody Morgan were in charge of registration so we got tables set up, laid out swag bags, and organized our volunteer crew to assist with checking-in attendees.  Despite having 600+ people registration went fairly smoothly and got the day off to a great start.  I especially appreciated the 3+ cups of coffee from Crimson Cup, a local coffee shop.  For any of you that know me you’ll know that I rarely drink coffee except a few times a year when I really need the energy, so that says a lot about how good their coffee is.   Conference Starts     Once registration was completed the day kicked off with Molly Holzschlag keynoting.  Unfortunately Molly suffered from an ear infection and wasn’t able to fly so she had a virtual keynote and a session later in the day.  I was working behind the scenes on various tasks so I was only able to drop in very briefly on the keynote and rest of the morning sessions.  Throughout the day I tried to grab at least 1 or 2 pics of each presenter.  See my album below for the full set of pics.      For lunch we ordered around 150 pizzas from Mellow Mushroom, a local pizza place (notice the theme of supporting local businesses.)  Early on we were concerned about Mellow Mushroom being able to supply that many pizzas and get them delivered (still hot) to the theater, but they did an excellent job day of the event.  I wish I had gotten some pictures of the old school VW van they delivered the pizza in, but I was just a bit busy running around trying to get theaters ready for lunch.  We had attendees from last year who specifically requested that we have Mellow Mushroom supply lunch this year and I’m glad everything worked out being able to use them again.     During the afternoon I was able to attend a few sessions and hear some great content from various speakers.  
It was also nice to just sit down and get off my feet for a bit.  After the last sessions the day concluded with a raffle.  There were a few logistical and technical issues that hampered our ability to smoothly conduct the raffle.  To those of you that agree the raffle wasn’t the smoothest experience I would like to say that the Stir Trek planning committee has already begun meeting to discuss ways of improving the conference for next year.  We are also accepting feedback (both positive and negative) at the following link: click here.  If you don’t wish to use the Joind In site you can also email me directly and I’ll be sure to pass along the feedback.   Iron Man 2 Movie     Last but not least, what Stir Trek event would be complete without the feature movie.  This year’s movie was Iron Man 2.  The theater had some really cool props and promotions (see pic below) for the movie.  I really enjoyed Iron Man 2, but I would recommend brushing up on the Iron Man comics and Marvel’s plans for future movies to understand some of the plot elements that come up.  Also make sure you stay through to the end of the movie credits to see a sneak peak of something special, that’s all I’ll say. Conclusion     Again a big thanks goes out to all of the speakers, sponsors, attendees, movie theater staff, volunteers, and everyone else involved in making this event great.  Also big thanks to my fellow Stir Trek planning committee members: Jeff Blankenburg, Matt Casto, Carey Payette, Jody Morgan, Rick Kierner, and Sarah Dutkiewitcz.  I am grateful for everything I learned while helping plan this event and look forward to being involved again next year.  For those interested we are currently targeting Thor as our movie theme for 2011 and then The Avengers for 2012.  These are tentative based on release dates that could shift as we get closer, but for now look solid.   Photos Pics on Facebook (includes tagging)     Stir Trek: Iron Man Edition photos on Facebook Pics on Live site (higher res)      View Full Album         -Frog Out

    Read the article

  • WebLogic 12.1.2 launch webcast on-demand & WebLogic Community feedback

    - by JuergenKress
    Did you miss the WebLogic & Coherence & JDeveloper 12.1.2 launch webcast? Watch it on-demand: View On-Demand Version | Read the Q&A from this Webcast. Special thanks to Frank Munz and Simon Haslam, our WebLogic Community experts on the phone! Thanks to the community for the great Twitter feedback. Send us your tweets @wlscommunity #WebLogicCommunity

        WebLogic Community: Join the #WebLogic Partner Community for the latest WebLogic 12.1.2 details and upcoming trainings http://www.WeblogicCommunity.com #OracleCAF
        Oracle WebLogic: Unified update, patch, install process is a key component in reducing Ops cost in #WebLogic 12c #OracleCAF
        WebLogic Community: Demo time #WebLogic cluster creation in seconds #OracleCAF by @mike_lehmann & Will Lyons #WebLogicCommunity pic.twitter.com/gyb8YqnKco
        Oracle WebLogic: Dynamic server clusters to scale apps - coming up in #WebLogic 12c launch. #OracleCAF http://pub.vitrue.com/lBmE
        Oracle WebLogic: Key feature of #WebLogic 12.1.2 release: @Oracle Database 12c integration. #OracleCAF #OracleDB
        OTNArchBeat: Many tech posts on #weblogic available on #oracleace Rene van Wijk's blog. #OracleCAF http://pub.vitrue.com/O9Cn
        Frank Munz: Correct me if I am wrong, but this could be the first WebLogic 12.1.2 training ever: http://www.ausoug.org.au/insync13/insync13-frank-munz.html …
        Cloud Foundation: .#WebLogic 12.1.2 deep dive starts NOW during #OracleCAF launch. #Coherence up next in a few minutes. http://pub.vitrue.com/HPHM
        Maciej Gruszka: Watch http://www.youtube.com/watch?v=KiCoO_QGBsU&feature=c4-overview&list=UUrEIV9YO17leE9aJWamKEPw … at #WebLogic channel with @dave_cabelus about Elastic JMS
        Oracle WebLogic: Pick up the new book by @frankmunz on WLS 12c http://amzn.to/1ceppgZ #WebLogic #OracleCAF
        OTNArchBeat: @frankmunz 's #WebLogic YouTube channel >> watch and learn #OracleCAF http://pub.vitrue.com/B4IM
        WebLogic Community: @frankmunz WebLogic expert build elastic clouds with #WebLogic http://www.munzandmore.com/blog #OracleCAF #WebLogicCommunity pic.twitter.com/UK5UKjXUVl
        OTNArchBeat: @frankmunz 's blog, covering #weblog #cloud and more #OracleCAF http://pub.vitrue.com/N8ST
        OTNArchBeat: oracladmin: @simon_haslam 's Oracle Fusion Middleware blog #OracleCAF #oracleace http://pub.vitrue.com/cwGx
        Yuri Grinshteyn: Coherence uses WLS tooling, including deployment, and can be part of the WLS cluster. Well done there. #OracleCAF
        Maciej Gruszka: #Coherence 12.1.2 auto updates data grid on changes inside DB thru #GoldenGate HotCache - another cool feature of #OracleCAF
        Oracle WebLogic: From #OracleCAF launch: Tight integration tween WLS, #Coherence and #OracleDB. Dynamic clusters, OSS support & more http://pub.vitrue.com/3NL9
        OTNArchBeat: 25 recent no-fluff technical articles on Oracle WebLogic #OracleCAF http://pub.vitrue.com/FEG5
        Maciej Gruszka: @dave_cabelus Elastic JMS is my favourite capability of #WebLogic 12.1.2
        WebLogic Community: Dynamic WebLogic Clustering COOL - what is your favorite 12.1.2 feature? #OracleCAF #WebLogicCommunity pic.twitter.com/T8lvDMJ1U0
        WebLogic Community: What is the coolest #WebLogic 12.1.2 feature? Let us know @wlscommunity http://weblogiccommunity.com/2013/07/30/launch-webcast-weblogic-coherence-jdeveloper-adf-12-1-2-00-july-31st-2013/ … #WebLogicCommunity
        Simon Haslam: I'm speaking(!) on the panel session with @frankmunz & Matt Rosen on the CAF/WebLogic 12.1.2 launch: 6pm UK today https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=651242&partnerref=CAF_Launch_OCOM_07312013&sourcepage=register …
        Markus Eisele: #WebLogic 12.1.2 - an Important New Release for Middleware Admins http://bit.ly/1cmtqhX by @simon_haslam
        OracleEnterpriseMgr: The JVM diagnostics features of #EM12c are now shown in a demo by @hawkinsg1 at the #OracleCAF launch http://bit.ly/caflaunch
        Shaun Smith: Curious about the new #Coherence 12.1.2 GoldenGate HotCache feature? I explain all on youtube: http://www.youtube.com/watch?v=O0TIG3hgbg0&feature=share&list=PLxqhEJ4CA3JtQwuPS8Qmd88lGX-gsIbHV … #OracleCAF
        Maciej Gruszka: Try for Yourself -- Download the products Oracle WebLogic 12.1.2: http://www.oracle.com/technetwork/middleware/fusion-middleware/downloads/index.html … Oracle Coherence 12c: http://www.oracle.com/technetwork/middleware/coherence/downloads/index.htm …
        WebLogic Community: What is Your favorite feature in #WebLogic 12.1.2 ? cool stuff! #OracleCAF #WebLogicCommunity http://WeblogicCommunity.com pic.twitter.com/xjR05tiaQj

    We encourage you to learn more about all the products by reviewing the following resources:

        Try for Yourself -- Download the products:
          Oracle WebLogic 12.1.2
          Oracle Coherence 12c
          Enterprise Manager
          Developer Tools
          WebLogic Community blog
        Learn more:
          Read the Oracle WebLogic Business Whitepaper
          Read the Oracle Coherence Business Whitepaper
          Read the Oracle WebLogic and Oracle Database Integration Whitepaper
          Get Training from Oracle University
          Check out the Oracle WebLogic YouTube Channel
          Check out the Oracle Coherence YouTube Channel

    WebLogic Partner Community Registration: the webcast is available on-demand -- Watch Webcast Now. For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum | Wiki

    Technorati Tags: Weblogic 12.1.2, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Partner Webcast - Innovations in Products Program

    - by Richard Lefebvre
    We are pleased to invite you to join the Innovations in Products webcast. Innovations in Products presents Oracle Applications products' new functions and features, including sales positioning. The key objective of these webcasts is to inspire System Integrators' implementation personnel to conduct successful after-sales in their customer projects. Innovations in Products is presented on the first Monday of each quarter after the billable day (4:00 to 5:00 PM CET). The webcast is intended for System Integrators' Implementation Certified Specialists, but it is open to other interested Oracle Applications System Integrator personnel as well.

    At first, two Oracle representatives will discuss Oracle's contribution to partners. Then you will see a product breakout session, followed by Q&A with Oracle experts. Each session lasts a maximum of 1 hour, and a Q&A document covering all questions and answers will be made available after the webcast.

    What are the benefits for partners?

    - Find out how Innovations in Products helps you to improve your after-sales
    - Discover new functions and features so you can enrich your customer's solution
    - Learn more about Oracle Applications products, especially sales positioning
    - Hear crucial questions raised by colleagues, and learn from their interest
    - Engage and present your questions to subject experts
    - Be inspired by the richness of the Oracle Applications portfolio, for your and your customer's benefit

    Note: Should you already be familiar with a specific product, then choose another one; doing so expands your knowledge of the overall Applications portfolio. Some presentations contain a product demonstration, although these presentations are not intended to be extremely detailed technical presentations.

    Note: In the latter part of this email you will also find 17 links to the recent Applications products presentations and 6 links to the Public Sector Value Proposition presentations that were given in the Innovations in Industries program.

    Product breakout sessions:

    - Fusion Applications Technology and Extensibility: a next-generation platform that adapts to client needs. Matthew Johnson, Sr. Director, SCM Product Development, EMEA - CLICK HERE
    - Fusion Applications - Transforming your Back-Office Accounting Function: changing how people work in back-office functions to drive value add. Liam Nolan, Director, ERP Product Development, EMEA - CLICK HERE
    - Fusion HCM & Talent Overview & Extensibility: a more in-depth look into a personalized HCM solution. Synco Jonkeren, Vice-President HCM Product Development & Management, EMEA - CLICK HERE
    - Fusion HCM Compensation Planning: Compensate To Compete. Rosie Warner, Director, HCM Sales Development - CLICK HERE
    - Enterprise PLM for the Product Value Chain: Oracle Enterprise PLM offers industry-specific solutions that cover the Product Value Chain. Ulf Köster, Sales Development Leader Enterprise PLM, Oracle Western Europe - CLICK HERE
    - Oracle's Asset Management and Maintenance Solution: what you need to know to successfully implement Oracle Asset Management solutions within Oracle Installed Base. Philip Carey, Asset Management and Maintenance Solution Specialist - CLICK HERE

    For more details please visit Innovations in Products and the other breakout sessions on the OPN page.

    Delivery format: Innovations in Products is a series of FREE prerecorded Applications product presentations followed by Q&A, delivered over the web. Participants have the opportunity to submit questions during the webcast via chat, and subject matter experts provide verbal answers live. Innovations in Products consists of several parallel prerecorded product breakout sessions, each lasting a maximum of 1 hour. You can also watch Innovations in Products afterwards, as its content will be available online for the next 6-12 months. The next Innovations in Products webcasts will be presented on:

    - July 2nd 2012
    - October 1st 2012
    - January 14th 2013
    - April 8th 2013

    Note: Depending on local network bandwidth, please allow a few seconds for the presentations to download. You might want to refresh your screen by pressing F5.

    Duration: maximum 1 hour.

    For further information please contact me, Markku Rouhiainen.

    Recent Innovations in Products presentations

    Applications products presented on April 2nd, 2012:

    - Fusion CRM: Effective, Efficient and Easy. James Penfold, Senior Director, Applications Product Development and Product Management - CLICK HERE
    - Fusion HCM: Talent management overview (performance, goals, talent review). Jaime Losantos Viñolas, Director, HCM Sales Development - CLICK HERE
    - Distributed Order Management - Fusion SCM Solution. Vikram K Singla, Business Development Director, Supply Chain Management Applications, UK - CLICK HERE
    - Oracle Transportation Management. Dominic Regan, Senior Director Oracle Transportation Management EMEA - CLICK HERE
    - Oracle Value Chain Planning: Demantra Sales & Operation Planning and Demantra Demand Management. Lionel Albert, Senior Director Value Chain Planning, EMEA - CLICK HERE
    - Oracle CX (Customer Experience), formerly CEM: Powering Great Customer Experiences. Maria Ramirez, CRM Presales Consultant, EPC - CLICK HERE
    - EPM 11.1.2.2 Overview. Nicholas Cox, EMEA Sales Development Director - Enterprise Performance Management - CLICK HERE
    - Oracle Hyperion Profitability and Cost Management 11.1.2.1. Daniela Lazar, Senior EPM Sales Consultant, EPC - CLICK HERE

    Applications products presented on January 16th, 2012:

    - CRM / ATG: Best-in-Class CRM & Commerce. Maria Ramirez, Associate CRM Presales Consultant, EPC - CLICK HERE
    - CRM / Automate Business Rules for Maximum Efficiency with OPA (Oracle Policy Automation). Marco Nilo, Associate CRM Presales Consultant, EPC - CLICK HERE
    - CRM / InQuira. Toby Baker, Principal Sales Consultant, CRM Product Specialist Team - CLICK HERE
    - EPM / Business Intelligence Foundation Suite - Sales and Product Updates. Liviu Nitescu, Senior BI Sales Consultant, EPC - CLICK HERE
    - EPM / Hyperion Planning 11.1.2.1 - Sales & Product Updates. Andreea Voinea, EPM Sales Consultant, EPC - CLICK HERE
    - ERP / JDE EnterpriseOne Fulfillment Management Overview. Mirela Andreea Nasta, ERP Presales Consultant, EPC - CLICK HERE
    - ERP / Spotlights on iExpenses. Elena Nita, ERP Presales Consultant, EPC - CLICK HERE
    - MDM / Master Data Management. Martin Boyd, Senior Director Product Strategy - CLICK HERE
    - Product breakout session: Fusion Applications Human Capital Management. Rosie Warner, Director, HCM Sales Development - CLICK HERE

    Recent Innovations in Industries Value Proposition presentations (January 16th, 2012):

    - Process Modernisation. Iemke Idsingh, Public Sector Solutions Director - CLICK HERE
    - Shared Services. Ann Smith, Business Development Director, Shared Services - CLICK HERE
    - Strengthening Financial Discipline Whilst Delivering Cashable Savings. Philippa Headley, UK Sales Development Director, Public Sector - EPM Solutions - CLICK HERE
    - Social Welfare Industry Solutions. Christian Wernberg-Tougaard, Industry Director - Social Welfare - CLICK HERE
    - Police Industry Solutions. Jeff Penrose, Solution Sales Director - CLICK HERE
    - Tax and Revenue Management Industry Solutions. Andre van der Post, Global Director - Tax Solutions and Strategy - CLICK HERE

    Read the article

  • Subscribable World Cup 2010 Calendar

    - by jamiet
    I bang on quite a lot on this blog about ways in which data can get published over the web, and one of the most interesting ways, in my opinion, of publishing data in a structured manner that is well understood is to use the iCalendar specification. There isn't much information in the world that doesn't have some concept of "when", so iCalendar is a great way of distributing that information. You have probably used iCalendar at some point without even knowing about it. All files with a .ics suffix are iCalendar format files, and that is why you can happily import them into Outlook, Hotmail Calendar, Google Calendar etc., where they can be parsed and have the semantic data (when, where and who) extracted from them.

    Importing of iCalendar format data is really only half the trick though; in my opinion the real value of an iCalendar-formatted calendar is the ability to subscribe to it. Subscribing has a single benefit over importing, but that benefit is of massive importance: a subscriber to an iCalendar calendar can periodically check to see if any updates have been made and, if they have, automatically update the local copy. The real benefit to the user is the productivity gain: a single update to an iCalendar means that all subscribers are automatically made aware of the change with zero effort on their part; as my former colleague Howard van Rooijen is fond of saying, "work smarter not harder". Nowhere is this edict more ably demonstrated than subscribing versus importing of calendars. If you want to read some more thoughts about iCalendar then go and read my past blog post Calendar syndication - My big hope for 2009's breakthrough technology, or better still go and seek out Jon Udell, who speaks very authoritatively on the issue of iCalendar.

    With the subject of iCalendar on my mind I was interested to discover (via Steve Clayton's blog post Download the world cup fixtures) that the BBC had made a .ics file available containing all of the matches in the upcoming World Cup. As you can probably guess, this was a file made available so that it could be imported into your calendar of choice. It had one obvious downside though: right now nobody knows who is going to be playing in the knock-out stages, so the calendar (see the screenshot in the original post) simply shows no team names after 25th June.

    How much more useful would this calendar have been if the BBC had made it possible to subscribe to the calendar instead? The calendar could be updated with the teams for the knock-out stages when they are known, and every subscriber would have a permanently up-to-date record of all the fixtures in their calendar. Better still, the calendar could be updated with match results as well, or perhaps even a match report from the BBC sport pages; when calendars are made subscribable, a sea of opportunity opens up for distribution of information.

    So with that in mind I have decided to go one better than the BBC. I have imported their .ics into a brand new Hotmail calendar and made it publicly available at the following URLs:

    HTML: http://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/index.html
    iCalendar: webcal://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/calendar.ics

    The link you're really interested in is the second one: click on that and it should open up in your calendar software of choice. Or, if you want to view it in an online calendar such as Hotmail Calendar or Google Calendar, copy and paste that URL into the appropriate place.
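    As an aside, if you're curious what's actually behind a subscription URL, there's no magic: a webcal:// address is just plain HTTP serving an iCalendar text file, and a subscriber simply re-downloads it every so often. The little C# sketch below makes that concrete by fetching the calendar above and printing the date and summary line of each fixture. It's illustrative only: it swaps webcal:// for http://, does naive line matching rather than proper iCalendar parsing, and a real client would poll on a schedule.

        using System;
        using System.Net;

        class CalendarPeek
        {
            static void Main()
            {
                // The webcal:// URL above, with the scheme swapped to plain http
                var url = "http://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/calendar.ics";
                var ics = new WebClient().DownloadString(url);

                // Naively pick out the start time and title of each VEVENT
                foreach (var line in ics.Split('\n'))
                    if (line.StartsWith("DTSTART") || line.StartsWith("SUMMARY"))
                        Console.WriteLine(line.Trim());
            }
        }

    Run that again next week and, because the source calendar will have been updated, you get the new fixtures; that's exactly the subscribe benefit described above.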
    Some people have told me they're having trouble with the iCalendar link, in which case hit the HTML link and then click "View ICS" at the resultant web page.

    I shall endeavour to keep the calendar updated throughout the World Cup, and even if I don't, you're no worse off than if you had imported the BBC's .ics file, so why not give it a try? If I do keep it up to date then you will have a permanent record of the 2010 World Cup available in your calendar. Forever. If you have your calendar synced to your smartphone then you'll be carrying match reports around with you without having to do a single thing. Surely that's worth a quick click, isn't it?

    If you have any thoughts let me have them in the comments below. Thanks for reading.

    @Jamiet

    Read the article

  • Walmart and Fusion Apps

    - by ultan o'broin
    Photograph: Misha Vaughan

    I attended Fusion Apps (yes, I know I am supposed to say "Oracle Fusion Applications", but stuffy old style guides are a turn-off in interwebs conversations) User Experience Advocate (FXA) training in Long Beach, California last week; a suitable location as ODTUG KSCOPE 11 was kicking off and key players were in the area. As a member of Oracle's Apps-UX team I know the Fusion Apps messaging, natch, and have done some other Fusion Apps go-to-market content work too. For the messaging details themselves, see Lonneke Dikmans' (@lonnekedikmans) great blog, by the way.

    However, I wanted some 'formal' training combined with the opportunity to meet and learn from people already out there delivering those messages. The idea in me reaching out to Misha Vaughan, Apps-UX FXA maven, to get me onto this training was that in addition to my UX knowledge, I could leverage my location in EMEA and hit up customer events more quickly and easily. Those local user groups do like to hear the voice of locals too, you know (so I need to work on that mid-Atlantic accent). I'm looking forward to such opportunities.

    The training was all smashing stuff, just the right level of detail, delivered professionally and with great style and humor. I was especially honored to be paired off for my, er, coaching with Debra Lilley (@debralilley), who shared with everyone all kinds of tips and insights from her experiences of delivering the message and demo. For me, that was the real power of the FXA event: the communal, conversational aspect, the meeting up with people who had done all this for real, the sharing in their experiences, while learning along with other newbies. Sorry, but that all-important social aspect doesn't work so well with remote meetings.

    Katie Candland (Apps-UX) gave us a great tour of the Fusion Apps demo and included some useful presentational tips too (any excuse to buy that iPad). It's clear to me that the Fusion Apps messaging and demos really come alive with real-world examples that local application users will recognize, and I picked up some "yes, that's my job made easier" scene-stealers from Debra and Karen Brownfield too, to add to the great ones already provided. This power of examples shouldn't surprise anyone; they've long been a mainstay of applications user assistance, popular with users. We'll offer customers different types of example topics in the Fusion Apps online help too (stay tuned), and we know from research how important those 3S's (stories, scenarios, and simulations) are to users when they consume and apply information. Well, we've got the simulation; now it's time for more stories and scenarios.

    If you get a chance to participate in an FXA event (whether you are an Oracle employee or otherwise), I'd encourage it. It's committing your time and energy for sure, but I got real bang for the buck from it for my everyday job too. Listening to the room's feedback on the application demo really brought our internal design work to life, and I picked up on some things that I need to follow up on (like how you alphabetically sort stuff in other languages). User experience is, after all, about users.

    What will I be doing next, and what would I like to see happen? Obviously, I need to develop my story-telling links with the people I met in Long Beach and do some practicing with the materials, and then get out there and deliver them at a suitable location. The demo is what it is right now, and that's a super-rich demo that I know everyone will want to see and ask questions about. Then, as mentioned by attendees at the FXA event, follow up on those translated and localized messages for EMEA (and APAC) that deal with different statutory or reporting requirements of the target markets. Given my background I would say that, wouldn't I? However, language is part of the UX, and international revenue is greater than US-only revenue for Oracle, so yes dear, we all need to get over the fact that enterprise apps users don't all speak, or want to speak, American-English. Most importantly perhaps, I'd like to see the continued development of a strong messaging community between Oracle and partners and customers where we can swap and share those FXA messaging stories and scenarios about Fusion Apps in a conversational way. The more the better; a combination of online and face-to-face meetings.

    I must also mention the great dinner after the event at Parker's Lighthouse, and the fun myself and Andrew Gilmour (Apps-UX) had at our end of the table talking about just about everything except Fusion Apps with Ronald Van Luttikhuizen and Ben Prusinski (who now understands the difference between Cork and Dublin people. I hope). Thanks to all the Apps-UXers who helped bring the FXA training to town, and to Debra and all the others, whom I am too jetlagged to mention right now, who were instrumental in making it happen for me. Here's to the next one.

    And the Walmart angle? That was me doing my Robert Scoble (ScO'bilizer?)-style guerilla smart phone research in Walmart in Long Beach, before the FXA event. It's all about stories for me. You can read more about it on the appslab blog (see the comments).

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ "magic" in my hearing. I leapt to LINQ's defense immediately. Turns out some people don't realize "magic" can be a pejorative term. I thought LINQ needed demystification. Here's your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won't repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let's tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        ...
        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);
        ...

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., "an object structure that represents the structure of the anonymous function itself" (see C# spec section 6.5). The conversion is equivalent to the following rewrite (the equality comparison is spelled out with Expression.Equal so the predicate lambda actually returns a bool):

        ...
        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(
                        Expression.Property(prm1, "Name"),
                        Expression.Constant("Widget")),
                    prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));
        ...

    If the "products" expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we've reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I'm using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(/* this method */, source.Expression, Expression.Quote(predicate)));
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we're constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree.
    The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent. The provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the "products" collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a "leaf" expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as "type irritation" because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let's consider the behavior of an alternative LINQ provider. If "products" is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the "leaf" expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a "process" in StreamInsight 2.1, what is a leaf? To StreamInsight a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

    Example 1: defining a source and composing a query

    Let's look in more detail at what's happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method.
    Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can't serialize 'local'!

    Example 2: error referencing an unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The "define" methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you've Defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using "bridge" method overloads (ToEnumerable, ToObservable and To*Streamable).

    Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (this example depends on Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(() =>
                new NorthwindDataContext(), context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it 'owns' the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let's use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));

        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;

        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It's incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team

    Read the article

  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20+ minutes on dual core 2GHz, 2G RAM machines.

    A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.)

    We are looking at combining projects (not nice, as we like the separation of concerns, but it is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync.

    I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass and you are wasting half the day watching the status bar deliver compile messages?

    UPDATE: Apologies, I neglected to mention this is a C# solution. Thanks for all the cpp suggestions, but it's been a few years since I've had to worry about headers. At a distance I say I miss C++, but I'm not sure I want to go back.

    EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):

    - New 3GHz laptop - the power of lost utilization works wonders when whinging to management
    - Disable antivirus during compile
    - 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI

    Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure; compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in the live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used, just for compile time performance.

    WORKAROUND: We are testing the practice of building new areas of the application in new solutions, importing in the latest dlls as required, then integrating them into the larger solution when we are happy with them. We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.
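    One way to see where the time actually goes, offered as a sketch rather than a tested fix (it assumes MSBuild 3.5 or later is on the path, and "MySolution.sln" is a placeholder for the real solution): a command-line build with a per-project performance summary, where the /m switch also lets independent projects build in parallel on that dual core:

        msbuild MySolution.sln /m /t:Rebuild /clp:PerformanceSummary

    If a handful of the 70+ projects turn out to dominate the 20 minutes, that points at where combining projects or splitting solutions would pay off most.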

    Read the article

  • Understanding SingleTableEntityPersister and QueryLoader

    - by Iapilgrim
    Hi, I have the following Hibernate model:

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Node extends StatefulEntity implements Inheritable, Cloneable {
            private Node _parent;
            private List<Node> _childNodes;
            ..
        }

        @Cache(usage = CacheConcurrencyStrategy.NONE, region = SitesConstants.CACHE_REGION)
        public class Page extends Node implements Defaultable, Securable {
            private RootZone _rootZone;
            ......

            @OneToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "root_zone_id", insertable = false, updatable = false)
            public RootZone getRootZone() {
                return _rootZone;
            }

            public void setRootZone(RootZone rootZone) {
                if (rootZone != null) {
                    rootZone.setPageId(this.getId());
                    _rootZone = rootZone;
                }
            }
        }

    I want to get all pages (I call getSiteTree), so I use this query:

        String hpql = "SELECT n FROM Node n ";

    In the trace I find:

        Page.setRootZone(RootZone) line: 155
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        BasicPropertyAccessor$BasicSetter.set(Object, Object, SessionFactoryImplementor) line: 66
        PojoEntityTuplizer(AbstractEntityTuplizer).setPropertyValues(Object, Object[]) line: 352
        PojoEntityTuplizer.setPropertyValues(Object, Object[]) line: 232
        SingleTableEntityPersister(AbstractEntityPersister).setPropertyValues(Object, Object[], EntityMode) line: 3580
        TwoPhaseLoad.initializeEntity(Object, boolean, SessionImplementor, PreLoadEvent, PostLoadEvent) line: 152
        QueryLoader(Loader).initializeEntitiesAndCollections(List, Object, SessionImplementor, boolean) line: 877
        QueryLoader(Loader).doQuery(SessionImplementor, QueryParameters, boolean) line: 752
        QueryLoader(Loader).doQueryAndInitializeNonLazyCollections(SessionImplementor, QueryParameters, boolean) line: 259
        QueryLoader(Loader).doList(SessionImplementor, QueryParameters) line: 2232
        QueryLoader(Loader).listIgnoreQueryCache(SessionImplementor, QueryParameters) line: 2129
        QueryLoader(Loader).list(SessionImplementor, QueryParameters, Set, Type[]) line: 2124
        QueryLoader.list(SessionImplementor, QueryParameters) line: 401
        QueryTranslatorImpl.list(SessionImplementor, QueryParameters) line: 363
        HQLQueryPlan.performList(QueryParameters, SessionImplementor) line: 196
        SessionImpl.list(String, QueryParameters) line: 1149
        QueryImpl.list() line: 102
        QueryImpl.getResultList() line: 67
        NodeDaoImpl.getSiteTree(long) line: 358
        PageNodeServiceImpl.getSiteTree(long) line: 797
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: 39
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: 25
        Method.invoke(Object, Object...) line: 597
        AopUtils.invokeJoinpointUsingReflection(Object, Method, Object[]) line: 307
        JdkDynamicAopProxy.invoke(Object, Method, Object[]) line: 198
        $Proxy100.getSiteTree(long) line: not available

    The call to setRootZone in Page makes Hibernate issue a hit to the database, which I don't want. So my questions are:

    - Why does the query "SELECT n FROM Node n" produce the unexpected trace above, while the query "SELECT n.nodename FROM Node n" does not? What is the mechanism behind this?
    - In case I don't want to see those trace logs, i.e. I want to fetch only the Node data, how do I do that?

    Note: I'm using Hibernate second-level caching.

    Thanks for your help. Sorry for my bad English :(

    Van

    Read the article

  • Windows Phone JSON deserialization

    - by user2042227
    I have a weird issue. I am making a few calls in my app to a web service, which replies with data. I am using a token-based login system, so the first time the user enters the app I get a token from the web service to log in that specific user, and that token returns only that user's details. The problem I am having is that when the user changes I need to make the calls again to get the new user's details. Using Visual Studio's breakpoint debugging, it shows the new user's token making the call; however, when the JSON is getting deserialized, it is as if it still reads the old data and deserializes that. When I exit and re-enter my app with the new user it works fine, so it's as if it is reading cached values, but I have no idea how to clear them. I am sure the new calls are being made and the problem lies with the deserializing, but I have tried clearing the values before deserializing them again, and nothing works. Am I missing something with the JSON deserializer? How can I clear its cached values?

    Here I make the call and set it not to cache, so it makes a new call every time:

        client.Headers[HttpRequestHeader.CacheControl] = "no-cache";
        var token_details = await client.DownloadStringTaskAsync(uri);

    And here I deserialize the result. It is at this point that the old data gets shown: the raw JSON inside "token_details" is correct, but once I deserialize token_details, it shows the wrong data:

        deserialized = JsonConvert.DeserializeObject(token_details);

    The class I am deserializing into is a simple class, nothing special happening here. I have even written the constructor so that it clears the values each time it gets called:

        public class test
        {
            public string status { get; set; }
            public string name { get; set; }
            public string birthday { get; set; }
            public string errorDes { get; set; }

            public test()
            {
                status = "";
                name = "";
                birthday = "";
                errorDes = "";
            }
        }

    URIs before making the calls:

        {https://whatever.co.za/token/?code=BEBCg==&id=WP7&junk=121edcd5-ad4d-4185-bef0-22a4d27f2d0c} - old call
        "UBCg==" - old reply
        {https://whatever.co.za/token/?code=ABCg==&id=WP7&junk=56cc2285-a5b8-401e-be21-fec8259de6dd} - new call
        "UBCg==" - new response, which is the same response as the old call

    As you can see, I attach a new GUID every time I make the call, and the new URI is read before making the DownloadStringTaskAsync method call, but it still returns the old data.

    Read the article

  • Custom dynamic error pages in Ruby on Rails not working

    - by PlanetMaster
    Hi, I'm trying to implement custom dynamic error pages following this post: http://www.perfectline.co.uk/blog/custom-dynamic-error-pages-in-ruby-on-rails

    I did exactly what the blog post says. I included config.action_controller.consider_all_requests_local = false in my environment.rb, but it is not working. My browser shows:

        Routing Error
        No route matches "/555" with {:method=>:get}

    So it looks like the rescues are not fired. I get the following in my log file:

        ActionController::RoutingError (No route matches "/555" with {:method=>:get}):
        Rendering rescues/layout (not_found)

    Is there some routing interfering with the code? I'm not sure what to look for. I'm running Rails 2.3.5. Here is the routes.rb file:

        ActionController::Routing::Routes.draw do |map|
          # routing of property URLs
          map.connect 'buy/:property_type_plural/:province/:city/:address/:house_number',
            :controller => 'properties', :action => 'show', :id => 'whatever'
          map.myimmonatie 'myimmonatie', :controller => 'myimmonatie/properties', :action => 'index'
          map.login "login", :controller => "user_sessions", :action => "create", :conditions => {:method => :post}
          map.login "login", :controller => "user_sessions", :action => "new"
          map.logout "logout", :controller => "user_sessions", :action => "destroy"
          map.buy "buy", :controller => 'buy'
          map.sell "sell", :controller => 'sell'
          map.home "home", :controller => 'home'
          map.disclaimer "disclaimer", :controller => 'disclaimer'
          map.sign_up "sign_up", :controller => 'users', :action => :new
          map.contact "contact", :controller => 'contact'
          map.resources :user_sessions
          map.resources :contact
          map.resources :password_resets
          map.resources :messages
          map.resources :users, :only => [:index, :new, :create, :activate, :edit, :profile, :password]
          map.resources :images
          map.resources :activation, :only => [:new, :resend]
          map.resources :email
          map.resources :properties, :except => [:index, :destroy]

          map.namespace :admin do |admin|
            admin.resources :users
            admin.resources :properties
            admin.resources :order_items, :as => :orders
            admin.resources :blog_posts, :as => :blog
          end

          map.connect 'myimmonatie/:action', :controller => 'users', :id => 'current',
            :requirements => {:action => /(profile)|(password)|(email)/}

          map.namespace :myimmonatie do |myimmonatie|
            myimmonatie.resources :messages, :controller => 'messages'
            myimmonatie.resources :password, :as => "password", :controller => 'users', :action => 'password'
            myimmonatie.resources :properties, :controller => 'properties'
            myimmonatie.resources :orders, :only => [:index, :show, :create, :new]
          end

          map.root :controller => "home"
          map.connect ':controller/:action'
          map.connect ':controller/:action/:id'
          map.connect ':controller/:action/:id.:format'
        end

        ActionController::Routing::Translator.translate_from_file('config','i18n-routes.yml')

    Read the article

  • I could use some help with my SQL command

    - by SuperSpy
    I've got a database table called 'mesg' with the following structure and example rows:

        receiver_id   | sender_id     | message                       | timestamp     | read
        --------------+---------------+-------------------------------+---------------+------------------------
        2 (me)        | 6 (nice girl) | 'I like you, more than ghoti' | yearsago      | 1 (seen it)
        2 (me)        | 6 (nice girl) | 'I like you, more than fish'  | now           | 1 (seen it)
        6 (nice girl) | 2 (me)        | 'Hey, wanna go fish?'         | yearsago+1sec | 0 (she hasn't seen it)

    It's quite a tricky thing that I want to achieve. I want to get the most recent message (= ORDER BY time DESC) + 'contact name' + time for each 'conversation', where:

    - Contact name = uname WHERE uid = 'contact ID' (the username is in another table)
    - Contact ID = if the session ID (me) equals sender_id, then receiver_id, else sender_id
    - A conversation is: me = receiver OR me = sender

    For example:

    - From: Bas Kreuntjes (the message from Bas is the most recent) "Hey $py, How are you doing..."
    - From: Sophie Naarden (second most recent) "Well hello, would you like to buy my spam? ..." (I'll work on that later >p)
    - To: Melanie van Deijk (my message to Melanie is third) "And? Did you kiss him? ..."

    That is a rough output.

    QUESTION: Could someone please help me set up a good SQL command. This will be the while loop:

        <?php
        $sql = "????";
        $result = mysql_query($sql);
        while($fetch = mysql_fetch_assoc($result)){
        ?>
            <div class="message-block">
                <h1><?php echo($fetch['uname']); ?></h1>
                <p><?php echo($fetch['message']); ?></p>
                <p><?php echo($fetch['time']); ?></p>
            </div>
        <?php
        }
        ?>

    I hope my explanation is good enough; if not, please tell me. Please don't mind my English, and the Dutch names (I am Dutch myself). Feel free to correct my English.

    UPDATE 1: This is the best I've got until now, but I do not want more than one row per conversation to show up (u = user table, m = message table):

        SELECT u.uname, m.message, m.receiver_uid, m.sender_uid, m.time
        FROM m, u
        WHERE (m.receiver_uid = '$myID' AND u.uid = m.sender_uid)
           OR (m.sender_uid = '$myID' AND u.uid = m.receiver_uid)
        ORDER BY time DESC;
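    For what it's worth, one classic shape for this kind of "latest row per group" requirement is a correlated subquery that keeps only the newest message of each conversation, treating a conversation as the unordered pair of the two user IDs. This is a sketch rather than a tested answer: it assumes MySQL, reuses the m/u table shorthand and the column names from the UPDATE 1 query (receiver_uid, sender_uid, time), assumes $myID is safely escaped, and rows tied on time would both be returned:

        SELECT u.uname, m.message, m.time
        FROM m
        JOIN u
          ON u.uid = IF(m.sender_uid = '$myID', m.receiver_uid, m.sender_uid)
        WHERE '$myID' IN (m.sender_uid, m.receiver_uid)
          AND m.time = (
                SELECT MAX(m2.time)
                FROM m m2
                WHERE LEAST(m2.sender_uid, m2.receiver_uid) = LEAST(m.sender_uid, m.receiver_uid)
                  AND GREATEST(m2.sender_uid, m2.receiver_uid) = GREATEST(m.sender_uid, m.receiver_uid)
              )
        ORDER BY m.time DESC;

    The LEAST/GREATEST pair is what collapses "me to her" and "her to me" into the same conversation regardless of direction.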

    Read the article

  • jQuery: resizing element cuts off parent's background

    - by Justine
    Hi, I've been trying to recreate an effect from this tutorial: http://jqueryfordesigners.com/jquery-look-tim-van-damme/

    Unfortunately, I want a background image underneath, and because of the resize going on in JavaScript, it gets resized and cut off as well, like so: http://dev.gentlecode.net/dotme/index-sample.html - you can view source there to check the HTML, but the basic structure looks like this:

        <div class="page">
            <div class="container">
                div.header
                ul.nav
                div.main
            </div>
        </div>

    Here is my jQuery code:

        $('ul.nav').each(function() {
            var $links = $(this).find('a'),
                panelIds = $links.map(function() { return this.hash; }).get().join(","),
                $panels = $(panelIds),
                $panelWrapper = $panels.filter(':first').parent(),
                delay = 500;

            $panels.hide();

            $links.click(function() {
                var $link = $(this),
                    link = (this);

                if ($link.is('.current')) {
                    return;
                }

                $links.removeClass('current');
                $link.addClass('current');

                $panels.animate({ opacity : 0 }, delay);
                $panelWrapper.animate({ height: 0 }, delay, function() {
                    var height = $panels.hide().filter(link.hash).show().css('opacity', 1).outerHeight();
                    $panelWrapper.animate({ height: height }, delay);
                });
            });

            var showtab = window.location.hash ? '[hash=' + window.location.hash + ']' : ':first';
            $links.filter(showtab).click();
        });

    In this example, panelWrapper is the div.main and it gets resized to fit the content of the tabs. The background is applied to the div.page, but because its child is getting resized, it resizes as well, cutting off the background image. It's hard to explain, so please look at the link above to see what I mean.

    I guess what I'm trying to ask is: is there a way to resize an element without resizing its parent? I tried setting height and min-height of .page to 100% and 101%, but that didn't work. I tried making the background image fixed, but nada. It also happens if I add the background to the body or even html. Help?

    Read the article

< Previous Page | 26 27 28 29 30 31 32  | Next Page >