Search Results

Search found 41295 results on 1652 pages for 'right to left'.


  • Slide Creation Checklist

    - by Daniel Moth
    PowerPoint is a great tool for conference (large audience) presentations, which is the context for the advice below. The #1 thing to keep in mind when you create slides (at least for conference sessions) is that they are there to help you remember what you were going to say (the flow and key messages) and for the audience to get a visual reminder of the key points. Slides are not there for the audience to read what you are going to say anyway. If they were, what would be the point of you being there? Slides are not holders for complete sentences (unless you are quoting) – use Microsoft Word for that purpose, either as a physical handout or as a URL link that you share with the audience. When you dry run your presentation, if you find yourself reading the bullets on your slide, you have missed the point. You have a message to deliver that can be done regardless of your slides – remember that. The focus of your audience should be on you, not the screen. Based on that premise, I have created a checklist that I go over before I start a new deck and also once I think my slides are ready.

    - Turn AutoFit OFF. I cannot stress this enough.
    - For each slide, explicitly pick a slide layout. In my presentations, I only use one Title Slide, a Section Header per demo slide, and for the rest of my slides one of these three: Title and Content, Title Only, Blank. Most people who are new to PowerPoint take whatever default layout New Slide creates for them and then start deleting and adding placeholders to that. You can do better than that (and you'll be glad you did if you also follow the 'Reset' item below).
    - Every slide must have an image.
    - Remove all punctuation (e.g. periods, commas) other than exclamation points and question marks (! ?).
    - Don't use color or other formatting (e.g. italics, bold) for text on the slide.
    - Check your animations. Avoid animations that hide elements that were on the slide (instead use a new slide and transition). Ensure that animations that bring new elements in bring them into white space instead of over other existing elements. A good test is to print the slide and see that it still makes sense even without the animation.
    - Print the deck in black and white choosing the "6 slides per page" option. Can I still read each slide without losing any information? If the answer is "no", go back and fix the slides so the answer becomes "yes".
    - Don't have more than 3 bullet levels/indents. In other words: you type some text on the slide, hit 'Enter', hit 'Tab', type some more text, and repeat that sequence at most one final time. Ideally your outer bullets have only one level of sub-bullets (i.e. one level of indentation beneath them).
    - Don't have more than 3-5 outer bullets per slide. Space them evenly vertically, e.g. with blank lines in between.
    - Don't wrap. For each bullet on all slides check: does the text for that bullet wrap to a second line? If it does, change the wording so it doesn't. Or create a terser bullet and make the original long text a sub-bullet of that one (thus decreasing the font size, but still being consistent) and have no wrapping.
    - Use the same consistent fonts (i.e. font face, font size, etc.) throughout the deck for each level of bullet. In other words, don't deviate from the PowerPoint template you chose (or that was chosen for you).
    - Go on each slide and hit 'Reset'. 'Reset' is a button on the 'Home' tab of the ribbon, or you can find the 'Reset Slide' menu when you right-click a slide in the left 'Slides' list. If your slides can survive that without you "fixing" things after the Reset action, you are golden!
    - For each slide ask yourself: if I had to replace this slide with a single sentence that conveys the key message, what would that sentence be? This exercise leads you to merge slides (where the key message is split) or to split a slide into many, if there were too many key messages on the slide in the first place. It can also lead you to redesign a slide so the text on it really is just explanation or evidence for the key message you are trying to convey.
    - Get the length right. Is the length of this deck suitable for the time you have been given to present? If not, cut content! It is far better to deliver less in a relaxed, polished, engaging, memorable way than to deliver more content in great haste. As a rule of thumb, multiply the number of slides by 2 minutes, add the time you need for each demo, and check whether that adds up to more than the time you have been allotted. If it does, start cutting content – we've all been there and it has to be done.

    As always, rules and guidelines are there to be bent and even broken sometimes. Start with the above and, on a slide-by-slide basis, decide which rules you want to bend. That is smarter than throwing all the rules out from the start, right? Comments about this post welcome at the original blog.

    Read the article

  • Help Prevent Carpal Tunnel Problems with Workrave

    - by Matthew Guay
    Whether for work or leisure, many of us spend entirely too much time on the computer every day. This puts us at risk of developing or aggravating Carpal Tunnel problems, but thanks to Workrave you can help avert these problems. Workrave helps with Carpal Tunnel problems by reminding you to get away from your computer periodically. Breaking up your computer time with movement can help alleviate many computer- and office-related health problems. Workrave helps by reminding you to take short pauses after several minutes of computer use, and longer breaks after continued use. You can also use it to keep yourself from using the computer for too much time in a day. Since you can change the settings to suit you, this can be a great way to make sure you're getting the breaks you need.

    Install Workrave on Windows: If you're using Workrave on Windows, download it (link below) and install it with the default settings. One installation setting you may wish to change is the startup. By default Workrave will run automatically when you start your computer; if you don't want this, you can simply uncheck the box and proceed with the installation. Once setup is finished, you can run Workrave directly from the installer, or you can open it from your Start menu by entering "workrave" in the search box.

    Install Workrave in Ubuntu: If you wish to use it in Ubuntu, you can install it directly from the Ubuntu Software Center. Click the Applications menu, and select Ubuntu Software Center. Enter "workrave" into the search box in the top right corner of the Software Center, and it will automatically find it. Click the arrow to proceed to Workrave's page. This will give you information about Workrave; simply click Install to install Workrave on your system. Enter your password when prompted. Workrave will automatically download and install. When finished, you can find Workrave in your Applications menu under Universal Access.

    Using Workrave: Workrave by default shows a small counter on your desktop, showing the length of time until your next Micro break (a 30 second break), Rest break (a 10 minute break), and the maximum amount of computer usage for the day. When it's time for a micro break, Workrave will pop up a reminder on your desktop. If you continue working, it will disappear at the end of the timer. If you stop, it will start a micro break, which will freeze most on-screen activities until the timer is over. You can click Skip or Postpone if you do not want to take a break right then. After an hour of work, Workrave will give you a 10 minute rest break. During this it will show you some exercises that can help eliminate eyestrain, muscle tension, and other problems from prolonged computer usage. You can click through the exercises, or skip or postpone the break if you wish.

    Preferences: You can change your Workrave preferences by right-clicking on its icon in your system tray and selecting Preferences. Here you can customize the time between your breaks, and the length of your breaks. You can also change your daily computer usage limit, and can even turn off the Postpone and Skip buttons on notifications if you want to make sure you follow Workrave and take your rests! From the context menu, you can also choose Statistics. This gives you an overview of how many breaks, prompts, and more were shown on a given day. It also shows a total Overdue time, which is the total length of the breaks you skipped or postponed. You can view your Workrave history as well by simply selecting a date on the calendar.
    Additionally, the Activity tab in the Statistics pane shows more info about your computer usage, including total mouse movement, mouse button clicks, and keystrokes.

    Conclusion: Whether you're suffering from Carpal Tunnel or trying to prevent it, Workrave is a great solution to help remind you to get away from your computer periodically and rest. Of course, since you can simply postpone or skip the prompts, you've still got to make an effort to help your own health. But it does give you a great way to remind yourself to get away from the computer, and especially for geeks, this may be something that we really need! Download Workrave.
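    For those who prefer the terminal over the Software Center, a hedged alternative route: Workrave ships in the standard Ubuntu repositories, so the usual apt commands should work as well.

    # Install Workrave from the standard Ubuntu repositories
    $ sudo apt-get update
    $ sudo apt-get install workrave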

    Read the article

  • 2d tank movement and turret solution

    - by Phil
    Hi! I'm making a simple top-down tank game on the iPad where the user controls the movement of the tank with the left "joystick" and the rotation of the turret with the right one. I've spent several hours just trying to get it to work decently, but now I turn to the pros :) I have two referential objects, one for the movement and one for the rotation. The referential objects always stay at most two units away from the tank and I use them to tell the tank in what direction to move. I chose this approach to decouple movement and rotational behaviour from the raw input of the joysticks; I believe this will make it simpler to implement whatever behaviour I want for the tank. My first problem is that the turret rotates the long way around to the target. By this I mean that the target can be -5 degrees away in rotation, and still it rotates 355 degrees instead of -5 degrees. I can't figure out why. The other problem is with the movement. It just doesn't feel right to have the tank turn while moving. I'd like a solution that would work as well for the AI as for the player: a black-box function for the movement where the caller only specifies in what direction the tank should move, and it moves there under the constraints that are imposed on it. I am using the standard joystick class found in the Unity iPhone package. This is the code I'm using for the movement:

    public class TankFollow : MonoBehaviour
    {
        // Check angle difference and turn accordingly
        public GameObject followPoint;
        public float speed;
        public float turningSpeed;

        void Update()
        {
            transform.position = Vector3.Slerp(transform.position, followPoint.transform.position, speed * Time.deltaTime);

            // Calculate angle
            var forwardA = transform.forward;
            var forwardB = (followPoint.transform.position - transform.position);
            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);

            if (angleDiff > 5)
            {
                transform.Rotate(new Vector3(0, (-turningSpeed * Time.deltaTime), 0));
            }
            else if (angleDiff < 5)
            {
                transform.Rotate(new Vector3(0, (turningSpeed * Time.deltaTime), 0));
            }

            transform.position = new Vector3(transform.position.x, 0, transform.position.z);
        }
    }

    And this is the code I'm using to rotate the turret:

    void LookAt()
    {
        var forwardA = -transform.right;
        var forwardB = (toLookAt.transform.position - transform.position);
        var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
        var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
        var angleDiff = Mathf.DeltaAngle(angleA, angleB);

        if (angleDiff - 180 > 1)
        {
            transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
        }
        else if (angleDiff - 180 < -1)
        {
            transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
            print((angleDiff - 180).ToString());
        }
    }

    Since I want the turret reference point to turn in relation to the tank (when you rotate the body, the turret should follow and not stay locked on, since that makes it impossible to control when you've got two thumbs to work with), I've made the TurretFollowPoint a child of the Turret object, which in turn is a child of the body. I'm thinking that I'm making it too difficult for myself with the reference points, but I imagine it's a good idea. Please be honest about this point. So I'll be grateful for any help I can get! I'm using Unity3d iPhone. Thanks!
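    An editorial aside, not part of the original question: the long-way rotation typically comes from testing angleDiff against 180 instead of steering angleDiff itself toward zero (note also that `else if (angleDiff < 5)` in TankFollow probably wants `< -5`, otherwise there is no dead zone). A minimal hedged sketch of shortest-path turret rotation, assuming a Unity version that provides Mathf.MoveTowardsAngle; the names toLookAt and turretSpeed are taken from the question, the method body is illustrative:

    void LookAtShortestPath()
    {
        // Direction from the turret to the target.
        var toTarget = toLookAt.transform.position - transform.position;

        // Desired yaw in degrees; Atan2(x, z) matches the question's convention.
        float targetAngle = Mathf.Atan2(toTarget.x, toTarget.z) * Mathf.Rad2Deg;

        // MoveTowardsAngle always takes the shortest path around the circle,
        // so a -5 degree difference stays -5 degrees instead of becoming 355.
        float newAngle = Mathf.MoveTowardsAngle(transform.eulerAngles.y, targetAngle, turretSpeed * Time.deltaTime);
        transform.rotation = Quaternion.Euler(0, newAngle, 0);
    }

    The same idea works for the hull: compute the desired heading from the joystick vector and let MoveTowardsAngle clamp the turn rate, which gives the AI and the player the same black-box steering function.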

    Read the article

  • Design a T-shirt for .NET Reflector Pro

    - by Laila
    Win a .NET Reflector Pro license, a box of Red Gate goodies, and a t-shirt printed with your design! Red Gate likes t-shirts. Each of our teams has one. In fact, each individual person has one, numbered according to when they joined the company: Red Gate's 1st, 2nd, and so on right up to Red Gate's 170th, with the slogan "More than just a number". Those t-shirts are important, chiefly because they remind the people wearing them that they are important. But that isn't enough. What really makes us great are the people who choose to use our tools. So we'd like to extend our tradition of t-shirts to include you and put the design of our next shirt entirely in your hands. We'd like you to come up with a witty slogan or create an inventive or simply beautiful t-shirt design for .NET Reflector Pro, our add-in for Visual Studio, which allows you to step into decompiled assemblies whilst debugging in Visual Studio. When you're done, post your masterpiece to Twitter with the hash tag #reflectortees, and @redgate will take a look! We'll pick the best design, and the winner will get a licensed copy of .NET Reflector Pro and a box of Red Gate goodies - not to mention a copy of their t-shirt. The winning design will go into production and be worn and given out at tradeshows, conferences, and user group events across the world, proudly bearing the name of their designer. We'll also pick three runners-up who will receive licenses for .NET Reflector Pro. Red Gate goodie box Interested? If you're up for the challenge, then we've got some resources to get you started. Inside the .zip file you'll find high-quality versions of the following: T-shirt templates: don't forget to design the front and the back! Different versions of the .NET Reflector Pro logo and Red Gate logo. Colour sheets to give you an easy reference to the Red Gate colours, including hex and RGB values. You can create and send us as many designs as you like, and each of them will be considered for the prize. To submit your designs, simply tweet including the competition hash tag, #reflectortees, and a link to somewhere we can see your design: either an image hosting site such as Twitpic, Flickr or Picasa, or a personal blog. You will need to create a Twitter account (which is free), if you don't already have one. You only have three limits: The background colour of the t-shirt should be one of our brand colours (red, light/dark grey or black), though you're welcome to use other colours in the rest of the design. You need to make use of either the .NET Reflector Pro logo OR the Red Gate logo (please keep them as they are) If you include any text or slogan, stick with just one or two colors for it. Apart from that, go wild. Go and do whatever it is you do when you get creative: whether you walk barefoot on the grass with a pencil and paper, sit cross-legged on a pile of cushions with a laptop, or simply close your eyes and float through a mist of ideas, now is your chance. Make sure you enjoy it. We're looking forward to seeing your creations. Terms and conditions 1. The closing date for entries is June 11th, 2010 (4 p.m. UK time). Red Gate Software Ltd reserves the right to extend the competition deadline at its discretion. If there is a revision, the revised date will be published on this blog and the date for announcing the results will be postponed accordingly. 2. The winning designer will be notified on June 14th, 2010 through Twitter. 
    The winner must claim his/her prize by sending us a high-resolution image of their design via email (i.e. Illustrator EPS files or another appropriate format, ideally at 300dpi). If the winner does not come forward within 3 days of the announcement, they will forfeit their prize and another winner will be selected from the runners-up. The names of the winner and runners-up will be posted on this blog by June 18th. 3. Entry is complete once the designer has posted a link to their entry in a tweet with the correct hash tag, #reflectortees. 4. Red Gate Software needs to hold the rights to use the winning design in order to put the t-shirt into production. We will make sure that this is fine with the winner before we do so, but if you do not want us holding the rights to your design, please do not submit your designs. We reserve the right to slightly alter or adjust any artwork we decide to use (mainly to make it easier to print), but we will make sure we contact the winner for approval first. The winner will also need to allow us the use of his/her name for the purposes of promoting their design. 5. Entries must be entirely your own original work and must not breach any copyright or third party rights. Red Gate Software Ltd will not be made partially or fully liable for any non-original work submitted by you. 6. This competition is free: you do not need to buy anything or be an existing customer to enter. 7. This competition is not open to employees of Red Gate Software Ltd, their families, or any other company directly connected with the administration of this promotion.

    Read the article

  • 'Content' is NOT 'Text' in XAML

    - by psheriff
    One of the key concepts in XAML is that the Content property of a XAML control like a Button or ComboBoxItem does not have to contain just textual data. In fact, Content can be almost any other XAML that you want. To illustrate, here is a simple example of how to spruce up your Button controls in Silverlight. Here is some very simple XAML that consists of two Button controls within a StackPanel on a Silverlight User Control.

    <StackPanel>
      <Button Name="btnHome"
              HorizontalAlignment="Left"
              Content="Home" />
      <Button Name="btnLog"
              HorizontalAlignment="Left"
              Content="Logs" />
    </StackPanel>

    The XAML listed above will produce a Silverlight control within a browser that looks like Figure 1. Figure 1: Normal button controls are quite boring. With just a little bit of refactoring to move the button attributes into Styles, we can make the buttons look a little better. I am a big believer in Styles, so I typically create a Resources section within my user control where I can factor out the common attribute settings for a particular set of controls. Here is a Resources section that I added to my Silverlight user control.

    <UserControl.Resources>
      <Style TargetType="Button"
             x:Key="NormalButton">
        <Setter Property="HorizontalAlignment"
                Value="Left" />
        <Setter Property="MinWidth"
                Value="50" />
        <Setter Property="Margin"
                Value="10" />
      </Style>
    </UserControl.Resources>

    Now, back in the XAML within the Grid control, I update the Button controls to use the Style attribute and have each button use the static resource called NormalButton.

    <StackPanel>
      <Button Name="btnHome"
              Style="{StaticResource NormalButton}"
              Content="Home" />
      <Button Name="btnLog"
              Style="{StaticResource NormalButton}"
              Content="Logs" />
    </StackPanel>

    With the additional attributes set in the Resources section on the Button, the above XAML will now display the two buttons as shown in Figure 2. Figure 2: Use Resources to make buttons more consistent. Now let's redesign these buttons even more. Instead of using words for each button, let's replace the Content property with a picture. As they say… a picture is worth a thousand words, so let's take advantage of that. Modify each of the Button controls to eliminate the Content attribute and instead insert an <Image> control between the <Button> and </Button> tags. Add a ToolTip to still display the words you had before in the Content, and you will now have better-looking buttons, as shown in Figure 3. Figure 3: Using pictures instead of words can be an effective method of communication. The XAML to produce Figure 3 is shown in the following listing:

    <StackPanel>
      <Button Name="btnHome"
              ToolTipService.ToolTip="Home"
              Style="{StaticResource NormalButton}">
        <Image Style="{StaticResource NormalImage}"
               Source="Images/Home.jpg" />
      </Button>
      <Button Name="btnLog"
              ToolTipService.ToolTip="Logs"
              Style="{StaticResource NormalButton}">
        <Image Style="{StaticResource NormalImage}"
               Source="Images/Log.jpg" />
      </Button>
    </StackPanel>

    You will also need to add the following XAML to the User Control's Resources section.

    <Style TargetType="Image"
           x:Key="NormalImage">
      <Setter Property="Width"
              Value="50" />
    </Style>

    Add Multiple Controls to Content: Now, since the Content can be whatever we want, you could also modify the Content of each button to be a StackPanel control. Then you can have an image and text within the button.

    <StackPanel>
      <Button Name="btnHome"
              ToolTipService.ToolTip="Home"
              Style="{StaticResource NormalButton}">
        <StackPanel>
          <Image Style="{StaticResource NormalImage}"
                 Source="Images/Home.jpg" />
          <TextBlock Text="Home"
                     Style="{StaticResource NormalTextBlock}" />
        </StackPanel>
      </Button>
      <Button Name="btnLog"
              ToolTipService.ToolTip="Logs"
              Style="{StaticResource NormalButton}">
        <StackPanel>
          <Image Style="{StaticResource NormalImage}"
                 Source="Images/Log.jpg" />
          <TextBlock Text="Logs"
                     Style="{StaticResource NormalTextBlock}" />
        </StackPanel>
      </Button>
    </StackPanel>

    You will need to add one more resource for the TextBlock control too.

    <Style TargetType="TextBlock"
           x:Key="NormalTextBlock">
      <Setter Property="HorizontalAlignment"
              Value="Center" />
    </Style>

    All of the above will now produce the following: Figure 4: Add multiple controls to the content to make your buttons even more interesting. Summary: While this is a simple example, you can see how XAML Content has great flexibility. You could add a MediaElement control as the content of a Button and play a video within the Button. Not that you would necessarily do this, but it does work. What is nice about adding different content within the Button control is that you still get the Click event and other attributes of a button, but it does not necessarily look like a normal button. Good luck with your coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled "Silverlight XAML for the Complete Novice - Part 1."
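    As a hedged illustration of the MediaElement idea mentioned in the Summary (the video path and sizes here are made up, not from the original article):

    <!-- A video as Button content; the file name Videos/Intro.wmv is illustrative only -->
    <Button Name="btnVideo"
            ToolTipService.ToolTip="Play"
            Style="{StaticResource NormalButton}">
      <MediaElement Source="Videos/Intro.wmv"
                    Width="100"
                    AutoPlay="True" />
    </Button>

    The Button still raises its Click event as usual; the MediaElement simply becomes its visual content.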

    Read the article

  • MD5 vertex skinning problem extending to multi-jointed skeleton (GPU Skinning)

    - by Soapy
    Currently I'm trying to implement GPU skinning in my project. So far I have achieved single-joint translation and rotation, and multi-jointed translation. The problem arises when I try to rotate a multi-jointed skeleton. The image above shows the current progress. The left image shows how the model should deform. The middle image shows how it deforms in my project. The right image shows a better (though still not correct) deform, obtained by inverting a certain value, which I explain below. The way I get my animation data is by exporting it to the MD5 format (MD5mesh for mesh data and MD5anim for animation data). When I come to parse the animation data, for each frame I check whether the bone has a parent. If not, the data is passed in as-is from the MD5anim file. If it does have a parent, I transform the bone's position by the parent's orientation, then add this to the parent's translation. Then the parent and child orientations get concatenated. This is covered at this website.

    if (Parent < 0)
    {
        ... // Save this data without editing it
    }
    else
    {
        Math3::vec3 rpos;
        Math3::quat pq = Parent.Quaternion;
        Math3::quat pqi(pq);
        pqi.InvertUnitQuat();
        pqi.Normalise();
        Math3::quat::RotateVector3(rpos, pq, jv);
        Math3::vec3 npos(rpos + Parent.Pos);
        this->Translation = npos;
        Math3::quat nq = pq * jq;
        nq.Normalise();
        this->Quaternion = nq;
    }

    And to achieve the image to the right, all I need to do is change Math3::quat::RotateVector3(rpos, pq, jv); to Math3::quat::RotateVector3(rpos, pqi, jv); - why is that? And this is my skinning shader.

    SkinningShader.vert

    #version 330 core
    smooth out vec2 vVaryingTexCoords;
    smooth out vec3 vVaryingNormals;
    smooth out vec4 vWeightColor;

    uniform mat4 MV;
    uniform mat4 MVP;
    uniform mat4 Pallete[55];
    uniform mat4 invBindPose[55];

    layout(location = 0) in vec3 vPos;
    layout(location = 1) in vec2 vTexCoords;
    layout(location = 2) in vec3 vNormals;
    layout(location = 3) in int vSkeleton[4];
    layout(location = 4) in vec3 vWeight;

    void main()
    {
        vec4 wpos = vec4(vPos, 1.0);
        vec4 norm = vec4(vNormals, 0.0);
        vec4 weight = vec4(vWeight, (1.0f - (vWeight[0] + vWeight[1] + vWeight[2])));
        normalize(weight);

        mat4 BoneTransform;
        for (int i = 0; i < 4; i++)
        {
            if (vSkeleton[i] != -1)
            {
                if (i == 0)
                {
                    // These are interchangeable for some reason
                    // BoneTransform = ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                    BoneTransform = ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                }
                else
                {
                    // These are interchangeable for some reason
                    // BoneTransform += ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                    BoneTransform += ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                }
            }
        }

        wpos = BoneTransform * wpos;
        vWeightColor = weight;
        vVaryingTexCoords = vTexCoords;
        vVaryingNormals = normalize(vec3(vec4(vNormals, 0.0) * MV));
        gl_Position = wpos * MVP;
    }

    The Pallete matrices are the matrices calculated using the above code (a rotation and a translation matrix are created from the translation and quaternion). The invBindPose matrices are simply the inverted matrices created from the joints in the MD5mesh file.

    Update 1: I looked at GLM to compare the values I get with my own implementation. They turn out to be exactly the same. So now I'm checking whether there's a problem with matrix creation...

    Update 2: Looked at GLM again to compare matrix creation using quaternions. Turns out that's not the problem either.
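    An editorial aside, not part of the original question: when swapping pq for pqi gives "better" results, the usual culprit is a mismatched quaternion convention somewhere (multiplication order or the rotation formula). For reference, a hedged, self-contained sketch of the standard Hamilton convention; all names here are illustrative and not from the Math3 library used above:

    // Standard convention: rotating vector v by unit quaternion q is
    //   v' = q * v * conjugate(q)
    // If a library computes conjugate(q) * v * q instead, every rotation
    // comes out inverted - the same symptom as swapping pq for pqi.
    struct Quat { float w, x, y, z; };

    Quat conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

    Quat mul(const Quat& a, const Quat& b) // Hamilton product
    {
        return {
            a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
        };
    }

    void rotateVector(float out[3], const Quat& q, const float v[3])
    {
        Quat p = { 0.0f, v[0], v[1], v[2] };   // v as a pure quaternion
        Quat r = mul(mul(q, p), conjugate(q));
        out[0] = r.x; out[1] = r.y; out[2] = r.z;
    }

    Checking Math3::quat::RotateVector3 and operator* against this reference (and against GLM, which follows the same convention) on a single known rotation, e.g. 90 degrees about Y, should show exactly where the sign flips.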

    Read the article

  • Using a portable USB monitor in Ubuntu 13.04 (AOC e1649Fwu - DisplayLink)

    Having access to a little bit of IT hardware extravaganza isn't that easy here in Mauritius, for exactly two reasons - either it is simply not available, or it is expensive like nowhere else. Well, by chance I came across an advert by a local hardware supplier and their offer of the week caught my attention - a portable USB monitor. Sounds cool, and the specs are okay as well. It's completely driven via USB 2.0, it's lightweight, the dimensions would fit into my laptop bag, and the resolution of 1366 x 768 pixels is okay for a second screen. Long story, short ending: I called them, only to learn that they were out of stock - how convenient! Well, as usual I left some contact details and got the regular 'We call you back' answer. Surprisingly, I didn't receive a phone call as promised, and after starting to complain via social media networks they finally came back to me with new units available - and *drum-roll* still the same price tag as promoted (and free delivery on top, as one of their employees lives in Flic en Flac). Guess it was a no-brainer to get at least one unit to fool around with. In the worst case it might end up as an image frame on the shelf or so...

    The usual suspects... Ubuntu first! Of course, the packaging mentions only Windows or Mac OS as supported operating systems, and without hesitation at all I hooked up the device on my main machine running Ubuntu 13.04. Result: blackout... Hm, actually not the situation I was looking for, but okay, it can't be too difficult to get this piece of hardware up and running. Following the output of syslogd (or dmesg if you prefer), the device has been recognised successfully but we got stuck in the initialisation phase.

    Oct 12 08:17:23 iospc2 kernel: [69818.689137] usb 2-4: new high-speed USB device number 5 using ehci-pci
    Oct 12 08:17:23 iospc2 kernel: [69818.800306] usb 2-4: device descriptor read/64, error -32
    Oct 12 08:17:24 iospc2 kernel: [69819.043620] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
    Oct 12 08:17:24 iospc2 kernel: [69819.043630] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    Oct 12 08:17:24 iospc2 kernel: [69819.043636] usb 2-4: Product: e1649Fwu
    Oct 12 08:17:24 iospc2 kernel: [69819.043642] usb 2-4: Manufacturer: DisplayLink
    Oct 12 08:17:24 iospc2 kernel: [69819.043647] usb 2-4: SerialNumber: FJBD7HA000778
    Oct 12 08:17:24 iospc2 kernel: [69819.046073] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
    Oct 12 08:17:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
    Oct 12 08:17:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
    Oct 12 08:17:30 iospc2 kernel: [69825.411220] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
    Oct 12 08:17:30 iospc2 kernel: [69825.498778] udl 2-4:1.0: fb1: udldrmfb frame buffer device
    Oct 12 08:17:30 iospc2 kernel: [69825.498786] [drm] Initialized udl 0.0.1 20120220 on minor 1
    Oct 12 08:17:30 iospc2 kernel: [69825.498909] usbcore: registered new interface driver udl

    The device has been recognised as a USB device without any question, and it is listed properly:

    # lsusb
    ...
    Bus 002 Device 005: ID 17e9:4107 DisplayLink ...

    A quick and dirty research on the net gave me some hints towards the udlfb framebuffer driver for USB DisplayLink devices. By default this kernel module is blacklisted

    $ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
    #blacklist udl
    blacklist udlfb

    and it is recommended to load it manually.
    So, unloading the whole udl stack and giving udlfb a shot:

    Oct 12 08:22:31 iospc2 kernel: [70126.642809] usbcore: registered new interface driver udlfb

    But still no reaction on the external display, which supposedly should have been on and green.

    Display okay? Test run on Windows: Just to be on the safe side and to exclude any hardware-related defects - you never know what happened during delivery - I moved the display to a new position on the opposite side of my laptop, installed the display drivers first in Windows Vista (I know, I know...) as recommended in the manual, and then finally hooked it up on that machine. Tada! The display has been recognised correctly and I have a proper choice between cloning and extending my desktop. (Testing whether the display is working properly - using Windows Vista.) Okay, good to know that there is nothing wrong on the hardware side; it's just software...

    Back to Ubuntu - kernel too old: Some more research on Google, and various hits recommend that the original DisplayLink driver has been merged into recent kernel development and that one should manually upgrade the kernel image (and both header) packages for Ubuntu. At least kernel 3.9 or higher is necessary, so I went to this URL: http://kernel.ubuntu.com/~kernel-ppa/mainline/ and downloaded all the good stuff from the v3.9-raring directory. The installation itself is easy going via dpkg:

    $ sudo dpkg -i linux-image-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb
    $ sudo dpkg -i linux-headers-3.9.0-030900_3.9.0-030900.201304291257_all.deb
    $ sudo dpkg -i linux-headers-3.9.0-030900-generic_3.9.0-030900.201304291257_amd64.deb

    As with any kernel upgrade, it is necessary to restart the system in order to use the new one. Said and done:

    $ uname -r
    3.9.0-030900-generic

    And now connecting the external display gives me the following output in /var/log/syslog:

    Oct 12 17:51:36 iospc2 kernel: [ 2314.984293] usb 2-4: new high-speed USB device number 6 using ehci-pci
    Oct 12 17:51:36 iospc2 kernel: [ 2315.096257] usb 2-4: device descriptor read/64, error -32
    Oct 12 17:51:36 iospc2 kernel: [ 2315.337105] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
    Oct 12 17:51:36 iospc2 kernel: [ 2315.337115] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    Oct 12 17:51:36 iospc2 kernel: [ 2315.337122] usb 2-4: Product: e1649Fwu
    Oct 12 17:51:36 iospc2 kernel: [ 2315.337127] usb 2-4: Manufacturer: DisplayLink
    Oct 12 17:51:36 iospc2 kernel: [ 2315.337132] usb 2-4: SerialNumber: FJBD7HA000778
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338292] udlfb: DisplayLink e1649Fwu - serial #FJBD7HA000778
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338299] udlfb: vid_17e9&pid_4107&rev_0129 driver's dlfb_data struct at ffff880117e59000
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338303] udlfb: console enable=1
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338306] udlfb: fb_defio enable=1
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338309] udlfb: shadow enable=1
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338468] udlfb: vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338473] udlfb: DL chip limited to 1500000 pixel modes
    Oct 12 17:51:36 iospc2 kernel: [ 2315.338565] udlfb: allocated 4 65024 byte urbs
    Oct 12 17:51:36 iospc2 kernel: [ 2315.343592] hid-generic 0003:17E9:4107.0009: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
    Oct 12 17:51:36 iospc2 mtp-probe: checking bus 2, device 6: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
    Oct 12 17:51:36 iospc2 mtp-probe: bus: 2, device: 6 was not an MTP device
    Oct 12 17:51:36 iospc2 kernel: [ 2315.426583] udlfb: 1366x768 @ 59 Hz valid mode
    Oct 12 17:51:36 iospc2 kernel: [ 2315.426589] udlfb: Reallocating framebuffer. Addresses will change!
    Oct 12 17:51:36 iospc2 kernel: [ 2315.428338] udlfb: 1366x768 @ 59 Hz valid mode
    Oct 12 17:51:36 iospc2 kernel: [ 2315.428343] udlfb: set_par mode 1366x768
    Oct 12 17:51:36 iospc2 kernel: [ 2315.430620] udlfb: DisplayLink USB device /dev/fb1 attached. 1366x768 resolution. Using 4104K framebuffer memory

    Okay, that looks more promising, but still only blackout on the external screen... And yes, due to my previous modifications I had swapped the blacklisted kernel modules:

    $ less /etc/modprobe.d/blacklist-framebuffer.conf | grep udl
    blacklist udl
    #blacklist udlfb

    Silly me! Okay, back to the original situation in which udl is allowed and udlfb blacklisted. Now the logging looks similar to this, and the screen shows those maroon-brown and azure-blue horizontal bars as described on other online resources.

    Oct 15 21:27:23 iospc2 kernel: [80934.308238] usb 2-4: new high-speed USB device number 5 using ehci-pci
    Oct 15 21:27:23 iospc2 kernel: [80934.420244] usb 2-4: device descriptor read/64, error -32
    Oct 15 21:27:24 iospc2 kernel: [80934.660822] usb 2-4: New USB device found, idVendor=17e9, idProduct=4107
    Oct 15 21:27:24 iospc2 kernel: [80934.660832] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    Oct 15 21:27:24 iospc2 kernel: [80934.660838] usb 2-4: Product: e1649Fwu
    Oct 15 21:27:24 iospc2 kernel: [80934.660844] usb 2-4: Manufacturer: DisplayLink
    Oct 15 21:27:24 iospc2 kernel: [80934.660850] usb 2-4: SerialNumber: FJBD7HA000778
    Oct 15 21:27:24 iospc2 kernel: [80934.663391] hid-generic 0003:17E9:4107.0008: hiddev0,hidraw5: USB HID v1.10 Device [DisplayLink e1649Fwu] on usb-0000:00:1d.7-4/input1
    Oct 15 21:27:24 iospc2 mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-4"
    Oct 15 21:27:24 iospc2 mtp-probe: bus: 2, device: 5 was not an MTP device
    Oct 15 21:27:25 iospc2 kernel: [80935.742407] [drm] vendor descriptor length:17 data:17 5f 01 00 15 05 00 01 03 00 04
    Oct 15 21:27:25 iospc2 kernel: [80935.834403] udl 2-4:1.0: fb1: udldrmfb frame buffer device
    Oct 15 21:27:25 iospc2 kernel: [80935.834416] [drm] Initialized udl 0.0.1 20120220 on minor 1
    Oct 15 21:27:25 iospc2 kernel: [80935.836389] usbcore: registered new interface driver udl
    Oct 15 21:27:25 iospc2 kernel: [80936.021458] [drm] write mode info 153

    Next, it's time to enable the display for our needs... This can be done either via the UI (Settings Manager => Display; adding the external USB display under Linux isn't an issue after all) or via the console, whichever you prefer. Personally, I like the console. With the help of xrandr we get the screen identifier first

    $ xrandr
    Screen 0: minimum 320 x 200, current 3200 x 1080, maximum 32767 x 32767
    LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 331mm x 207mm
    ...
    DVI-0 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
       1366x768       60.0*+

    and then give it the usual shot with auto-configuration. Let the system decide what's best for your hardware...

    $ xrandr --output DVI-0 --off
    $ xrandr --output DVI-0 --auto

    And there we go... cloned output of the main display. New kernel, new display... The external USB display works out of the box with a Linux kernel > 3.9.0. Despite what a good number of resources say, it is absolutely not necessary to create a Device or Screen section in one of the Xorg.conf files.
    This information belongs to the past and is not valid on kernel 3.9 or higher.

    Same hardware but Windows 8: Of course, I wanted to know how the latest incarnation from Redmond would handle the new hardware... Flawless! The most interesting aspect here: I did not use the driver installation medium, on purpose. And I was right... not too long afterwards a dialog with the EULA of DisplayLink appeared on the main screen, and after confirmation of same it took some more seconds and the external USB monitor was ready to rumble. Well, and not only that one... but see for yourself. This time Windows 8 was the easiest solution after all.

    Resume: I can highly recommend this type of hardware to anyone asking me. Although its dimensions are 15.6", it is actually lighter than my Samsung Galaxy Tab 10.1 and it still fits into my laptop bag without any issues. From now on... no more single screen while developing software on the road!
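    A hedged follow-up to the xrandr section above (the output names DVI-0 and LVDS1 are taken from the article's own xrandr listing): instead of cloning, the USB panel can extend the desktop, or mirroring can be requested explicitly.

    $ xrandr --output DVI-0 --auto --right-of LVDS1   # extend to the right of the internal panel
    $ xrandr --output DVI-0 --auto --same-as LVDS1    # or mirror the internal panel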

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes, and that's what we spent the most time on, but we also addressed a number of other things:

    1. Performance improvements. We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve the performance of queries, exports, action scripts, etc. We implemented some finer-grained locking, so fewer operations will block other users while they are in progress. We also made some optimizations to improve performance when you have a lot of network or database latency. Just a few examples: An import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact node name now completes within 2 seconds, even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds.

    2. Upgrade support. This release supports automatic upgrade from previous releases, built right into the console.

    3. Console improvements. The Console has been reorganized and made easier to use. It is also much more multi-threaded, so it responds quicker without freezing up when you save changes or when it needs to get status.

    4. Property namespaces. Properties now have a concept called a namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Previously, if you had an AccountType and you pulled in the HFM template, which also has AccountType, you ended up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels, but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (e.g. Custom.AccountType vs HFM.AccountType). I'll write more about this one later.

    5. Single Sign-On. DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution, then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. Once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token; otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item).

    6. URL-based navigation. You can now specify the application you want to log into via the URL. Combined with SSO and your intranet, it becomes easy to provide links on your intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow tools like Oracle BPEL: you can automatically generate links within emails that will take users directly to a specific node in the UI.

    7. Job status and cancellation. A lot of the jobs now report their status and support true cancellation. Action scripts also report a percentage complete, since the amount of work is known ahead of time.

    8. Action script options. Action scripts support Option declarations at the top of the file so a script can self-describe (when an option is specified in the file, the corresponding setting in the UI is ignored). For example:

    Option|DetectDelimiter
    Option|UsePropertyNames|true

    This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use labels we automatically try to match them up if they are unique. Any duplicates are indicated, and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties.

    There are other, more minor changes and, like I said earlier, a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.

    Read the article

  • A tale from a Stalker

    - by Peter Larsson
    Today I thought I should write something about a stalker I've got. Don't get me wrong, I have way more fans than stalkers, but this stalker is particularly persistent towards me. It all started when I wrote about Relational Division with Sets late last year (http://weblogs.sqlteam.com/peterl/archive/2010/07/02/Proper-Relational-Division-With-Sets.aspx) and, no matter what he tried, he didn't get a better performing query than mine. But this didn't click with me until later in the conversation. He must have saved himself up for 9 months before posting to me again. Well... Some days ago I got an email from someone I thought I didn't know. Here is his first email:

    Hi, I want a proper solution for achievement the result. The solution must be standard query, means no using as any native code like TOP clause, also the query should run in SQL Server 2000 (no CTE use). We have a table with consecutive keys (nbr) that is not exact sequence. We need bringing all values related with nearest key in the current key row. See the DDL:

    CREATE TABLE Nums(nbr INTEGER NOT NULL PRIMARY KEY, val INTEGER NOT NULL);
    INSERT INTO Nums(nbr, val) VALUES (1, 0),(5, 7),(9, 4);

    See the Result:

    pre_nbr     pre_val     nbr         val         nxt_nbr     nxt_val
    ----------- ----------- ----------- ----------- ----------- -----------
    NULL        NULL        1           0           5           7
    1           0           5           7           9           4
    5           7           9           4           NULL        NULL

    The goal is suggesting most elegant solution. I would like see your best solution first, after that I will send my best (if not same with yours)

    Notice there is no name, no "please", nothing polite asking for my help. So, off the top of my head I sent him two solutions, following the rule "Work on SQL Server 2000 and only standard non-native code".
    -- Peso 1
    SELECT      pre_nbr,
                (
                    SELECT  x.val
                    FROM    dbo.Nums AS x
                    WHERE   x.nbr = d.pre_nbr
                ) AS pre_val,
                d.nbr,
                d.val,
                d.nxt_nbr,
                (
                    SELECT  x.val
                    FROM    dbo.Nums AS x
                    WHERE   x.nbr = d.nxt_nbr
                ) AS nxt_val
    FROM        (
                    SELECT  (
                                SELECT  MAX(x.nbr) AS nbr
                                FROM    dbo.Nums AS x
                                WHERE   x.nbr < n.nbr
                            ) AS pre_nbr,
                            n.nbr,
                            n.val,
                            (
                                SELECT  MIN(x.nbr) AS nbr
                                FROM    dbo.Nums AS x
                                WHERE   x.nbr > n.nbr
                            ) AS nxt_nbr
                    FROM    dbo.Nums AS n
                ) AS d

    -- Peso 2
    CREATE TABLE #Temp
    (
        ID INT IDENTITY(1, 1) PRIMARY KEY,
        nbr INT,
        val INT
    )

    INSERT      #Temp
                (nbr, val)
    SELECT      nbr,
                val
    FROM        dbo.Nums
    ORDER BY    nbr

    SELECT      pre.nbr AS pre_nbr,
                pre.val AS pre_val,
                t.nbr,
                t.val,
                nxt.nbr AS nxt_nbr,
                nxt.val AS nxt_val
    FROM        #Temp AS pre
    RIGHT JOIN  #Temp AS t ON t.ID = pre.ID + 1
    LEFT JOIN   #Temp AS nxt ON nxt.ID = t.ID + 1

    DROP TABLE  #Temp

    Notice there are no indexes on the #Temp table yet. And here is where the conversation derailed. First I got this response back:

    Now my solutions:

    --My 1st Slt
    SELECT T2.*, T1.*, T3.*
      FROM Nums AS T1
           LEFT JOIN Nums AS T2
             ON T2.nbr = (SELECT MAX(nbr)
                            FROM Nums
                           WHERE nbr < T1.nbr)
           LEFT JOIN Nums AS T3
             ON T3.nbr = (SELECT MIN(nbr)
                            FROM Nums
                           WHERE nbr > T1.nbr);

    --My 2nd Slt
    SELECT MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END) AS pre_nbr,
           (SELECT val FROM Nums WHERE nbr = MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END)) AS pre_val,
           N1.nbr AS cur_nbr, N1.val AS cur_val,
           MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END) AS nxt_nbr,
           (SELECT val FROM Nums WHERE nbr = MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END)) AS nxt_val
      FROM Nums AS N1,
           Nums AS N2
     GROUP BY N1.nbr, N1.val;

    /*
    My 1st Slt
    Table 'Nums'. Scan count 7, logical reads 14
    My 2nd Slt
    Table 'Nums'. Scan count 4, logical reads 23
    Peso 1
    Table 'Nums'. Scan count 9, logical reads 28
    Peso 2
    Table '#Temp'. Scan count 0, logical reads 7
    Table 'Nums'. Scan count 1, logical reads 2
    Table '#Temp'. Scan count 3, logical reads 16
    */

    To this, I emailed him back asking for a scalability test: "What if you try with a Nums table with 100,000 rows?" His response to that started to get nasty:

    I have to say Peso 2 is not acceptable. As I said before the solution must be standard, ORDER BY is not part of standard SELECT. Try this without ORDER BY:

    Truncate Table Nums
    INSERT INTO Nums (nbr, val) VALUES (1, 0),(9,4), (5, 7)

    So now we have new rules: no ORDER BY, because it's not standard SQL! Of course I asked him: "Why do you have that idea? ORDER BY is not standard?" To this, his replies went stranger and stranger: "Standard Select = Set-based (no any cursor) It's free to know, just refer to Advanced SQL Programming by Celko or mail to him if you accept comments from him." What the stalker probably doesn't know is that Mr Celko and I are occasionally involved in some conversation and thus we exchange emails. I don't know if this reference to Mr Celko was made to intimidate me. So I answered him, still politely: "What do you mean? The SELECT itself has a 'cursor under the hood'." Now the stalker got rude: "But however I mean the solution must no containing any order by, top... No problem, I do not like Peso 2, it's very non-intelligent and elementary." Yes, Peso 2 is elementary, but most performing queries are... And now is when I started to feel the stalker really wanted to achieve something else, so I wrote to him: "So what is your goal? Have a query that performs well, or a query that is super-portable? My Peso 2 outperforms any of your code with a factor of 100 when using more than 100,000 rows."
    While I awaited his answer, I posted him this query:

    -- Peso 3
    SELECT      MAX(CASE WHEN d = 1 THEN nbr ELSE NULL END) AS pre_nbr,
                MAX(CASE WHEN d = 1 THEN val ELSE NULL END) AS pre_val,
                MAX(CASE WHEN d = 0 THEN nbr ELSE NULL END) AS nbr,
                MAX(CASE WHEN d = 0 THEN val ELSE NULL END) AS val,
                MAX(CASE WHEN d = -1 THEN nbr ELSE NULL END) AS nxt_nbr,
                MAX(CASE WHEN d = -1 THEN val ELSE NULL END) AS nxt_val
    FROM        (
                    SELECT  nbr,
                            val,
                            ROW_NUMBER() OVER (ORDER BY nbr) AS SeqID
                    FROM    dbo.Nums
                ) AS s
    CROSS JOIN  (
                    VALUES  (-1),
                            (0),
                            (1)
                ) AS x(d)
    GROUP BY    SeqID + x.d
    HAVING      COUNT(*) > 1

    And here are the stats:

    Table 'Nums'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    It beats the hell out of your queries… Now I finally got a response from my stalker, and now I also clicked who he was. This is his response:

    Why you post my original method with a bit change under you name? I do not like it. See: http://www.sqlservercentral.com/Forums/Topic468501-362-14.aspx

    ;WITH C AS (
    SELECT seq_nbr, k,
           DENSE_RANK() OVER(ORDER BY seq_nbr ASC) + k AS grp_fct
      FROM [Sample]
           CROSS JOIN
           (VALUES (-1), (0), (1)
           ) AS D(k)
    )
    SELECT MIN(seq_nbr) AS pre_value,
           MAX(CASE WHEN k = 0 THEN seq_nbr END) AS current_value,
           MAX(seq_nbr) AS next_value
      FROM C
    GROUP BY grp_fct
    HAVING min(seq_nbr) < max(seq_nbr);

    These posts: Posted Tuesday, April 12, 2011 10:04 AM; Posted Tuesday, April 12, 2011 1:22 PM. Why post a solution where will not work in SQL Server 2000?

    Wait a minute! His own solution is using both a CTE and a ranking function, so his query will not work on SQL Server 2000 either! Bummer... The reference to "Me not like" are my exact words in a previous topic on SQLTeam.com, and when I remembered the phrasing I also knew who he was. See this topic: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 where he writes a query and posts it under my name, as if I wrote it. So I answered him this (less politely):

    Like I keep track of all topics in the whole world… :) So you think you are the only one coming up with this idea? Besides, "M S solution" doesn't work. This is the result I get:

    pre_value   current_value   next_value
    1           1               5
    1           5               9
    5           9               9

    And I did nothing like you did here, where you posted a solution which you "thought" I should write: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 So why are you yourself using a ranking function when this was not allowed per your original email, and no CTE? You use a CTE in your link above, which does not work in SQL Server 2000. All this makes no sense to me, other than that you are trying your best to, once in a lifetime, create a better performing query than me?

    After a few hours I got this email back. I don't fully understand it, but it's probably a language barrier.
>>Like I keep track of all topics in the whole world… :) So you think you are the only one coming up with this idea?<< You right, but do not think you are the first creator of this.

>>Besides, “M S Solution” doesn’t work. This is the result I get << Why you get so unimportant mistake? See this post to correct it: Posted 4/12/2011 8:22:23 PM

>> So why are you yourself using ranking function when this was not allowed per your original email, and no cte? You use CTE in your link above, which do not work in SQL Server 2000. << Again, why you get some unimportant incompatibility? You offer that solution for current goals not me

>> All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me? << No, I only wanted to know who you will solve it. Now I know you do not have a special solution. No problem.

No problem for me either. So I just answered him

I am not the first, and you are not the first to come up with this idea. So what is your problem? I am pretty sure other people have come up with the same idea before us. I used this technique all the way back to 2007, see http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=93911

Let's see if he returns... He did!

>> So what is your problem? << Nothing Thanks for all replies; maybe we have some competitions in future, maybe. Also I like you but you do not attend it. Your behavior with me is not friendly. Not any meeting… Regards //Peso
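For completeness: on SQL Server 2012 and later, the previous/next-row lookup that every query in this exchange implements can be expressed directly with the LAG and LEAD window functions. A sketch against the same Nums table (and certainly not SQL Server 2000 compatible):

-- SQL Server 2012+ only: previous and next row per nbr via LAG/LEAD
SELECT LAG(nbr)  OVER (ORDER BY nbr) AS pre_nbr,
       LAG(val)  OVER (ORDER BY nbr) AS pre_val,
       nbr,
       val,
       LEAD(nbr) OVER (ORDER BY nbr) AS nxt_nbr,
       LEAD(val) OVER (ORDER BY nbr) AS nxt_val
FROM   dbo.Nums;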

    Read the article

  • MCM Lab exam this week

    - by Rob Farley
    In two days I’ll’ve finished the MCM Lab exam, 88-971. If you do an internet search for 88-971, it’ll tell you the answer is –883. Obviously. It’ll also give you a link to the actual exam page, which is useful too, once you’ve finished being distracted by the calculator instead of going to the thing you’re actually looking for. (Do people actually search the internet for the results of mathematical questions? Really?) The list of Skills Measured for this exam is quite short, but can essentially be broken down into one word “Anything”. The Preparation Materials section is even better. Classroom Training – none available. Microsoft E-Learning – none available. Microsoft Press Books – none available. Practice Tests – none available. But there are links to Readiness Videos and a page which has no resources listed, but tells you a list of people who have already qualified. Three in Australia who have MCM SQL Server 2008 so far. The list doesn’t include some of the latest batch, such as Jason Strate or Tom LaRock. I’ve used SQL Server for almost 15 years. During that time I’ve been awarded SQL Server MVP seven times, but the MVP award doesn’t actually mean all that much when considering this particular certification. I know lots of MVPs who have tried this particular exam and failed – including Jason and Tom. Right now, I have no idea whether I’ll pass or not. People tell me I’ll pass no problem, but I honestly have no idea. There’s something about that “Anything” aspect that worries me. I keep looking at the list of things in the Readiness Videos, and think to myself “I’m comfortable with Resource Governor (or whatever) – that should be fine.” Except that then I feel like I maybe don’t know all the different things that can go wrong with Resource Governor (or whatever), and I wonder what kind of situations I’ll be faced with. And then I find myself looking through the stuff that’s explained in the videos, and wondering what kinds of things I should know that I don’t, and then I get amazingly bored and frustrated (after all, I tell people that these exams aren’t supposed to be studied for – you’ve been studying for the last 15 years, right?), and I figure “What’s the worst that can happen? A fail?” I’m told that the exam provides a list of scenarios (maybe 14 of them?) and you have 5.5 hours to complete them. When I say “complete”, I mean complete – you don’t get to leave them unfinished, that’ll get you ‘nil points’ for that scenario. Apparently no-one gets to complete all of them. Now, I’m a consultant. I get called on to fix the problems that people have on their SQL boxes. Sometimes this involves fixing corruption. Sometimes it’s figuring out some performance problem. Sometimes it’s as straight forward as getting past a full transaction log; sometimes it’s as tricky as recovering a database that has lost its metadata, without backups. Most situations aren’t a problem, but I also have the confidence of being able to do internet searches to verify my maths (in case I forget it’s –883). In the exam, I’ll have maybe twenty minutes per scenario (but if I need longer, I’ll have to take longer – no point in stopping half way if it takes more than twenty minutes, unless I don’t see an end coming up), so I’ll have time constraints too. And of course, I won’t have any of my usual tools. I can’t take scripts in, I can’t take staff members. Hopefully I can use the coffee machine that will be in the room. 
I figure it’s going to feel like one of those days when I’ve gone into a client site, and found that the problems are way worse than I expected, and that the site is down, with people standing over me needing me to get things right first time... ...so it should be fine, I’ve done that before. :) If I do fail, it won’t make me any less of a consultant. It won’t make me any less able to help all of my clients (including you if you get in touch – hehe), it’ll just mean that the particular problem might’ve taken me more than the twenty minutes that the exam gave me. @rob_farley PS: Apparently the done thing is to NOT advertise that you’re sitting the exam at a particular time, only that you’re expecting to take it at some point in the future. I think it’s akin to the idea of not telling people you’re pregnant for the first few months – it’s just in case the worst happens. Personally, I’m happy to tell you all that I’m going to take this exam the day after tomorrow (which is the 19th in the US, the 20th here). If I end up failing, you can all commiserate and tell me that I’m not actually as unqualified as I feel.

    Read the article

  • Branching and Merging Improvements in TFS2010

    - by jehan
    Introducing the concept of “first class branches” is a significant improvement as part of the 2010 release with respect to version control. Not only does it help to distinguish between folders and branches, but it enables branch visualizations. Let us see the improvements in detail.

- In TFS2008, you don’t know which folders are branches: all folders look the same, with the same folder icon. In TFS2010 there is a new icon that shows which folder is a branch.

- There is no visual means to manage branches in TFS2008: you have no way to identify which branches are related or how they are related. In TFS2010 you have visual tools to see the branch hierarchy. In order to see a branch hierarchy, just right-click the branch and choose: Branching and Merging –> View Hierarchy

- In TFS2008, there is no option to track the path of a change between branches: if a merge was made into a branch, you can’t tell from which branch the merge came. In TFS2010 you have tools that show the path of a change between branches, and you can also see where the change was added on a timeline. In order to track a change, do the following:

Step 1: Right-click the branch and click View History.

Step 2: Choose a changeset to track and click the “Track Changeset” button.

Step 3: Choose the branches that will be in the view and click “Visualize”. In the above visual, you can see that changesets 108, 109, 110 and 119 were merged from Main to the Release1.0 branch, and then “Release_1.0” was branched to “Dev1.0”.

Step 4: You can also see the merges on a timeline by clicking the “Timeline Tracking” button.

Creating New Branches: In TFS2010, the creation of branches has been streamlined a bit from the process in 2008. In 2008, creating a new branch was like every other action in the system – changes were pended on the client, and then checked in to the server. Because of this, creating a new branch in TFS2008 was a time-consuming process. In TFS2010, the step where changes are pended has been bypassed, and branch creation is now performed entirely on the server. With this approach, the round trip of downloading a copy of each file on the branch and then uploading each file again has been eliminated. Note: In TFS2010, the new branch will be created and committed as a single operation on the server. Pending changes will not be created and no check-in is required; the branch is created in a single operation which cannot be cancelled.

Manage Branch Permissions: The properties view for branches is also different from that of ordinary folders or files, containing some metadata for the branch, relationship information, and permissions for the branch. In TFS2008, any user with check-out and check-in permissions could create a branch. In TFS2010, you can control the permissions for branches using Manage Branch Permissions.

Reparent option in TFS2010: In TFS2008, if we have two branches which don’t have a parent-child relationship and we want to perform a merge between them, we have to perform a baseless merge using the tf.exe command line. I have two branches, Release_1.0 and Dev1.0_F2, which don’t have any relationship between them; that’s why, when I click the Merge option on Release_1.0, Dev1.0_F2 is not shown as a Target Branch to merge into. Let us see what we can do about this in TFS2010. First, perform a TFS baseless merge to establish a relationship between the parent branch and the child branch.
It will only merge the folder, not its contents. TFS baseless merges are performed from the command line using the VS2010 command prompt; do the following:

tf merge /baseless <ParentBranch> <ChildBranch>

Check in your pending changes. This creates the link between the branches, but the relationship is still not complete. Now, select the child branch in Source Control Explorer and from the File menu choose Source Control –> Branching and Merging –> Reparent. In the dialog box, choose the appropriate branch as the new parent. Click Reparent, then go to the parent branch and click Merge. You will now see that the Dev1.0_F2 branch has been added to the Target Branch options.

Converting Folders to Branches and Branches to Folders: You can convert any folder to a branch from the context menu by right-clicking the folder –> Branching and Merging –> Convert to Branch. In a similar way, you can convert a branch back to a folder using the Convert to Folder option available in the File menu (File –> Source Control –> Branching and Merging –> Convert to Folder). This option is not available in the context menu.
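For reference, the whole sequence from the command line might look like this; the server paths and the comment text are made up for illustration:

rem hypothetical team project paths - substitute your own
tf merge /baseless /recursive $/MyProject/Release_1.0 $/MyProject/Dev1.0_F2
tf checkin /comment:"Baseless merge to link Release_1.0 and Dev1.0_F2"

After the check-in, reparent the child branch in Source Control Explorer as described above.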

    Read the article

  • Manage Your Amazon S3 Account with CloudBerry Explorer

    - by Mysticgeek
    If you have an Amazon S3 account you’re using to back up your data, you might want an easy way to manage it. CloudBerry Explorer is a free app that runs on your desktop and provides an easy way to manage your S3 account.

Installation and Setup

Just download and install the application with the defaults. When the application launches you’ll be prompted to enter your username and email to get a registration key. Or you can continue on by clicking Register later. Now you will want to set up your Amazon S3 account. Click on File \ Amazon S3 Accounts. Double-click on the New Account icon. Next enter your Amazon account Access and Secret keys, select SSL if you want, then click the Test Connection button. Provided everything was entered correctly, you’ll see the Connection Success screen; just close out of it.

Browse and Manage files

Once you have your account set up through the Explorer, you can start viewing and managing your files on S3. The left pane shows your S3 buckets and stored files, while the right side shows your local computer. This allows you to manage the files in your Amazon S3 buckets directly from your desktop! It’s very easy to use, and you can drag and drop files from your computer to the S3 account or vice versa. There is also the ability to transfer files between Amazon S3 accounts from within the explorer. Go into Tools and Content Types and you can control the file types by adding, removing, or editing them. If you end up messing something up along the way, you can always select Reset to defaults and everything will be back to normal. There is a multi-tabbed view so you can easily keep track of your different accounts and local machine. You can also create new storage buckets directly in the Explorer. Or you can delete buckets as well… Different actions can be accessed from the toolbars or by right-clicking and selecting from the context menu. Here we see a cool option that lets you move your data inside Amazon S3. It is faster and cheaper than moving the files to your computer first and then to another account. However, if you want data moved to your local machine first, you have that option as well. Not all features are available in the free version, and if one isn’t, you’ll be prompted to purchase a license for the Pro version. We will have a comprehensive review of the Pro version in the near future. If you ever need help with CloudBerry Explorer, go to Tools \ Diagnostics. It will run a quick diagnostics check and you can send the information to the CloudBerry team for assistance.

Delete Files from Amazon S3

To delete a file from your Amazon S3 account, simply highlight the files or folder you want to get rid of, then click Delete on the toolbar. You can also right-click the file and select Delete from the context menu. Click Yes to the confirmation dialog box… Then you can watch the progress in the bottom section of the explorer as your files are deleted.

Conclusion

The CloudBerry Explorer free version has several neat features that give you easy, basic control over your Amazon S3 account. The free version may be enough for basic users, but power users will want to upgrade to the Pro version, as it includes a lot more features. Using the free version lets you get a feel for what CloudBerry Explorer has to offer, and is a good starting point. Keep in mind that Amazon S3 is introducing Reduced Redundancy Storage, which will lower the price of stored data. The price drops from $0.15 per GB to only $0.10 per GB; for example, 100 GB stored would cost $10.00 a month instead of $15.00.
    If you’re a Windows Home Server user, check out our review of CloudBerry Online Backup 1.5 for WHS. Download CloudBerry Explorer Free for Amazon S3

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__, which expand to the compilation unit’s file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro but an attribute, yet it serves exactly the same purpose. Now we do get

CallerLineNumberAttribute  == __LINE__
CallerFilePathAttribute    == __FILE__
CallerMemberNameAttribute  == __FUNCTION__ (MSVC Extension)

The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string, inserted directly by the C# compiler at compile time.

public string UserName
{
    get { return _userName; }
    set
    {
        _userName = value;
        RaisePropertyChanged(); // no more RaisePropertyChanged(“UserName”)!
    }
}

protected void RaisePropertyChanged([CallerMemberName] string member = "")
{
    var copy = PropertyChanged;
    if (copy != null)
    {
        copy(this, new PropertyChangedEventArgs(member));
    }
}

Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller, along with other stuff, very fast. All infos are added at compile time, which is much faster than other approaches like walking the stack. MSDN shows the usage of these attributes with an example:

public static void TraceMessage(string message,
    [CallerMemberName] string memberName = "",
    [CallerFilePath] string sourceFilePath = "",
    [CallerLineNumber] int sourceLineNumber = 0)
{
    Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
}

When I think of tracing I usually want an API which allows me to

Trace method enter and leave
Trace messages with a severity like Info, Warning, Error

When I print a trace message it is very useful to print out the method and type name as well. So your API must either accept the method and type name as strings or extract them automatically by walking back one stack frame and fetching the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
A usable trace API might therefore look like

enum TraceTypes
{
    None = 0,
    EnterLeave = 1 << 0,
    Info = 1 << 1,
    Warn = 1 << 2,
    Error = 1 << 3
}

class Tracer : IDisposable
{
    string Type;
    string Method;

    public Tracer(string type, string method)
    {
        Type = type;
        Method = method;
        if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
        {
        }
    }

    private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
    {
        // Do checking here if tracing is enabled
        return false;
    }

    public void Info(string fmt, params object[] args) { }
    public void Warn(string fmt, params object[] args) { }
    public void Error(string fmt, params object[] args) { }
    public static void Info(string type, string method, string fmt, params object[] args) { }
    public static void Warn(string type, string method, string fmt, params object[] args) { }
    public static void Error(string type, string method, string fmt, params object[] args) { }

    public void Dispose()
    {
        // trace method leave
    }
}

This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(… string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would need to figure out which arguments are our trace message and its arguments (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether anybody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not about a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params array, another set of optional arguments which are always filled in by the compiler, but I do not know if this is an easy one. The thing I am missing much more is the not-provided CallerType attribute. But not in the way you would think. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should cost no more than an additional method call and a bool variable check to determine whether tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which allows me to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics.

class Tracer<T> : IDisposable
{
    string Method;

    public Tracer(string method)
    {
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public void Dispose()
    {
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public static void Info(string fmt, params object[] args) { }

    /// <summary>
    /// Every type gets its own instance with a fresh set of variables to describe the
    /// current filter status.
    /// </summary>
    /// <typeparam name="UsingType"></typeparam>
    internal class TraceData<UsingType>
    {
        internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

        public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime
        public TraceTypes Enabled = TraceTypes.None; // Enabled trace levels for this type
    }
}

We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a static TraceData instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes, it might be that you cannot configure tracing correctly anymore. I would therefore like to decorate my type with an attribute

[CallerType]
class Tracer<T> : IDisposable

to tell the compiler to fill in the generic type argument automatically.

class Program
{
    static void Main(string[] args)
    {
        using (var t = new Tracer()) // equivalent to new Tracer<Program>()
        {

That would be really useful and super fast, since you do not need to pass any type object around but you do have full type infos at hand. This change would be breaking if another non-generic type of the same name exists in the same namespace, where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces. This would be only a variation of that issue. When you think about this further you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I did write about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.

Reconfigure tracing at run time.
Take a memory dump when a specific method is left with a specific exception.
Throw an exception when a specific trace statement is hit (useful for testing error conditions).
Execute a passed delegate which e.g. dumps additional state when enabled.
Write data to an in-memory ring buffer and dump it when specific events occur (e.g. method is left with an exception, triggered from outside).
Write data to an output device.
….
This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (ok, with .NET 4 not anymore, except if you enable a compatibility flag), where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I do get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further if a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since I do want to do something now. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. But I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.
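To make the compile-time substitution concrete, here is what a call to the MSDN-style TraceMessage above boils down to; the member name, file path and line number are invented for the example:

// what you write:
TraceMessage("Cache rebuilt");
// what the C# 5 compiler emits, as if you had written:
// TraceMessage("Cache rebuilt", "Main", @"c:\demo\Program.cs", 17);

No stack walk happens at run time; the three values are burned into the call site as constants.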

    Read the article

  • Customizing UPK outputs (Part 2 - Player)

    - by [email protected]
    There are a few things that can be done to give the Player output a personalized look to match your corporate branding. In my previous post, I talked about changing the logo. In addition to the logo, you can change the graphic in the heading, button colors, border colors and many other items. Prior to making any customizations, I strongly recommend making a copy of the existing Player style. This will give you a backup in case things go wrong. I'd also recommend that you create your own brand. This way, when you install the newest updates from us, your brand will remain intact. Creating your own brand is pretty easy. Make sure you have modify permissions on the publishing styles directory, if you are using a multi-user installation. Under the Publishing/Styles folder, create a new folder with your company name. Copy all the publishing styles from the UPK folder to your newly created folder. Now, when you go through the Publishing wizard, you will have two categories to choose from: the UPK category or your custom category. Now, for updating the Player output. First, the graphic that appears on the right hand side of the Player. If you're using a multi-user installation, check out the player style from your custom brand. Open the player style. Open the img folder. The file named "banner_image.png" represents the graphic that appears on the right hand side of the player. It is currently sized at 425 x 54. Try to keep your graphic about the same size. Rename your graphic file to be "banner_image.png", and drag it into the img folder. Save the package. Check in the package if you are in a multi-user installation. You've just updated the banner heading! Next, let's work on updating some of the other colors in the player. All the customizable areas are located in the skin.css file which is in the root of the Player style. Many of our customers update the colors to match their own theme. You don't have to be a programmer to make these changes, honest. :) To change the colors in the player: Make a copy of the original skin.css file. (This is to make sure you have a working version to revert to, in case something goes wrong.) Open the skin.css file from the Player package. You can edit it using Notepad. Make the desired changes. Save the file. Save the package. Publish to view your new changes. When you open the skin.css, you will see groupings like this: .headerDivbar { height: 21px; background-color: #CDE2FD; } Change the value of the background-color to the color of your choice. Note that you cannot use "red" as a color, but rather you should enter the hexadecimal color code. If you don't know the color code, search the web for "hexadecimal colors" and you'll find many sites to provide the information. Here are a few of the variables that you can update. Heading: .headerDivbar -this changes the color of the banner that appears under the graphic Button colors: .navCellOn - changes the color of the mode buttons when your mouse is hovering on them. .navCellOff - changes the color of the mode buttons when the mouse is not over them Lines: .thorizontal - this is the color of the horizontal lines surrounding the outline .tvertical - this is the color of the vertical lines on the left and right margin in the outline. .tsep - this is the color of the line that separates the outline from the content area Search frame: .tocSearchColor - this is the color of the search area .tocFrameText - this is the background color of the TOC tree. 
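To make the edit concrete, a recolored skin.css fragment might look like the following. The hex values below are placeholders for your own palette, and the exact property each class uses may vary, so check your skin.css; this sketch assumes they take a background-color just like .headerDivbar does:

/* hypothetical corporate colors - substitute your own hex codes */
.headerDivbar { height: 21px; background-color: #1F4E79; } /* banner under the graphic */
.navCellOn    { background-color: #2E75B5; } /* mode buttons on mouse hover */
.navCellOff   { background-color: #D6E4F0; } /* mode buttons otherwise */
.tsep         { background-color: #1F4E79; } /* line between outline and content */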
Hint: If you want to try out the changes prior to updating the style, you can update the skin.css in some content you've already published for the player (it's located in the css folder of the player package). This way, you can immediately see the changes without going through publishing. Once you're happy with the changes, update the skin.css in player style. Want to customize more? Refer to the "Customizing the Player" section of the Content Development manual for more details on all the options in the skin.css that can be changed, and pictures of what each variable controls. I'd love to see how you've customized the player for your corporate needs. Also, if there are other areas of the player you'd like to modify but have not been able to, let us know. Feel free to share your thoughts in the comments. --Maria Cozzolino, Manager of Requirements & UI Design for UPK

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
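For reference, here are the commands I'm keeping at hand while narrowing this down; the sysctl value is illustrative, and raising the limit is only a stopgap, not a fix:

# allocated handles, free handles, maximum
cat /proc/sys/fs/file-nr
# rough per-process handle counts, highest first
lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head
# temporary stopgap while the leak is hunted down (value illustrative)
sysctl -w fs.file-max=2500000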

    Read the article

  • Creating a thematic map

    - by jsharma
    This post describes how to create a simple thematic map, just a state population layer, with no underlying map tile layer. The map shows states color-coded by total population. The map is interactive with info-windows and can be panned and zoomed. The sample code demonstrates the following: Displaying an interactive vector layer with no background map tile layer (i.e. purpose and use of the Universe object) Using a dynamic (i.e. defined via the javascript client API) color bucket style Dynamically changing a layer's rendering style Specifying which attribute value to use in determining the bucket, and hence style, for a feature (FoI) The result is shown in the screenshot below. The states layer was defined, and stored in the user_sdo_themes view of the mvdemo schema, using MapBuilder. The underlying table is defined as SQL> desc states_32775  Name                                      Null?    Type ----------------------------------------- -------- ----------------------------  STATE                                              VARCHAR2(26)  STATE_ABRV                                         VARCHAR2(2) FIPSST                                             VARCHAR2(2) TOTPOP                                             NUMBER PCTSMPLD                                           NUMBER LANDSQMI                                           NUMBER POPPSQMI                                           NUMBER ... MEDHHINC NUMBER AVGHHINC NUMBER GEOM32775 MDSYS.SDO_GEOMETRY We'll use the TOTPOP column value in the advanced (color bucket) style for rendering the states layers. The predefined theme (US_STATES_BI) is defined as follows. SQL> select styling_rules from user_sdo_themes where name='US_STATES_BI'; STYLING_RULES -------------------------------------------------------------------------------- <?xml version="1.0" standalone="yes"?> <styling_rules highlight_style="C.CB_QUAL_8_CLASS_DARK2_1"> <hidden_info> <field column="STATE" name="Name"/> <field column="POPPSQMI" name="POPPSQMI"/> <field column="TOTPOP" name="TOTPOP"/> </hidden_info> <rule column="TOTPOP"> <features style="states_totpop"> </features> <label column="STATE_ABRV" style="T.BLUE_SERIF_10"> 1 </label> </rule> </styling_rules> SQL> The theme definition specifies that the state, poppsqmi, totpop, state_abrv, and geom columns will be queried from the states_32775 table. The state_abrv value will be used to label the state while the totpop value will be used to determine the color-fill from those defined in the states_totpop advanced style. The states_totpop style, which we will not use in our demo, is defined as shown below. SQL> select definition from user_sdo_styles where name='STATES_TOTPOP'; DEFINITION -------------------------------------------------------------------------------- <?xml version="1.0" ?> <AdvancedStyle> <BucketStyle> <Buckets default_style="C.S02_COUNTRY_AREA"> <RangedBucket seq="0" label="10K - 5M" low="10000" high="5000000" style="C.SEQ6_01" /> <RangedBucket seq="1" label="5M - 12M" low="5000001" high="1.2E7" style="C.SEQ6_02" /> <RangedBucket seq="2" label="12M - 20M" low="1.2000001E7" high="2.0E7" style="C.SEQ6_04" /> <RangedBucket seq="3" label="&gt; 20M" low="2.0000001E7" high="5.0E7" style="C.SEQ6_05" /> </Buckets> </BucketStyle> </AdvancedStyle> SQL> The demo defines additional advanced styles via the OM.style object and methods and uses those instead when rendering the states layer.   Now let's look at relevant snippets of code that defines the map extent and zoom levels (i.e. 
the OM.universe),  loads the states predefined vector layer (OM.layer), and sets up the advanced (color bucket) style. Defining the map extent and zoom levels. function initMap() {   //alert("Initialize map view");     // define the map extent and number of zoom levels.   // The Universe object is similar to the map tile layer configuration   // It defines the map extent, number of zoom levels, and spatial reference system   // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined   // The Universe must be defined when there is no underlying map tile layer.   // When there is a map tile layer then that defines the map extent, srid, and zoom levels.      var uni= new OM.universe.Universe(     {         srid : 32775,         bounds : new OM.geometry.Rectangle(                         -3280000, 170000, 2300000, 3200000, 32775),         numberOfZoomLevels: 8     }); The srid specifies the spatial reference system which is Equal-Area Projection (United States). SQL> select cs_name from cs_srs where srid=32775 ; CS_NAME --------------------------------------------------- Equal-Area Projection (United States) The bounds defines the map extent. It is a Rectangle defined using the lower-left and upper-right coordinates and srid. Loading and displaying the states layer This is done in the states() function. The full code is at the end of this post, however here's the snippet which defines the states VectorLayer.     // States is a predefined layer in user_sdo_themes     var  layer2 = new OM.layer.VectorLayer("vLayer2",     {         def:         {             type:OM.layer.VectorLayer.TYPE_PREDEFINED,             dataSource:"mvdemo",             theme:"us_states_bi",             url: baseURL,             loadOnDemand: false         },         boundingTheme:true      }); The first parameter is a layer name, the second is an object literal for a layer config. The config object has two attributes: the first is the layer definition, the second specifies whether the layer is a bounding one (i.e. used to determine the current map zoom and center such that the whole layer is displayed within the map window) or not. The layer config has the following attributes: type - specifies whether is a predefined one, a defined via a SQL query (JDBC), or in a json-format file (DATAPACK) theme - is the predefined theme's name url - is the location of the mapviewer server loadOnDemand - specifies whether to load all the features or just those that lie within the current map window and load additional ones as needed on a pan or zoom The code snippet below dynamically defines an advanced style and then uses it, instead of the 'states_totpop' style, when rendering the states layer. // override predefined rendering style with programmatic one    var theRenderingStyle =      createBucketColorStyle('YlBr5', colorSeries, 'States5', true);   // specify which attribute is used in determining the bucket (i.e. color) to use for the state   // It can be an array because the style could be a chart type (pie/bar)   // which requires multiple attribute columns     // Use the STATE.TOTPOP column (aka attribute) value here    layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); The style itself is defined in the createBucketColorStyle() function. Dynamically defining an advanced style The advanced style used here is a bucket color style, i.e. a color style is associated with each bucket. So first we define the colors and then the buckets.     
numClasses = colorSeries[colorName].classes;    // create Color Styles    for (var i=0; i < numClasses; i++)    {         theStyles[i] = new OM.style.Color(                      {fill: colorSeries[colorName].fill[i],                        stroke:colorSeries[colorName].stroke[i],                       strokeOpacity: useGradient? 0.25 : 1                      });    }; numClasses is the number of buckets. The colorSeries array contains the color fill and stroke definitions and is: var colorSeries = { //multi-hue color scheme #10 YlBl. "YlBl3": {   classes:3,                  fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8],                  stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6]   }, "YlBl5": {   classes:5,                  fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494],                  stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85]   }, //multi-hue color scheme #11 YlBr.  "YlBr3": {classes:3,                  fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E],                  stroke:[0xE6DEA9, 0xE5B047, 0xC5360D]   }, "YlBr5": {classes:5,                  fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404],                  stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04]     }, etc. Next we create the bucket style.    bucketStyleDef = {       numClasses : colorSeries[colorName].classes, //      classification: 'custom',  //since we are supplying all the buckets //      buckets: theBuckets,       classification: 'logarithmic',  // use a logarithmic scale       styles: theStyles,       gradient:  useGradient? 'linear' : 'off' //      gradient:  useGradient? 'radial' : 'off'     };    theBucketStyle = new OM.style.BucketStyle(bucketStyleDef);    return theBucketStyle; A BucketStyle constructor takes a style definition as input. The style definition specifies the number of buckets (numClasses), a classification scheme (which can be equal-ranged, logarithmic scale, or custom), the styles for each bucket, whether to use a gradient effect, and optionally the buckets (required when using a custom classification scheme). The full source for the demo <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Oracle Maps V2 Thematic Map Demo</title> <script src="http://localhost:8080/mapviewer/jslib/v2/oraclemapsv2.js" type="text/javascript"> </script> <script type="text/javascript"> //var $j = jQuery.noConflict(); var baseURL="http://localhost:8080/mapviewer"; // location of mapviewer OM.gv.proxyEnabled =false; // no mvproxy needed OM.gv.setResourcePath(baseURL+"/jslib/v2/images/"); // location of resources for UI elements like nav panel buttons var map = null; // the client mapviewer object var statesLayer = null, stateCountyLayer = null; // The vector layers for states and counties in a state var layerName="States"; // initial map center and zoom var mapCenterLon = -20000; var mapCenterLat = 1750000; var mapZoom = 2; var mpoint = new OM.geometry.Point(mapCenterLon,mapCenterLat,32775); var currentPalette = null, currentStyle=null; // set an onchange listener for the color palette select list // initialize the map // load and display the states layer $(document).ready( function() { $("#demo-htmlselect").change(function() { var theColorScheme = $(this).val(); useSelectedColorScheme(theColorScheme); }); initMap(); states(); } ); /** * color series from ColorBrewer site (http://colorbrewer2.org/). */ var colorSeries = { //multi-hue color scheme #10 YlBl. 
"YlBl3": { classes:3, fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8], stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6] }, "YlBl5": { classes:5, fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494], stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85] }, //multi-hue color scheme #11 YlBr. "YlBr3": {classes:3, fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E], stroke:[0xE6DEA9, 0xE5B047, 0xC5360D] }, "YlBr5": {classes:5, fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404], stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04] }, // single-hue color schemes (blues, greens, greys, oranges, reds, purples) "Purples5": {classes:5, fill:[0xf2f0f7, 0xcbc9e2, 0x9e9ac8, 0x756bb1, 0x54278f], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Blues5": {classes:5, fill:[0xEFF3FF, 0xbdd7e7, 0x68aed6, 0x3182bd, 0x18519C], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greens5": {classes:5, fill:[0xedf8e9, 0xbae4b3, 0x74c476, 0x31a354, 0x116d2c], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greys5": {classes:5, fill:[0xf7f7f7, 0xcccccc, 0x969696, 0x636363, 0x454545], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Oranges5": {classes:5, fill:[0xfeedde, 0xfdb385, 0xfd8d3c, 0xe6550d, 0xa63603], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Reds5": {classes:5, fill:[0xfee5d9, 0xfcae91, 0xfb6a4a, 0xde2d26, 0xa50f15], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] } }; function createBucketColorStyle( colorName, colorSeries, rangeName, useGradient) { var theBucketStyle; var bucketStyleDef; var theStyles = []; var theColors = []; var aBucket, aStyle, aColor, aRange; var numClasses ; numClasses = colorSeries[colorName].classes; // create Color Styles for (var i=0; i < numClasses; i++) { theStyles[i] = new OM.style.Color( {fill: colorSeries[colorName].fill[i], stroke:colorSeries[colorName].stroke[i], strokeOpacity: useGradient? 0.25 : 1 }); }; bucketStyleDef = { numClasses : colorSeries[colorName].classes, // classification: 'custom', //since we are supplying all the buckets // buckets: theBuckets, classification: 'logarithmic', // use a logarithmic scale styles: theStyles, gradient: useGradient? 'linear' : 'off' // gradient: useGradient? 'radial' : 'off' }; theBucketStyle = new OM.style.BucketStyle(bucketStyleDef); return theBucketStyle; } function initMap() { //alert("Initialize map view"); // define the map extent and number of zoom levels. // The Universe object is similar to the map tile layer configuration // It defines the map extent, number of zoom levels, and spatial reference system // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined // The Universe must be defined when there is no underlying map tile layer. // When there is a map tile layer then that defines the map extent, srid, and zoom levels. 
var uni= new OM.universe.Universe( { srid : 32775, bounds : new OM.geometry.Rectangle( -3280000, 170000, 2300000, 3200000, 32775), numberOfZoomLevels: 8 }); map = new OM.Map( document.getElementById('map'), { mapviewerURL: baseURL, universe:uni }) ; var navigationPanelBar = new OM.control.NavigationPanelBar(); map.addMapDecoration(navigationPanelBar); } // end initMap function states() { //alert("Load and display states"); layerName = "States"; if(statesLayer) { // states were already visible but the style may have changed // so set the style to the currently selected one var theData = $('#demo-htmlselect').val(); setStyle(theData); } else { // States is a predefined layer in user_sdo_themes var layer2 = new OM.layer.VectorLayer("vLayer2", { def: { type:OM.layer.VectorLayer.TYPE_PREDEFINED, dataSource:"mvdemo", theme:"us_states_bi", url: baseURL, loadOnDemand: false }, boundingTheme:true }); // add drop shadow effect and hover style var shadowFilter = new OM.visualfilter.DropShadow({opacity:0.5, color:"#000000", offset:6, radius:10}); var hoverStyle = new OM.style.Color( {stroke:"#838383", strokeThickness:2}); layer2.setHoverStyle(hoverStyle); layer2.setHoverVisualFilter(shadowFilter); layer2.enableFeatureHover(true); layer2.enableFeatureSelection(false); layer2.setLabelsVisible(true); // override predefined rendering style with programmatic one var theRenderingStyle = createBucketColorStyle('YlBr5', colorSeries, 'States5', true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state // It can be an array because the style could be a chart type (pie/bar) // which requires multiple attribute columns // Use the STATE.TOTPOP column (aka attribute) value here layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); currentPalette = "YlBr5"; var stLayerIdx = map.addLayer(layer2); //alert('State Layer Idx = ' + stLayerIdx); map.setMapCenter(mpoint); map.setMapZoomLevel(mapZoom) ; // display the map map.init() ; statesLayer=layer2; // add rt-click event listener to show counties for the state layer2.addListener(OM.event.MouseEvent.MOUSE_RIGHT_CLICK,stateRtClick); } // end if } // end states function setStyle(styleName) { // alert("Selected Style = " + styleName); // there may be a counties layer also displayed. // that wll have different bucket ranges so create // one style for states and one for counties var newRenderingStyle = null; if (layerName === "States") { if(/3/.test(styleName)) { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States3', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties3', false); } else { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States5', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties5', false); } statesLayer.setRenderingStyle(newRenderingStyle, ["TOTPOP"]); if (stateCountyLayer) stateCountyLayer.setRenderingStyle(currentStyle, ["TOTPOP"]); } } // end setStyle function stateRtClick(evt){ var foi = evt.feature; //alert('Rt-Click on State: ' + foi.attributes['_label_'] + // ' with pop ' + foi.attributes['TOTPOP']); // display another layer with counties info // layer may change on each rt-click so create and add each time. 
var countyByState = null ; // the _label_ attribute of a feature in this case is the state abbreviation // we will use that to query and get the counties for a state var sqlText = "select totpop,geom32775 from counties_32775_moved where state_abrv="+ "'"+foi.getAttributeValue('_label_')+"'"; // alert(sqlText); if (currentStyle === null) currentStyle = createBucketColorStyle('YlBr5', colorSeries, 'Counties5', false); /* try a simple style instead new OM.style.ColorStyle( { stroke: "#B8F4FF", fill: "#18E5F4", fillOpacity:0 } ); */ // remove existing layer if any if(stateCountyLayer) map.removeLayer(stateCountyLayer); countyByState = new OM.layer.VectorLayer("stCountyLayer", {def:{type:OM.layer.VectorLayer.TYPE_JDBC, dataSource:"mvdemo", sql:sqlText, url:baseURL}}); // url:baseURL}, // renderingStyle:currentStyle}); countyByState.setVisible(true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state countyByState.setRenderingStyle(currentStyle, ["TOTPOP"]); var ctLayerIdx = map.addLayer(countyByState); // alert('County Layer Idx = ' + ctLayerIdx); //map.addLayer(countyByState); stateCountyLayer = countyByState; } // end stateRtClick function useSelectedColorScheme(theColorScheme) { if(map) { // code to update renderStyle goes here //alert('will try to change render style'); setStyle(theColorScheme); } else { // do nothing } } </script> </head> <body bgcolor="#b4c5cc" style="height:100%;font-family:Arial,Helvetica,Verdana"> <h3 align="center">State population thematic map </h3> <div id="demo" style="position:absolute; left:68%; top:44px; width:28%; height:100%"> <HR/> <p/> Choose Color Scheme: <select id="demo-htmlselect"> <option value="YlBl3"> YellowBlue3</option> <option value="YlBr3"> YellowBrown3</option> <option value="YlBl5"> YellowBlue5</option> <option value="YlBr5" selected="selected"> YellowBrown5</option> <option value="Blues5"> Blues</option> <option value="Greens5"> Greens</option> <option value="Greys5"> Greys</option> <option value="Oranges5"> Oranges</option> <option value="Purples5"> Purples</option> <option value="Reds5"> Reds</option> </select> <p/> </div> <div id="map" style="position:absolute; left:10px; top:50px; width:65%; height:75%; background-color:#778f99"></div> <div style="position:absolute;top:85%; left:10px;width:98%" class="noprint"> <HR/> <p> Note: This demo uses HTML5 Canvas and requires IE9+, Firefox 10+, or Chrome. No map will show up in IE8 or earlier. </p> </div> </body> </html>

    Read the article

  • Walmart and Fusion Apps

    - by ultan o'broin
Photograph: Misha Vaughan

I attended Fusion Apps (yes, I know I am supposed to say "Oracle Fusion Applications", but stuffy old style guides are a turn-off in interwebs conversations) User Experience Advocate (FXA) training in Long Beach, California last week; a suitable location, as ODTUG KSCOPE 11 was kicking off and key players were in the area. As a member of Oracle's Apps-UX team I know the Fusion Apps messaging, natch, and have done some other Fusion Apps go-to-market content work too. For the messaging details themselves, see Lonneke Dikmans's (@lonnekedikmans) great blog, by the way. However, I wanted some 'formal' training combined with the opportunity to meet and learn from people already out there delivering those messages. The idea behind my reaching out to Misha Vaughan, Apps-UX FXA maven, to get me onto this training was that in addition to my UX knowledge, I could leverage my location in EMEA and hit up customer events more quickly and easily. Those local user groups do like to hear the voice of locals too, you know (so I need to work on that mid-Atlantic accent). I'm looking forward to such opportunities.

The training was all smashing stuff, just the right level of detail, delivered professionally and with great style and humor. I was especially honored to be paired off for my er, coaching with Debra Lilley (@debralilley), who shared with everyone all kinds of tips and insights from her experiences of delivering the message and demo. For me, that was the real power of the FXA event--the communal, conversational aspect--the meeting up with people who had done all this for real, the sharing in their experiences, while learning along with other newbies. Sorry, but that all-important social aspect doesn't work so well with remote meetings.

Katie Candland (Apps-UX) gave us a great tour of the Fusion Apps demo and included some useful presentational tips too (any excuse to buy that iPad). It's clear to me that the Fusion Apps messaging and demos really come alive with real-world examples that local application users will recognize, and I picked up some "yes, that's my job made easier" scene-stealers from Debra and Karen Brownfield too, to add to the great ones already provided. This power of examples shouldn't surprise anyone; they've long been a mainstay of applications user assistance, popular with users. We'll offer customers different types of example topics in the Fusion Apps online help too (stay tuned), and we know from research how important those 3S's (stories, scenarios, and simulations) are to users when they consume and apply information. Well, we've got the simulation; now it's time for more stories and scenarios.

If you get a chance to participate in an FXA event (whether you are an Oracle employee or otherwise), I'd encourage it. It's committing your time and energy for sure, but I got real bang for the buck from it for my everyday job too. Listening to the room's feedback on the application demo really brought our internal design work to life, and I picked up on some things that I need to follow up on (like how you alphabetically sort stuff in other languages). User experience is, after all, about users.

What will I be doing next, and what would I like to see happen? Obviously, I need to develop my story-telling links with the people I met in Long Beach and do some practicing with the materials, and then get out there and deliver them at a suitable location. The demo is what it is right now, and that's a super-rich demo that I know everyone will want to see and ask questions about. Then, as mentioned by attendees at the FXA event, follow up on those translated and localized messages for EMEA (and APAC) that deal with different statutory or reporting requirements of the target markets. Given my background I would say that, wouldn't I? However, language is part of the UX, and international revenue is greater than US-only revenue for Oracle, so yes dear, we all need to get over the fact that enterprise apps users don't all speak, or want to speak, American-English. Most importantly perhaps, I'd like to see the continued development of a strong messaging community between Oracle and partners and customers, where we can swap and share those FXA messaging stories and scenarios about Fusion Apps in a conversational way. The more the better: a combination of online and face-to-face meetings.

I must also mention the great dinner after the event at Parker's Lighthouse, and the fun Andrew Gilmour (Apps-UX) and I had at our end of the table talking about just about everything except Fusion Apps with Ronald Van Luttikhuizen and Ben Prusinski (who now understands the difference between Cork and Dublin people. I hope). Thanks to all the Apps-UXers who helped bring the FXA training to town, and to Debra and all the others, whom I am too jetlagged to mention right now, who were instrumental in making it happen for me. Here's to the next one.

And the Walmart angle? That was me doing my Robert Scoble (ScO'bilizer?)-style guerrilla smart phone research in Walmart in Long Beach, before the FXA event. It's all about stories for me. You can read more about it on the appslab blog (see the comments).

    Read the article

  • Reading train stop display names from a resource bundle

    - by Frank Nimphius
In Oracle JDeveloper 11g R1, you set the display name of a train stop of an ADF bounded task flow train model by using the Oracle JDeveloper Structure Window. To do so:

1. Double-click the bounded task flow configuration file (XML) located in the Application Navigator so the task flow diagram opens.
2. In the task flow diagram, select the view activity node for which you want to define the display name.
3. In the Structure Window, expand the view activity node and then the train-stop node within it.
4. Add the display name element by using the right-click context menu on the train-stop node, selecting Insert inside train-stop > Display Name.
5. Edit the Display Name value with the Property Inspector.

Following the steps outlined above, you can define static display names – like "PF1" for page fragment 1 shown in the image below – for train stops to show at runtime. In the following, I explain how you can change the static display string to a dynamic string that reads the display label from a resource bundle, so train stop labels can be internationalized.

There are different strategies available for managing message bundles within an Oracle JDeveloper project. In this blog entry, I decided to build and configure the default properties file as indicated by the project's properties. To learn about the suggested file name and location, open the JDeveloper project properties (use a right mouse click on the project node in the Application Navigator and choose Project Properties). Select the Resource Bundle node to see the suggested name and location for the default message bundle. Note that this is the resource bundle that Oracle JDeveloper would automatically create when you assign a text resource to an ADF Faces component in a page.

For the train stop display name, we need to create the message bundle manually, as there is no context menu help available in Oracle JDeveloper. For this, use a right mouse click on the JDeveloper project and choose New | General | File from the menu. In the dialog that opens, specify the message bundle file name as the name looked up before in the project properties Resource Bundle option. Also, ensure that the file is saved in a directory structure that matches the package structure shown in the Resource Bundle dialog. For example, you would save the properties file in the View project's src > adf > sample directory if the package structure was "adf.sample" (adf.sample.ViewControllerBundle).

Edit the properties file and define key-value pairs for the train stop component. In the sample, these key-value pairs are:

    TrainStop1=Train Stop 1
    TrainStop2=Train Stop 2
    TrainStop3=Train Stop 3

Next, double-click the faces-config.xml file and switch the opened editor to the Overview tab. Select the Application category and press the green plus icon next to the Resource Bundle section. Define the resource bundle Base Name as the package and properties file name, for example adf.sample.ViewControllerBundle. Finally, define a variable name for the message bundle so the bundle can be accessed from Expression Language. For this blog example, the name is chosen as "messageBundle".

    <resource-bundle>
      <base-name>adf.sample.ViewControllerBundle</base-name>
      <var>messageBundle</var>
    </resource-bundle>

Next, select the display-name element in the train stop node (similar to when creating the display name) and use the Property Inspector to change the static display string to an EL expression referencing the message bundle. For example: #{messageBundle.TrainStop1}

At runtime, the train stops now show display names read from a message bundle (the properties file).
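For reference, here is a minimal sketch of how the train-stop entry in the task flow XML might end up looking after this change. The train-stop and display-name nesting follows the Structure Window steps above; the surrounding view element and the page path are illustrative, not copied from the article:

    <view id="ViewActivity1">
      <page>/fragment1.jsff</page>
      <train-stop>
        <!-- display label resolved from adf.sample.ViewControllerBundle at runtime -->
        <display-name>#{messageBundle.TrainStop1}</display-name>
      </train-stop>
    </view>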

    Read the article

  • The Sound of Two Toilets Flushing: Constructive Criticism for Virgin Atlantic Complaints Department

    - by Geertjan
I recently had the experience of flying from London to Johannesburg and back with Virgin Atlantic. The good news was that it was the cheapest flight available and that the take-off and landing were absolutely perfect. Hence I really have no reason to complain. Instead, I'd like to offer some constructive criticism which hopefully Richard Branson will find sometime while googling his name. Or maybe someone from the Virgin Atlantic Complaints Department will find it; whatever, I just want to put this information out there.

Arrangement of restroom facilities. Maybe next time you design an airplane, consider not putting your toilets at a right angle right next to your rows of seats. Being able to reach, without even needing to stretch your arm, from your seat to close, yet again, a toilet door that someone, someone obviously sitting very far from the toilets, carelessly forgot to close is not an indicator of quality interior design. Have you noticed how all other airplanes have their toilets in a cubicle separated from the rows of seats? On those airplanes, people sitting in the seats near the toilets are not constantly being woken up throughout the night whenever someone enters/exits the toilet, whenever the light in the toilet is suddenly switched on, and whenever one of the toilets flushes. Bonus points for Virgin Atlantic passengers in the seats adjoining the toilets is when multiple toilets are flushed simultaneously and multiple passengers enter/exit them at the same time, a bit like an unasked-for low-budget musical of suddenly illuminated grumpy people in crumpled clothes. What joy that brings at 3 AM is hard to describe.

Seats with extra leg room. You know how other airplanes have the seats with the extra leg room? You know what those seats tend to have? Extra leg room. It's really interesting how Virgin Atlantic's seats with extra leg room actually have no extra leg room at all. It should have been a giveaway, the fact that these special seats are found in the same rows as the standard seats, rather than on the cusp of real glory which is where most airlines put their extra leg room seats, with the only actual difference being that they have a slightly different color. Had you called them "seats with a different color" (i.e., almost not quite green, rather than something vaguely hinting at blue), at least I'd have known what I was getting. Picture the joy at 3 AM, rudely awakened from nightmarish slumber, partly grateful to have been released from a grayish dream of faceless zombies resembling one or two of those in a recent toilet line, by multiple adjoining toilets flushing simultaneously, while you're sitting in a seat with extra leg room that has exactly as much leg room as the seats in neighboring rows. You then have a choice of things to be sincerely annoyed about.

Food from the '80's. In the '80's, airplane food came in soggy containers and even breakfast, the most important meal of the day, was a sad heap of vaguely gray colors. The culinary highlight tended to be a squashed tomato, which must have been mashed to a pulp with a brick prior to being regurgitated by a small furry animal, and there was also always a piece of immensely horrid pumpkin, as well as a slice of spongy something you'd never seen before. Sausages and mash at 6 AM on an airplane was always a heavy lump of horribleness. Thankfully, all airlines throughout the world changed from this puke-inducing strategy around 1987 sometime. Not Virgin Atlantic, of course. The fatty sausages and mash are still there, bringing you flashbacks to Duran Duran, which is what you were listening to (on your Walkman) the last time you saw it in an airplane. Even the golden oldie "squashed tomato attached by slime to three wet peas" is on the menu. How wonderful to have all this in a cramped seat with a long row of early morning bleariness lined up for the toilets, right at your side, bumping into your elbow, groggily, one by one, one after another, more and more, fumble-open-door-silence-flush-fumble-open-door, and on and on, while you tentatively push your fork through a soggy pile of colorless mush, fighting the urge to throw up on the stinky socks of whatever nightmarish zombie is bumping into your elbow at the time.

But, then again, the plane landed without a hitch, in fact, extremely smoothly, so I'm certainly not blaming the pilots.

    Read the article

  • Oracle Romania Summer School

    - by Maria Sandu
What would you say about a Summer School within a corporation where you can learn, play and practice? You might think that this is something usually uncommon for a company, and you would be right. However, with Oracle's main value being innovation, we came up with a new project for Romanian students and graduates. We organised the Oracle Summer School, offering them the opportunity to develop their soft skills and gain valuable business knowledge and exposure.

How was the Oracle Summer School programme organised? We focused on students' and graduates' needs and combined business experience with training and practice. The twenty-four participants had different backgrounds, being interested in Software, Hardware, Finance, Marketing or other areas. The programme fulfilled each of these needs, bringing them in contact with Specialists and Managers. The first two weeks were dedicated to company visits, business presentations and networking. The participants got an insight into employees' activities and projects. Storytelling was also part of the programme, and people from different departments spent a couple of hours with the participants, sharing their experiences, knowledge and interesting stories. The Recruitment team delivered a training about job interview skills in order to make the participants feel better prepared for a recruitment process.

The second module consisted of two weeks of soft skills trainings delivered by professional trainers from different departments. The participants gained useful insight into the competencies required within a business environment. The evenings were dedicated to social activities, and it was not very long until they started to feel part of a team. The third module will take place at the end of September and will put the participants in contact with senior people from the business who will become their Mentors.

What do the participants say about Oracle Summer School?

"As a fresh computer science graduate, Oracle Summer School gave me the opportunity of finding what are the technical and nontechnical skills required in a large multinational company. It was a great way of seeing how the theoretical knowledge I received during college is applied in real-life scenarios and what skills I still need to develop." (Cosmin Radu)

"When arriving at Oracle I had high expectations, but did not know exactly what was going to unfold because of the program's lack of precedence. Right after the first day, my feedback outgrew the initial forecast and the following weeks continued to build upon it. I had the pleasure to acquaint with brilliant people. The program was outlined on various profiles, delivering a comprehensive experience. It was very engaging, informative and nevertheless fun." (Vlad Manciu)

"Oracle Summer School is by far the best summer school that I have ever attended. For me it has been a great experience so far, because I've learned not only how to use soft skills in a corporate environment, but I've learned a great deal about myself as well. However, the most valuable asset of this 3-week period were the people that I've met: great individuals and great professionals, whom I really grew fond of." (Alexandru Purcarea)

"Applying to Oracle Summer School has been the best decision I took in regard to how to spend my summer holiday. I had the chance to do job shadowing at some of the departments I was interested in, and I attended great trainings on various subjects such as time management and emotional intelligence. Moreover, I made friends with the other participants and we enjoyed going out together after "classes"." (Andreea Tudor)

If you are interested in joining our team and attending our events please follow us on https://campus.oracle.com/campus/HR/emea_main.html

    Read the article

  • Redirecting to a dynamic page

    - by binarydev
I have a page displaying blog posts (latest_posts.php) and another page that displays single blog posts (blog.php). I intend to link the image title in latest_posts.php so that it redirects to blog.php, where it would display the particular post that was clicked.

latest_posts.php:

    <!-- Header -->
    <h2 class="underline">
      <span>What&#039;s new</span>
      <span></span>
    </h2>
    <!-- /Header -->

    <!-- Posts list -->
    <ul class="post-list post-list-1">
    <?php
    /* Fetches date/time, post content and title */
    include 'dbconnect.php';
    $sql = "SELECT * FROM wp_posts";
    $res = mysql_query($sql);
    while ($row = mysql_fetch_array($res)) {
    ?>
      <li class="clear-fix">
        <!-- Date -->
        <div class="post-list-date">
          <div class="post-date-box">
            <?php
            // timestamp broken down to show day and month separately
            $timestamp = $row['post_date'];
            $datetime  = new DateTime($timestamp);
            $date      = $datetime->format("d");
            $month     = $datetime->format("M");
            ?>
            <h3><?php echo $date; ?></h3>
            <span><?php echo $month; ?></span>
          </div>
        </div>
        <!-- /Date -->

        <!-- Image + comments count -->
        <div class="post-list-image">
          <div class="image image-overlay-url image-fancybox-url">
            <a href="blog.php?ID=<?php echo $row['ID']; ?>" class="preloader-image">
              <?php echo '<img src="', $row['image'], '" alt="', $row['post_title'], '\'s Blog Image" />'; ?>
            </a>
          </div>
        </div>
        <!-- /Image + comments count -->

        <!-- Content -->
        <div class="post-list-content">
          <div>
            <!-- Header: this was originally <a href="post.php? . $row['ID'] . ">,
                 a PHP string concatenation pasted into plain HTML, which the
                 server never evaluates; the ID has to be echoed from PHP -->
            <h4>
              <a href="blog.php?ID=<?php echo $row['ID']; ?>">
                <?php echo $row['post_title']; ?>
              </a>
            </h4>
            <!-- Excerpt -->
            <p><?php echo $row['post_content']; ?></p>
          </div>
        </div>
        <!-- /Content -->
      </li>
    <?php } // close while loop ?>
    </ul>
    <!-- /Posts list -->

    <a href="blog.php" class="button-browse">Browse All Posts</a>
    </div>
    <?php require_once('include/twitter_user_timeline.php'); ?>

blog.php:

    <?php require_once('include/header.php'); ?>
    <body class="blog">
    <?php require_once('include/navigation_bar_blog.php'); ?>
    <div class="blog">
      <div class="main">
        <h2 class="underline">
          <span>What&#039;s new</span>
          <span></span>
        </h2>

        <div class="layout-p-66x33 clear-fix">
          <!-- Posts list -->
          <ul class="post-list post-list-2">
          <?php
          /* Fetches date/time, post content and title, with pagination */
          include 'dbconnect.php';

          // default to the first page
          if (empty($_GET['pn'])) {
              $page = 1;
          } else {
              $page = (int) $_GET['pn'];
          }

          $postsperpage = 3;                      // must match the LIMIT below
          $index = ($page - 1) * $postsperpage;   // index of the first post on this page

          $sql = "SELECT * FROM `wp_posts` ORDER BY `post_date` DESC LIMIT " . $index . ",3";
          $res = mysql_query($sql);

          // loop through the values
          while ($row = mysql_fetch_array($res)) {
          ?>
            <li class="clear-fix">
              <!-- Date -->
              <div class="post-list-date">
                <div class="post-date-box">
                  <?php
                  $timestamp = $row['post_date'];
                  $datetime  = new DateTime($timestamp);
                  $date      = $datetime->format("d");
                  $month     = $datetime->format("M");
                  ?>
                  <h3><?php echo $date; ?></h3>
                  <span><?php echo $month; ?></span>
                </div>
              </div>

              <!-- Image -->
              <div class="post-list-image">
                <div class="image image-overlay-url image-fancybox-url">
                  <a href="blog.php?ID=<?php echo $row['ID']; ?>" class="preloader-image">
                    <?php echo '<img src="', $row['image'], '" alt="', $row['post_title'], '\'s Blog Image" />'; ?>
                  </a>
                </div>
              </div>

              <!-- Content -->
              <div class="post-list-content">
                <div>
                  <?php
                  $id   = $_GET['ID'];
                  $post = lookup_post_somehow($id);   // placeholder from the original question
                  if ($post) {
                      // render post
                  } else {
                      echo 'blog post not found..';
                  }
                  ?>
                  <h4>
                    <a href="blog.php?ID=<?php echo $row['ID']; ?>">
                      <?php echo $row['post_title']; ?>
                    </a>
                  </h4>
                  <p><?php echo $row['post_content']; ?></p>
                </div>
              </div>
            </li>
          <?php } // close while loop ?>
          </ul>
          <!-- /Posts list -->

          <!-- Pagination -->
          <div>
            <ul class="blog-pagination clear-fix">
            <?php
            // mysql_query returns a result resource, not a number,
            // so the count has to be read out of the result
            $countres     = mysql_query("SELECT COUNT(ID) FROM `wp_posts`");
            $numberofrows = mysql_result($countres, 0);

            // ceil() rounds up so a partly filled last page still gets a link
            $numOfPages = ceil($numberofrows / $postsperpage);

            for ($i = 1; $i <= $numOfPages; $i++) {  // <= so the last page is not dropped
                echo '<li><a href="blog.php?pn=' . $i . '">' . $i . '</a></li>';
            }
            ?>
            </ul>
          </div>
          <!-- /Pagination -->
        </div>
      </div>
    </div>
    <?php require_once('include/twitter_user_timeline.php'); ?>
    <?php require_once('include/footer_blog.php'); ?>

How do I render the clicked post?

    Read the article

  • Myths about Coding Craftsmanship part 2

    - by tom
Myth 3: The source of all bad code is inept developers and stupid people

When you review code, is this what you assume? Shame on you. You are probably making assumptions in your code if you are assuming so much already. Bad code can be the result of any number of causes, including but not limited to: using dated techniques (like boxing when generics are available), not following standards ("look how he does the spacing between arguments!" or "did he really just name that variable 'bln_Hello_Cats'?"), being redundant, using properties, methods, or objects in a novel way (like switching on button.Text between "Hello World" and "Hello World " //clever use of space character… sigh), not following the SOLID principles, hacking around assumptions made in earlier iterations / hacking in features that should be worked into the overall design.

The first two issues, while annoying, are pretty easy to spot and can be fixed easily. If your coding team is made up of experienced professionals who are passionate about staying current then these shouldn't be happening. If you work with a variety of skills, backgrounds, and experience then there will be some of this stuff going on. If you have an opportunity to mentor such a developer who is receptive to constructive criticism, don't be a jerk; help them and the codebase will improve. A little patience can improve the codebase, your work environment, and even your perspective.

The novelty and redundancy I have encountered have often been the use of creativity when language knowledge was perceived as unavailable or too time consuming. When developers learn on the job you get a lot of this. Rather than going to MSDN, developers will use what they know. Depending on the constraints of their assignment, hacking together what they know may seem quite practical. This was not stupid, though I often wonder how much time is actually "saved" by hacking. These issues are often harder to untangle, if we ever do. They can also grow out of control as we write hack after hack to make it work and get back to some development that is satisfying.

Hacking upon an existing hack is what I call "feeding the monster". Code monsters are anti-patterns and hacks gone wild. The reason code monsters continue to get bigger is that they keep growing in scope, touching more and more of the application. This is not the result of dumb developers. It is probably the result of avoiding design, not taking the time to understand the problems, or failing to anticipate or communicate the vision of the product. If our developers don't understand the purpose of a feature or product, how do we expect potential customers to do so?

Forethought and organization are often what is missing from bad code. Developers who do not use the SOLID principles should be encouraged to learn these principles and be given guidance on how to apply them. The time "saved" by giving hackers room to hack will be made up for and then some: not as technical debt, but as shoddy work that, if not replaced, will be struggled with again and again. Bad code is not the result of dumb developers (usually); it is the result of trying to do too much without the proper resources, and of replacing the right thing that needs doing with the first thoughtless thing that comes into our heads.

Object oriented code is all about relationships between objects. Coders who believe their coworkers are all fools tend to write objects that are difficult to work with, not eager to explain themselves, and that perform erratically and irrationally. If you constantly find you are surrounded by idiots, you may want to ask yourself if you are being unreasonable, if you are being closed minded, or if you have chosen the right profession. Opening your mind up to the idea that you probably work with rational, well-intentioned people will probably make you a better coder, and it might even make you less grumpy. If you are surrounded by jerks who do not engage in the exchange of ideas, who do not care about their customers or the durability of the code you are building together, then I suggest you find a new place to work.

Myth 4: Customers don't care about "beautiful" code

Craftsmanship is customer focused because it means that the job was done right and the product will withstand the abuse, modifications, and scrutiny of our customers. Users can appreciate a predictable timeline for a release, a product delivered on time and on budget, a feature set that does not interfere with the task(s) it is supporting, quick turnarounds on exception messages, self-healing issues, and fewer issues. These are all hindered by skimping on craftsmanship, whether we are writing data access code or reusable code.

What do you think? Does bad code come primarily from low-IQ individuals? Do customers care about beautiful code?

    Read the article

  • Most efficient way to implement delta time

    - by Starkers
Here's one way to implement delta time:

    /// init ///
    var duration = 5000,
        currentTime = Date.now();
    // and create cube, scene, camera etc.

    function animate() {
        /// determine delta ///
        var now = Date.now(),
            deltat = now - currentTime,
            scalar = deltat / duration,
            angle = (Math.PI * 2) * scalar;
        currentTime = now;

        /// animate ///
        cube.rotation.y += angle;

        /// update ///
        requestAnimationFrame(animate);
    }

Could someone confirm I know how it works? Here is what I think is going on:

Firstly, we set duration to 5000, which is how long the loop should take to complete in an ideal world.

With a computer that is slow/busy, let's say the animation loop takes twice as long as it should, so 10000. When this happens, the scalar is set to 2.0:

    scalar = deltat / duration
    scalar = 10000 / 5000
    scalar = 2.0

We now multiply all animation by twice as much:

    angle = (Math.PI * 2) * scalar;
    angle = (Math.PI * 2) * 2.0;
    angle = (Math.PI * 4) // which is 2 rotations

When we do this, the cube rotation will appear to 'jump', but this is good because the animation remains real-time.

With a computer that is going too quickly, let's say the animation loop takes half as long as it should, so 2500. When this happens, the scalar is set to 0.5:

    scalar = deltat / duration
    scalar = 2500 / 5000
    scalar = 0.5

We now multiply all animation by a half:

    angle = (Math.PI * 2) * scalar;
    angle = (Math.PI * 2) * 0.5;
    angle = (Math.PI * 1) // which is half a rotation

When we do this, the cube won't jump at all; the animation remains real-time and doesn't speed up.

However, would I be right in thinking this doesn't alter how hard the computer is working? I mean, it still goes through the loop as fast as it can, and it still has to render the whole scene, just with different, smaller angles! So this is a bad way to implement delta time, right?

Now let's pretend the computer is taking exactly as long as it should, so 5000. When this happens, the scalar is set to 1.0:

    angle = (Math.PI * 2) * scalar;
    angle = (Math.PI * 2) * 1;
    angle = (Math.PI * 2) // which is 1 rotation

When we do this, everything is multiplied by 1, so nothing is changed. We'd get the same result if we weren't using delta time at all!

My questions are as follows:

1. Most importantly, have I got the right end of the stick here?
2. How do we know to set the duration to 5000? Or can it be any number?
3. I'm a bit vague about the "computer going too quickly". Is there a way to loop less often rather than reduce the animation steps? Seems like a better idea.
4. Using this method, do all of our animations need to be multiplied by the scalar? Do we have to hunt down every last one and multiply it?
5. Is this the best way to implement delta time? I think not, due to the fact that the computer can go nuts and all we do is divide each animation step, and because we need to hunt down every step and multiply it by the scalar. Not a very nice DSL, as it were. So what is the best way to implement delta time?

Below is another way that I do not really get, but which may be a better way to implement delta time. Could someone explain please?

    // Globals
    INV_MAX_FPS = 1 / 60;
    frameDelta = 0;
    clock = new THREE.Clock();

    // In the animation loop (the requestAnimationFrame callback)…
    frameDelta += clock.getDelta(); // API: "Get the seconds passed since the last call to this method."
    while (frameDelta >= INV_MAX_FPS) {
        update(INV_MAX_FPS); // calculate physics
        frameDelta -= INV_MAX_FPS;
    }

How I think this works:

Firstly we set INV_MAX_FPS to 0.01666666666. How we will use this number does not jump out at me.

We then initialize a frameDelta which stores how long the last loop took to run. Come the first loop, frameDelta is not greater than INV_MAX_FPS, so the while loop is not run (0 < 0.01666666666). So nothing happens.

Now I really don't know what would cause this to happen, but let's pretend that the loop we just went through took 2 seconds to complete. We set frameDelta to 2:

    frameDelta += clock.getDelta();
    frameDelta = 0 + 2.00

Now we run an animation thanks to update(0.01666666666). Again, what is the relevance of 0.01666666666?? And then we take away 0.01666666666 from the frameDelta:

    frameDelta -= INV_MAX_FPS;
    frameDelta = 2 - 0.01666666666
    frameDelta = 1.98333333334

So let's go into the second loop. Let's say it also took 2 seconds (? Why 2? Or 12? I am a bit confused):

    frameDelta += clock.getDelta();
    frameDelta = 1.98333333334 + 2
    frameDelta = 3.98333333334

This time we enter the while loop because 3.98333333334 >= 0.01666666666. We run update, then we take away 0.01666666666 from frameDelta again:

    frameDelta -= INV_MAX_FPS;
    frameDelta = 3.98333333334 - 0.01666666666
    frameDelta = 3.96666666668

Now let's pretend the loop is super quick and runs in just 0.1 seconds and continues to do this (because the computer isn't busy any more). Basically, the update function will be run, and every loop we take away 0.01666666666 from the frameDelta until the frameDelta is less than 0.01666666666. And then nothing happens until the computer runs slowly again?

Could someone shed some light please? Does update() update the scalar or something like that, and do we still have to multiply everything by the scalar like in the first example?
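For context, here is a minimal standalone sketch of the fixed-timestep pattern the second snippet implements, with THREE.Clock swapped for plain Date.now() so it runs without three.js (the names STEP, accumulator and frame are illustrative, not from the original). INV_MAX_FPS is simply the duration of one physics step in seconds: the simulation always advances in fixed 1/60-second slices, so a slow frame triggers several update calls in a row, while a very fast frame may trigger none and just banks the leftover time:

    var STEP = 1 / 60;        // fixed physics step: one sixtieth of a second
    var accumulator = 0;      // unconsumed real time, in seconds
    var last = Date.now();

    function frame() {
        var now = Date.now();
        accumulator += (now - last) / 1000;  // seconds elapsed since the last frame
        last = now;

        // consume the accumulated time in fixed-size slices;
        // a slow frame simply runs several physics updates back to back
        while (accumulator >= STEP) {
            update(STEP);                    // advance the simulation by exactly STEP seconds
            accumulator -= STEP;
        }

        render();
        requestAnimationFrame(frame);
    }

    function update(dt) {
        // physics only, always with the same dt,
        // e.g. cube.rotation.y += angularSpeed * dt;
    }

    function render() {
        // drawing only, once per browser frame
    }

    requestAnimationFrame(frame);

Because update always receives the same dt, the physics stays deterministic regardless of frame rate; the per-animation scalar multiplication from the first approach disappears, since each update call covers a known, constant slice of time.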

    Read the article

  • PanelGridLayout - A Layout Revolution

    - by Duncan Mills
With the most recent 11.1.2 patchset (11.1.2.3) there has been a lot of excitement around ADF Essentials (and rightly so); however, in all the fuss I didn't want an even more significant change to get missed - yes, you read that correctly, a more significant change! I'm talking about the new panelGridLayout component. I can confidently say that this is one of the most revolutionary components that we've introduced in 11g, even though it sounds rather boring. To be totally accurate, panelGrid was introduced in 11.1.2.2 but without any presence in the component palette or other design time support, so it was largely missed unless you read the release notes. However, in this latest patchset it's finally front and center. It's time to explore - we (really) need to talk about layout.

Let's face it, with ADF Faces rich client, layout is a rather arcane pursuit. Once you are a layout master, all bow before you, but it's more of an art than a science, and it is often, in fact, way too difficult to achieve what should (apparently) be pretty simple. Here's a great example; it's a homework assignment I set for folks I'm teaching this stuff to. The requirements for this layout are:

- The header is 80px high, the footer is 30px. These are both fixed.
- The first section of the header containing the logo is 180px wide.
- The logo is centered within the top left hand corner of the header.
- The title text is start-aligned in the center zone of the header and will wrap if the browser window is narrowed. It should be aligned in the center of the vertical space.
- The about link is anchored to the right hand side of the browser with a 20px gap and again is center-aligned vertically. It will move as the browser window is reduced in width.
- The footer has a right-aligned copyright statement, again middle-aligned within a 30px high footer region and with a 20px buffer to the right hand edge. It will move as the browser window is reduced in width.
- All remaining space is given to a central zone, which, in this case, contains a panelSplitter.
- Expect that at some point in time you'll need a separate messages line in the center of the footer.

In the homework assignment I set, I also stipulate that no inlineStyles can be used to control alignment or margins and no use of other taglibs (e.g. JSF HTML or Trinidad HTML). So, if we take this purist approach, that basic page layout (in my stock solution) requires 3 panelStretchLayouts, 5 panelGroupLayouts and 4 spacers - not including the spacer I use for the logo and the contents of the central zone splitter - phew! The point is that even a seemingly simple layout needs a bit of thinking about, particularly when you consider stretching and browser re-size behavior. In fact, this little sample actually teaches you much of what you need to know to become vaguely competent at layouts in the framework.

The underlying result of "the way things are" is that most of us reach for panelStretchLayout before even finishing the first sip of coffee as we embark on a new page design. In fact, most pages you will see in any moderately complex ADF application will basically be nested panelStretchLayouts and panelGroupLayouts, sometimes many, many levels deep. So this is a problem; we've known this for some time, and now we have a good solution. (I should point out that the oft-used Trinidad trh tags are not a particularly good solution, as you're tying yourself to an HTML table-based layout in that case, with a host of attendant issues in resize and bi-di behavior, but I digress.)

So, tadaaa, I give to you panelGridLayout. PanelGrid, as the name suggests, takes a grid-like (dare I say slightly gridbag-like) approach to layout, dividing your layout into rows and columns with margins, sizing, stretch behavior, colspans and rowspans all rolled in, all without the use of inlineStyle. As such, it provides a much more powerful and concise way of defining a layout such as the one above that is actually simpler and much more logical to design. The basic building blocks are the panelGridLayout itself, gridRow and gridCell. Your content sits inside the cells inside the rows, all helpfully allowing both stretching, valign and halign definitions without the need to nest further panelGroupLayouts. So much simpler!

If I break down the homework example above, my nested conglomerate of 12 containers and spacers can be condensed into a single panelGrid with 3 rows and 5 cell definitions (39 lines of source reduced to 24 in the case of the sample). What's more, the actual runtime representation in the browser DOM is much, much simpler and cleaner, with basically one DIV per cell (note that just because the panelGridLayout semantics look like an HTML table does not mean that it's rendered that way!). Another hidden benefit is the runtime cost. Because we can use a single layout to achieve much more complex geometries, the client-side layout code inside the browser is having to work a lot less. This will be a real benefit if your application needs to run on lower-powered clients such as netbooks or tablets.

So, it's time, if you're on 11.1.2.2 or above, to smile warmly at your panelStretchLayouts, wrap the blanket around their knees and wheel them off to the Sunset Retirement Home for a well deserved rest. There's a new kid on the block and it wants to be your friend.
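To give a feel for the markup shape, here is a rough sketch of the homework layout expressed as a single panelGridLayout. Treat it as illustrative rather than the article's stock solution: the gridRow/gridCell structure and attributes such as height, width, halign, valign, columnSpan and marginEnd follow the ADF Faces tag documentation, but the exact cell contents are placeholders:

    <af:panelGridLayout id="pgl1">
      <af:gridRow height="80px">
        <af:gridCell width="180px" halign="center" valign="middle">
          <!-- logo -->
        </af:gridCell>
        <af:gridCell width="100%" halign="start" valign="middle">
          <!-- title text, wraps as the window narrows -->
        </af:gridCell>
        <af:gridCell halign="end" valign="middle" marginEnd="20px">
          <!-- about link -->
        </af:gridCell>
      </af:gridRow>
      <af:gridRow height="100%">
        <af:gridCell columnSpan="3" halign="stretch" valign="stretch">
          <!-- central zone, e.g. a panelSplitter, takes all remaining space -->
        </af:gridCell>
      </af:gridRow>
      <af:gridRow height="30px">
        <af:gridCell columnSpan="3" halign="end" valign="middle" marginEnd="20px">
          <!-- copyright statement -->
        </af:gridCell>
      </af:gridRow>
    </af:panelGridLayout>

Note how the stretch behavior, alignment and margins that previously needed nested panelGroupLayouts and spacers are now plain attributes on the cells, and the future messages line is just one more cell in the footer row.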

    Read the article

  • ASP.Net MVC Interview Questions and Answers

    - by Samir R. Bhogayta
About ASP.Net MVC

ASP.Net MVC is the framework provided by Microsoft that lets you develop applications following the principles of the Model-View-Controller (MVC) design pattern. .Net programmers new to MVC often think it is similar to the WebForms model (classic ASP.Net), but it is far different from WebForms programming. This article will help you quickly learn the basics of MVC, along with some frequently asked interview questions and answers on ASP.Net MVC.

1. What is ASP.Net MVC?
ASP.Net MVC is the framework provided by Microsoft to achieve separation of concerns, which leads to easier maintainability of the application. The Model handles data-related activity, the View deals with user-interface work, and the Controller manages the application flow by communicating between Model and View.

2. Why use ASP.Net MVC?
The strengths of MVC listed below answer this question:
- MVC reduces the dependency between components; this makes your code more testable.
- MVC does not recommend the use of server controls, so the processing time required to generate the HTML response is drastically reduced.
- Integrating JavaScript libraries like jQuery becomes easier than with the WebForms approach.

3. What do you mean by Razor?
Razor is the new view engine introduced in MVC 3.0. The view engine is responsible for processing the view files (e.g. .aspx, .cshtml) in order to generate the HTML response. Previous versions of MVC depended on the ASPX view engine.

4. Can we use the ASPX view engine in the latest versions of MVC?
Yes, although the recommended way is to prefer Razor views.

5. What are the benefits of Razor views?
- The syntax for server-side code is simplified.
- The length of code is drastically reduced.
- Razor syntax is easy to learn and reduces complexity.

6. What is the extension of a Razor view file?
.cshtml (for C#) and .vbhtml (for VB).

7. How do you create a controller in MVC?
Create a simple class and derive it from the Controller class. The bare minimum requirement for a class to become a controller is to inherit from ControllerBase; in practice you derive from Controller, which itself inherits from ControllerBase.

8. How do you create an action method in MVC?
Add a public method with an ActionResult return type inside a controller class.

9. How is a view accessed on the server?
The browser sends a request containing the controller name, action name and parameters. When the server receives this URL it resolves the controller and action, then calls the specified action with the provided parameters. The action normally does some processing and returns a ViewResult by specifying the view name (a blank name triggers a search according to the naming conventions).

10. What is the default form method (i.e. GET, POST) for an action method?
GET. To change this, add an action-level attribute, e.g. [HttpPost].

11. What is a filter in MVC?
When the user (browser) sends a request to the server, an action method of a controller gets invoked; sometimes you may need to execute custom code before or after the action method runs. This custom code is called a filter.

12. What are the different types of filters in MVC?
a. Authorization filters
b. Action filters
c. Result filters
d. Exception filters
(Do not forget the order mentioned above, as filters get executed in that sequence.)

13. Explain the use of a filter with an example.
Suppose you are working on an MVC application where the URL is sent in an encrypted format instead of plain text; once the encrypted URL is received by the server, the action parameters are ignored because of the URL encryption. To solve this, you can create a global action filter by overriding the OnActionExecuting method of the controller class; in it you can extract the action parameters from the encrypted URL and set them on the filterContext to pass plain-text parameters to the actions.

14. What is an HTML helper?
An HTML helper is a method that returns a string; the returned string is usually an HTML tag. The standard HTML helpers available in MVC (e.g. Html.BeginForm(), Html.TextBox()) are lightweight, as they do not rely on the event model or view state the way ASP.Net server controls do.
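To make questions 7, 8 and 10 concrete, here is a minimal sketch of a controller. The class, action and route values are invented for illustration; the Controller base class and the [HttpPost] attribute are the standard ASP.Net MVC pieces described above:

    using System.Web.Mvc;

    // a controller is just a class derived from Controller
    // (which in turn derives from ControllerBase)
    public class ProductController : Controller
    {
        // an action method: public, returns an ActionResult, answers GET by default
        public ActionResult Details(int id)
        {
            ViewBag.ProductId = id;
            return View();  // renders Views/Product/Details.cshtml by naming convention
        }

        // opt in to POST explicitly with an action-level attribute
        [HttpPost]
        public ActionResult Save(string name)
        {
            // ... persist the value, then follow the POST-redirect-GET pattern
            return RedirectToAction("Details", new { id = 1 });
        }
    }

A request such as /Product/Details/5 resolves the controller ("Product"), the action ("Details") and the parameter (id = 5) exactly as question 9 describes.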

    Read the article

< Previous Page | 260 261 262 263 264 265 266 267 268 269 270 271  | Next Page >