Search Results

Search found 31800 results on 1272 pages for 'nrf big show'.

Page 280 of 1272

  • Java Spotlight Episode 138: Paul Perrone on Life Saving Embedded Java

    - by Roger Brinkley
    Interview with Paul Perrone, founder and CEO of Perrone Robotics, on using Java Embedded to test autonomous vehicle operations for the Insurance Institute for Highway Safety, work that will save lives. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes. News: JDK 8 is Feature Complete; Java SE 7 Update 25 Released; What should the JCP be doing?; 2013 Duke's Choice Award Nominations; Another Quick update to Code Signing Article on OTN. Events: June 24, Austin JUG, Austin, TX; June 25, Virtual Developer Day - Java, EMEA, 10AM CEST; Jul 16-19, Uberconf, Denver, USA; Jul 22-24, JavaOne Shanghai, China; Jul 29-31, JVM Language Summit, Santa Clara; Sep 11-12, JavaZone, Oslo, Norway; Sep 19-20, Strange Loop, St. Louis; Sep 22-26, JavaOne San Francisco 2013, USA.

    Feature Interview. Paul J. Perrone is founder/CEO of Perrone Robotics. Paul architected the Java-based general-purpose robotics and automation software platform known as "MAX". Paul has overseen MAX's application to rapidly field self-driving robotic cars, unmanned air vehicles, factory and road-side automation applications, and a wide range of advanced robots and automation applications. He fielded a self-driving autonomous robotic dune buggy in the historic 2005 Grand Challenge race across the Mojave desert and a self-driving autonomous car in the 2007 Urban Challenge through a city landscape. His work has been featured in numerous televised and print media including the Discovery Channel, a theatrical documentary, scientific journals, trade magazines, and international press. Since 2008, Paul has also been working as the chief software engineer, CTO, and roboticist automating rock star Neil Young's LincVolt, a 1959 Lincoln Continental retrofitted as a fully autonomous extended-range electric vehicle. Paul has been an engineer, author of books and articles on Java, frequent speaker on Java, and entrepreneur in the robotics and software space for over 20 years. He is a member of the Java Champions program, a recipient of three Duke Awards including a Gold Duke and a Lifetime Achievement Award, has showcased Java-based robots at five JavaOne keynotes, and is a frequent JavaOne speaker and show floor participant. He holds a B.S.E.E. from Rutgers University and an M.S.E.E. from the University of Virginia.

    What's Cool: Shenandoah, a pauseless GC for OpenJDK.

    Read the article

  • Build an Organization Chart In Visio 2010

    - by Mysticgeek
    When you're trying to manage a business these days, it's very important to have an organization chart to keep everything manageable. Here we'll show you how to build one in Visio 2010. This Guest Article was written by our friends over at Office 2010 Club.

    Need for Organization Charts

    Organization charts are becoming indispensable these days, as companies focus on extensive hiring to broaden their reach, increase productivity and target diverse markets. Given this rapid change, an organization chart helps stakeholders comprehend the ever-growing organizational structure and hierarchy with ease. It shows the basic structure of the organization and defines the relationships between employees working in different departments. Conveniently, Microsoft Visio 2010 offers an easy way to create an organization chart. Until now, conventional lists of the organizational hierarchy have been used to define the structure of departments along with the possible lines of communication, both horizontal and vertical. To transform these lists, which define the organizational structure, into a detailed chart, Visio 2010 includes an add-in for importing an Excel spreadsheet, which comes in handy for pulling data out of a spreadsheet to create an organization chart. Importantly, you don't need to lose yourself in a maze of defining organizational hierarchies and chalking out the structure: you just specify the column and row headers along with the data you need to import, and it will automatically create a chart defining the organizational hierarchy, with each employee's credentials categorized under the corresponding department.

    Creating Organization Charts in Visio 2010

    To start off, we have created an Excel spreadsheet with the fields Name, Supervisor, Designation, Department and Phone. The Name field contains the names of all the employees working in different departments, whereas the Supervisor field contains the names of supervisors or team leads. This field is vital for creating an organization chart, as it defines the basic structure and hierarchy of the chart. Now launch Visio 2010, head over to the View tab and, under the Add-Ons menu, from the Business options, click Organization Chart Wizard. This starts the Organization Chart Wizard. In the first step, enable the "Information that's already stored in a file or database" option, and click Next. As we are importing an Excel sheet, select the second option for importing an Excel spreadsheet. Specify the Excel file path and click Next to continue. In this step, you need to specify the fields that actually define the structure of the organization. In our case, these are the Name and Supervisor fields. After specifying the fields, click Next to proceed. An organization chart is primarily for showing the hierarchy of departments and employees in the organization, how they are linked together, and who supervises whom. Considering this, in this step we will leave out the Supervisor field, because its inclusion isn't necessary: Visio automatically chalks out the basic structure defined in the Excel sheet. Add the rest of the fields under the Displayed fields category, and click Next. Now choose the fields that you want to include in the organization chart's shapes and click Next. This step is about breaking the chart into multiple pages; if you are dealing with 100+ employees, you may want to specify the number of pages on which the organization chart will be displayed.
    But in our case, we are dealing with much less data, so we will enable the "I want the wizard to automatically break my organization chart across pages" option. Specify the name you want to show at the top of the page. If you have fewer than 20 hierarchy levels, enter the name of the highest-ranked employee in the organization and click Finish to end the wizard. It will instantly create an organization chart from the specified Excel spreadsheet. The highest-ranked employee is shown at the top of the organization chart, supervising various employees from different departments. As shown below, his immediate subordinates in turn manage other employees, and so on.

    For advanced customizations, head over to the Org Chart tab; here you will find different groups for setting up the org chart's hierarchy and managing the positions of other employees. Under the Arrange group, shape arrangements can be changed, and it provides easy navigation through the chart. You can also change the type of a position and hide the subordinates of a selected employee. From the Picture group, you can insert pictures of the employees, departments, etc. From the Synchronization group, you have the option of creating a synced copy and expanding the subordinates of a selected employee. Under the Organization Data group, you can change the whole layout of the organization chart from Display Options, including shape display, dividers, enabling/disabling imported fields, block position, fill colors, and so on.

    If at any point you need to insert a new position or announce a vacancy, the Organization Chart stencil is always available in the left sidebar. Drag the desired Organization Chart shape onto the main diagram page; to maintain the structure's integrity, i.e., to insert subordinates for a specific employee, drag the position shape over the existing employee's shape box. For instance, we have added a consultant to the organization who reports directly to the CEO; to reflect this, we dragged the Consultant box and dropped it over the CEO box to make it an immediate subordinate position. Adding details to the new position is a cinch: just right-click the new position box and click Properties. This opens the Shape Data dialog; fill in all the relevant information and click OK. Here you can see the newly created position populated with all the specified information. Expanding an organization chart no longer requires maintaining long lists. Under the Design tab, you can also try out different designs and layouts for the organization chart to make it look more polished and professional.

    Conclusion

    An organization chart is a great way of showing detailed organizational hierarchies, with defined credentials of employees, department structures, new vacancies, newly hired employees, and recently added departments, and importantly it shows the most convenient lines of interaction between different departments and employees.

    Read the article

  • Have you really fixed that problem?

    - by DavidWimbush
    The day before yesterday I saw our main live server's CPU go up to a constant 100%, with just the occasional short drop to a lower level. The exact opposite of what you'd want to see. We're log shipping every 15 minutes and part of that involves calling WinRAR to compress the log backups before copying them over. (We're on SQL 2005 so there's no native compression and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a 'live' reporting database that is completely rebuilt every 20 minutes. (I'm just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over, so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening - fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first. The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in the activity monitor but it was confusing because some sessions have been running for a long time, so it's not a good guide to what's using the CPU now. I tried the standard reports showing queries by CPU (average and total) but they only show the top 10, so they just show my big overnight archiving and data cleaning stuff. But the Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later the CPU was back where it should be: about 20% with occasional short spikes. So the moral is: even when you're convinced you've found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone. And, yes, I have checked the CPU again today and it's still looking sweet.
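    A DMV query is often a quicker route than Profiler for this kind of hunt. The C# sketch below is not from the post; it is a minimal example of my own (the connection string is a placeholder) that runs the well-known sys.dm_exec_query_stats query, available since SQL Server 2005, to list the heaviest CPU consumers among currently cached plans (the numbers are cumulative since each plan was cached, so busy offenders stand out quickly).

        using System;
        using System.Data.SqlClient;

        // Minimal sketch (not the article's code): list the top CPU consumers among
        // cached plans via sys.dm_exec_query_stats. The connection string is a
        // placeholder - point it at your own server.
        class TopCpuQueries
        {
            const string Query = @"
        SELECT TOP (10)
               qs.total_worker_time / 1000 AS total_cpu_ms,
               qs.execution_count,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                               WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset END
                           - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;";

            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand(Query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // total CPU (ms), execution count, statement text
                            Console.WriteLine("{0,12} ms {1,10} execs  {2}",
                                reader[0], reader[1], reader[2]);
                        }
                    }
                }
            }
        }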

    Read the article

  • A Great Keyword Research Tip - How to Do the Right Keyword Research

    There is a lot of affiliate marketing advice being thrown around by different Internet marketing experts, who claim it will lead to affiliate success. Most will say that an appropriate website for a certain target market will do the trick. The usual barometer, of course, is Google ranking (a rough indicator of website traffic); however, rankings hardly ever translate into a high volume of sales.

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently I was working on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message "list view threshold is exceeded", even though the view contained no data. The list view threshold limit is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result still requires reading more than 5000 items in the database. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.
    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold for the web application from 5000 to 2000, so that I can show the problem without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the message shown below, although no data is returned for this view.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created, I can access the approved view; as you can see, it does not return any data.
    Notes: List View Threshold: Specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.
    References: SharePoint lists V: Techniques for managing large lists; Manage large SharePoint lists for better performance; http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx
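    If you would rather script step 4 than click through list settings, the same index can be created from code. The following C# sketch is mine, not the post's; the site URL and list title are made up, the LoanStatus field name matches the walkthrough, and it assumes the SharePoint 2010 server object model is available on the server.

        using System;
        using Microsoft.SharePoint;

        // Minimal sketch (illustrative names): mark the filter column as indexed so
        // that views filtered on it can stay under the list view threshold.
        class IndexLoanStatusField
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://intranet/sites/loans"))   // placeholder URL
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["Loan Applications"];   // placeholder list title
                    SPField field = list.Fields["LoanStatus"];

                    if (!field.Indexed)
                    {
                        field.Indexed = true;   // same effect as List Settings -> Indexed columns
                        field.Update();
                    }
                }
            }
        }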

    Read the article

  • Writing A Transact SQL (TSQL) Procedure For SQL Server 2008 To Delete Rows From Table Safely

    In this post, we will show and explain a small T-SQL procedure for SQL Server 2008 that deletes all rows in a table that are older than some specified date. That is, say the table has 10,000,000 rows in it, accumulated over the past 2 years. Say you want to delete all but [...]
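    The excerpt is truncated here, but the usual shape of such a cleanup is to delete in batches so each statement keeps its transaction and lock footprint small. The C# sketch below only illustrates that batching idea and is not the article's procedure; the table name, date column, cutoff and connection string are all invented.

        using System;
        using System.Data.SqlClient;

        // Illustrative sketch only (table, column and connection string are made up):
        // repeatedly delete a small batch of rows older than the cutoff until none
        // remain, instead of removing millions of rows in one huge transaction.
        class PurgeOldRows
        {
            static void Main()
            {
                const string sql =
                    "DELETE TOP (@BatchSize) FROM dbo.AuditLog WHERE CreatedDate < @Cutoff;";

                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@BatchSize", 5000);
                    cmd.Parameters.AddWithValue("@Cutoff", DateTime.UtcNow.AddYears(-2));
                    cmd.CommandTimeout = 300;

                    conn.Open();
                    int deleted;
                    do
                    {
                        deleted = cmd.ExecuteNonQuery();   // rows removed by this batch
                        Console.WriteLine("Deleted {0} rows", deleted);
                    } while (deleted > 0);
                }
            }
        }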

    Read the article

  • DHCPv6: Provide IPv6 information in your local network

    Even though IPv6 might not be that important within your local network, it might be good to get yourself into shape and be able to provide some details of your infrastructure automatically to your network clients. This is the second article in a series on IPv6 configuration: Configure IPv6 on your Linux system; DHCPv6: Provide IPv6 information in your local network; Enabling DNS for IPv6 infrastructure; Accessing your web server via IPv6. Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system.

    IPv6 addresses for everyone (in your network)
    Okay, after setting up the configuration of your local system, it might be interesting to enable all the machines in your network to use IPv6. There are two options to solve this kind of requirement... Either you're busy like a bee and you go around configuring each and every system manually, or you're more the lazy and effective type of network administrator and you prefer to work with the Dynamic Host Configuration Protocol (DHCP). Obviously, I'm of the second type. Enabling dynamic IPv6 address assignments can be done with a new or an existing instance of a DHCPd. In the case of an Ubuntu-based installation this might be isc-dhcp-server. The isc-dhcp-server package handles address pooling for both IPv4 and IPv6; you just have to run two independent daemons, one for each protocol version. First, check whether isc-dhcp-server is already installed and maybe running on your machine, like so:

    $ service isc-dhcp-server6 status

    In case the service is unknown, you have to install it like so:

    $ sudo apt-get install isc-dhcp-server

    Please bear in mind that there is no designated installation package for IPv6. Okay, next you have to create a separate configuration file for IPv6 address pooling and network parameters called /etc/dhcp/dhcpd6.conf. Unlike the IPv4 one, this file is not automatically provided by the package. Again, use your favourite editor and put in the following lines:

    $ sudo nano /etc/dhcp/dhcpd6.conf

    authoritative;
    default-lease-time 14400;
    max-lease-time 86400;
    log-facility local7;
    subnet6 2001:db8:bad:a55::/64 {
        option dhcp6.name-servers 2001:4860:4860::8888, 2001:4860:4860::8844;
        option dhcp6.domain-search "ios.mu";
        range6 2001:db8:bad:a55::100 2001:db8:bad:a55::199;
        range6 2001:db8:bad:a55::/64 temporary;
    }

    Next, save the file and start the daemon as a foreground process to see whether it is going to listen to requests or not, like so:

    $ sudo /usr/sbin/dhcpd -6 -d -cf /etc/dhcp/dhcpd6.conf eth0

    The parameters are quickly explained: with -6 we run as a DHCPv6 server, with -d we send log messages to the standard error descriptor (so you should monitor your /var/log/syslog file, too), and with -cf we explicitly use our newly created configuration file. You might also use the -t switch to test the configuration file prior to running the server. In my case, I ended up with a couple of complaints from the server, especially that the necessary lease file didn't exist. So, ensure that the lease file for your IPv6 address assignments is present:

    $ sudo touch /var/lib/dhcp/dhcpd6.leases
    $ sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases

    Now, you should be good to go. Stop your foreground process and try to run the DHCPv6 server as a service on your system:

    $ sudo service isc-dhcp-server6 start
    isc-dhcp-server6 start/running, process 15883

    Check your log file /var/log/syslog for any kind of problems. Refer to the man-pages of isc-dhcp-server, and you might check out Chapter 22.6 of Peter Bieringer's IPv6 Howto. The instructions regarding DHCPv6 on the Ubuntu Wiki are not as complete as expected, and they might not be as helpful as this article or Peter's HOWTO. But see for yourself.

    Does the client get an IPv6 address?
    Running a DHCPv6 server on your local network surely comes in handy, but it has to work properly. The following paragraphs briefly describe how to check the IPv6 configuration of your clients.

    Linux - ifconfig or ip command
    First, you have to enable IPv6 on your Linux client by specifying the necessary directives in the /etc/network/interfaces file, like so:

    $ sudo nano /etc/network/interfaces

    iface eth1 inet6 dhcp

    Note: Your network device might be eth0 - please don't just copy my configuration lines. Then either restart your network subsystem, or bring the device up manually using the dhclient command with the IPv6 switch, like so:

    $ sudo dhclient -6

    You would use either the ifconfig or (if installed) the ip command to check the configuration of your network device, like so:

    $ sudo ifconfig eth1
    eth1      Link encap:Ethernet  HWaddr 00:1d:09:5d:8d:98
              inet addr:192.168.160.147  Bcast:192.168.160.255  Mask:255.255.255.0
              inet6 addr: 2001:db8:bad:a55::193/64 Scope:Global
              inet6 addr: fe80::21d:9ff:fe5d:8d98/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    Looks good, the client has an IPv6 assignment. Now, let's see whether DNS information has been provided, too.

    $ less /etc/resolv.conf
    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
    nameserver 2001:4860:4860::8888
    nameserver 2001:4860:4860::8844
    nameserver 192.168.1.2
    nameserver 127.0.1.1
    search ios.mu

    Nicely done.

    Windows - netsh
    Per the description on TechNet, netsh is defined as follows: "Netsh is a command-line scripting utility that allows you to, either locally or remotely, display or modify the network configuration of a computer that is currently running. Netsh also provides a scripting feature that allows you to run a group of commands in batch mode against a specified computer. Netsh can also save a configuration script in a text file for archival purposes or to help you configure other servers." And even though TechNet states that it applies to Windows Server (only), it is also available on Windows client operating systems, like Vista, Windows 7 and Windows 8. In order to get or even set information related to the IPv6 protocol, we have to switch the netsh interface context prior to our queries. Open a command prompt in Windows and run the following statements:

    C:\Users\joki>netsh
    netsh>interface ipv6
    netsh interface ipv6>show interfaces

    Select the device index from the Idx column to get more details about the IPv6 address and DNS server information (here I'm going to use my WiFi device with device index 11), like so:

    netsh interface ipv6>show address 11

    Okay, address information has been provided. Now, let's check the details about DNS and resolving host names:

    netsh interface ipv6>show dnsservers 11

    Okay, that looks good already. Our Windows client has a valid IPv6 address lease with lifetime information and details about the configured DNS servers. Talking about DNS servers... your clients should be able to connect to your network servers via IPv6 using hostnames instead of IPv6 addresses. Please read on about how to enable a local named with IPv6.

    Read the article

  • Silverlight Cream for February 23, 2011 -- #1051

    - by Dave Campbell
    In this Issue: Ian T. Lackey, Kevin Hoffman, Kunal Chowdhury, Jesse Liberty(-2-), Page Brooks, Deborah Kurata(-2-), and Paul Sheriff. Above the Fold: Silverlight: "Building a Radar Control in Silverlight–Part 2" Page Brooks WP7: "Reactive Drag and Drop Part 2" Jesse Liberty Expression Blend: "Simple RadioButtonList and / or CheckBoxList in Silverlight Using a Behavior" Ian T. Lackey Shoutouts: Kunal Chowdhury delivered a full day session on Silverlight at the Microsoft Imagine Cup Championship event in Mumbai... you can Download Microsoft Imagine Cup Session PPT on Silverlight Dennis Doomen has appeared in my blog any number of times... he's looking for some assistance: Get me on stage on the Developer Days 2011 Steve Wortham posted An Interview with Jeff Wilcox From SilverlightCream.com: Simple RadioButtonList and / or CheckBoxList in Silverlight Using a Behavior Ian T. Lackey bemoans the lack of a RadioButtonList or CheckBoxList, and jumps into Blend to show us how to make one using a behavior... and the code is available too! WP7 for iPhone and Android Developers - Introduction to XAML and Silverlight Continuing his series at SilverlightShow for iPhone and Android devs, Kevin Hoffman has part 2 up getting into the UI with an intro to XAML and Silverlight. Day 1: Working with Telerik Silverlight RadControls Kunal Chowdhury kicked my tires that I had missed his Telerik control series... He's detailing his experience getting up to speed with the Silverlight RadControls. Day 1 is intro, what there is, installing, stuff like that. Part 2 continues: Day 2: Working with BusyIndicator of Telerik Silverlight RadControls, followed (so far) by part 3: Day 3: Working with Masked TextBox of Telerik Silverlight RadControls Reactive Drag and Drop Part 2 Jesse Liberty has his 7th part about Rx up ... and the 2nd part of Reactive Drag and Drop, and oh yeah... it's for WP7 as well! Yet Another Podcast #25–Glenn Block / WCF Next Jesse Liberty has Glenn Block on stage for his Yet Another Podcast number 25... talking WCF with Glenn. Building a Radar Control in Silverlight–Part 2 Page Brooks has part 2 of his 'radar' control for Silverlight up... I don't know where I'd use this, but it's darned cool... and the live demo is amazing. Silverlight Charting: Setting Colors Deborah Kurata is looking at the charting controls now, and how to set colors. She begins with a previous post on charts and adds color definitions to that post. Silverlight Charting: Setting the Tooltip Deborah Kurata next gets into formatting the tooltip you can get when the user hovers over a chart, to make it make more sense to your user. 'Content' is NOT 'Text' in XAML Paul Sheriff discusses the Content property of XAML controls and how it can be pretty much any other XAML you want it to be, then goes on to show some nice examples.

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    Using a static language like C# tends to mean working with hard assembly bindings for everything. But what if you only want to provide an assembly optionally, when the functionality is actually used by the user? In this article I discuss a scenario where dynamic loading and activation made sense for me, and show the code required to activate and use components loaded at runtime using Reflection and dynamic in combination.
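    As a taste of the pattern the article covers, here is a minimal sketch of my own (not Rick's code): the assembly file name, type name and ToHtml method are hypothetical stand-ins for whatever optional feature you only want to light up when its DLL ships alongside the application.

        using System;
        using System.IO;
        using System.Reflection;

        // Minimal sketch of optional, late-bound loading (illustrative names only).
        static class OptionalFeatureLoader
        {
            // Returns an instance of the requested type, or null if the optional
            // assembly is not deployed next to the executable.
            public static dynamic TryCreate(string assemblyFile, string typeName)
            {
                string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, assemblyFile);
                if (!File.Exists(path))
                    return null;                              // feature not installed - degrade gracefully

                Assembly asm = Assembly.LoadFrom(path);       // bound at runtime, no project reference
                Type type = asm.GetType(typeName, false);
                return type == null ? null : Activator.CreateInstance(type);
            }
        }

        class Program
        {
            static void Main()
            {
                // Hypothetical optional component: a Markdown converter DLL.
                dynamic converter = OptionalFeatureLoader.TryCreate(
                    "MarkdownConverter.dll", "MarkdownConverter.Converter");

                if (converter != null)
                    Console.WriteLine((string)converter.ToHtml("*hello*"));   // late-bound call via the DLR
                else
                    Console.WriteLine("Optional converter assembly not found.");
            }
        }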

    Read the article

  • Should I be wary of signing a non-disclosure agreement with someone I just met?

    - by Thomas Levine
    tl;dr: Some guy I just met says he wants me to join his company. Before he shows me what they do, he wants me to sign a non-disclosure agreement. Is this weird? I'm traveling right now. Someone who saw me coding seemed to think I was smart or something and started talking with me. He explained that he owns a software company, told me a bit about what it does and told me that he was looking for a programmer who would work for a stake in the company. He explained that the company's product is being developed rather secretly, so he couldn't tell me much. But he did tell enough about the product to convince me that he's not completely making this up, which is a decent baseline. He suggested that he show me more of what he's been working on and, after seeing that, I decide whether I want to join. Because of the secrecy behind the product, he wants me to sign a non-disclosure agreement before we talk. I'm obviously somewhat skeptical because of the random nature by which we met. In the short term, I'm wondering if I should be wary of signing such an agreement. He said it would be easier to show me the product in person rather than over the internet, and I'm leaving town tomorrow, so I'd have to figure this out by tomorrow. If I decide to talk with him, I could decide later whether I trust that it's worth spending any time on this company. The concept of being able to avoid telling a secret seems strange to me for the same reason that things like certain aspects of copyright seem strange. Should I be wary of signing a non-disclosure agreement? Is this common practice? I don't know the details of the agreement of which he was thinking, (If I end up meeting with him, I'll of course read over the agreement before I decide whether to sign it.) so I could consider alternatives according to the aspects of the agreement. Or I could just consider the case of an especially harsh agreement. This question seems vaguely related. Do we need a non-disclosure agreement (NDA)? Thanks

    Read the article

  • Google+ Platform Office Hours for March 7th 2012: REST API Overview

    We hold weekly Google+ Platform Office Hours using Hangouts On Air most Wednesdays from 11:30 am until 12:15 pm PST. This week we took a step back and looked at the Google+ platform's REST APIs. We explain what they're capable of and show you how to get started using them. Discuss this video on Google+: goo.gl Learn more about our Office Hours: developers.google.com From: GoogleDevelopers Time: 31:45

    Read the article

  • Google I/O 2011: Map your business, inside and out

    Brendan Kenny, Chris Broadfoot. Your map doesn't have to end at the front door of the building! In this session we will discuss approaches to mapping all of your business locations, and not just on the outside. We'll show how to build a sensational store finder, and then add floorplans, indoor Street View, and resource search. From: GoogleDevelopers Time: 51:31

    Read the article

  • Gvim GLib-GObject-WARNING in ubuntu 13.10

    - by naveen.panwar
    I upgraded from Ubuntu 13.04 to Ubuntu 13.10 this afternoon. When I try starting vim from the terminal after the upgrade, I get these warnings:

    (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::sm-connect after class was initialised
    (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::show-crash-dialog after class was initialised
    (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::display after class was initialised
    (gvim:4054): GLib-GObject-WARNING **: Attempt to add property GnomeProgram::default-icon after class was initialised

    How can I fix these, and what exactly are these warnings about?

    Read the article

  • American Modern Insurance Group recognized at 2010 INN VIP Best Practices Awards

    - by Helen Pitts
    Below: Helen Pitts (right), Oracle Insurance, congratulates Bruce Weisgerber, Munich Re, as he accepts a VIP Best Practices Award on behalf of American Modern Insurance Group.     Oracle Insurance Senior Product Marketing Manager Helen Pitts is attending the 2010 ACORD LOMA Insurance Forum this week at the Mandalay Bay Resort in Las Vegas, Nevada, and will be providing updates from the show floor. This is one of my favorite seasons of the year--insurance trade show season. It is a time to reconnect with peers, visit with partners, make new industry connections, and celebrate our customers' achievements. It's especially meaningful when we can share the experience of having one of our Oracle Insurance customers recognized for being an innovator in its business and in the industry. Congratulations to American Modern Insurance Group, part of the Munich Re Group. American Modern earned an Insurance Networking News (INN) 2010 VIP Best Practice Award yesterday evening during the 2010 ACORD LOMA Insurance Forum. The award recognizes an insurer's best practice for use of a specific technology and the role, if feasible, that ACORD data standards played as a part of their business and technology. American Modern received an Honorable Mention for leveraging the Oracle Documaker enterprise document automation solution to: Improve the quality of communications with customers in high value, high-touch lines of business Convert thousands of page elements or "forms" from their previous system, with near pixel-perfect accuracy Increase efficiency and reusability by storing all document elements (fonts, logos, approved wording, etc.) in one place Issue on-demand documents, such as address changes or policy transactions to multiple recipients at once Consolidate all customer communications onto a single platform Gain the ability to send documents to multiple recipients at once, further improving efficiency Empower agents to produce documents in real time via the Web, such as quotes, applications and policy documents, improving carrier-agent relationships Munich Re's Bruce Weisgerber accepted the award on behalf of American Modern from Lloyd Chumbly, vice president of standards at ACORD. In a press release issued after the ceremony Chumbly noted, "This award embodies a philosophy of efficiency--working smarter with standards, these insurers represent the 'best of the best' as chosen by a body of seasoned insurance industry professionals." We couldn't agree with you more, Lloyd. Congratulations again to American Modern on your continued innovation and success. You're definitely a VIP in our book! To learn more about how American Modern is putting its enterprise document automation strategy into practice, click here to read a case study. Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves - June 23-29, 2013

    - by Bob Rhubart
    2,947 people now follow OTN ArchBeat on Facebook. Here are the Top 10 items shared on that page for June 23-29, 2013. Podcast Show Notes: DevOps, Cloud, and Role Creep After some confusion (my bad) all three CORRECT parts of this podcast are now available. The panelists for this discussion are all Oracle ACE Directors: Ron Batra, Basheer Khan, and Cary Millsap. SOA Suite 11g Developers Cookbook Published | Antony Reynolds "The book focuses on areas that we felt we had neglected in the Developers Guide," says co-author Antony Reynolds. "There is more about Java integration and OSB, both of which we see a lot of questions about when working with customers." Using Oracle TimesTen With Oracle BI Applications (Part 2) | Peter Scott Peter Scott follows up an earlier post with a look at some of the OBIA structures and a discussion of some of the features of TimesTen. Linux-Containers — Part 1: Overview | Lenz Grimmer OTN Garage blogger Lenz Grimmer kicks off a series and expands your mind with deep detail on Linux Containers. Slides from my ODTUG Kscope13 Presentation | Zeeshan Baig Oracle ACE Zeeshan Baig shares the slides from his KScope13 presentation, "Build Your Business Services Using ADF Task Flows." Fun with Enterprise Manager | Rene van Wijk Oracle ACE Rene van Wijk shares some background and some tuning and other tech tips for working with Oracle Enterprise Manager. Using VirtualBox to test drive Windows Blue | The Fat Bloke The Fat Bloke shares a tech tip for those interested in giving Windows Blue a try on VirtualBox. Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture In this three-part series Oracle Fusion Middleware A-Team members Jennifer Briscoe, Clifford Musante, Mikael Ottosson, and Pardha Reddy talk about the origins and mission of the FMW A-Team and about the great technical content you'll find on the recently launched Oracle A-Team blog. Part one is now available. 5 Best Practices - Laying the Foundation for WebCenter Projects | John Brunswick Oracle WebCenter expert John Brunswick shares best practices that "enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration." Oracle Magazine - July/Aug 2013 The digital edition of the July/August edition of Oracle Magazine is now available. This issue includes my architect community column, "The CX Factor," which features insight from community members on "why and how CX has become a significant factor in enterprise IT."

    Read the article

  • Game of Thrones: George R. R. Martin's secret weapon against viruses, a DOS PC running WordStar 4.0 with no internet

    Game of Thrones: George R. R. Martin's secret weapon against viruses. George R. R. Martin, author of the Game of Thrones saga and co-producer of the series of the same name, told an American talk show that he has a secret weapon against the viruses that could attack his computer and destroy his documents. To write his books, he uses a computer that is not connected to the internet, runs under DOS, and uses WordStar 4.0 as its word processor! Asked "why stick with this old...

    Read the article

  • Game of Thrones Theme Played on Eight Floppy Drives [Video]

    - by Asian Angel
    YouTube user MrSolidSnake745 has put together a fun (and awesome) rendition of the ‘Game of Thrones’ theme using eight floppy drives. Game Of Thrones Theme on eight floppy drives [via Geeks are Sexy]

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • What is the best Apache logs Analyzer?

    - by Evgeny
    What real-time log analyzer can you suggest for Apache access and error logs? There is a list of web analytics software on Wikipedia, but it would be great to hear opinions from people with experience without having to try all of them. Please don't suggest Google Analytics or any other hosted/JavaScript analytics suite; we're already using them. GA is not real-time and it is missing some data that the logs show, for example 404 errors, script errors, the full query string of the referral, IP addresses, the visitor's path through the website, etc.

    Read the article

  • Google+ Platform Office Hours for February 8th 2012

    This week's office hours were hosted by Jenny Murphy, Jonathan Beri and Wolff Dobson. Several developers from the Google+ developer community joined us. We spent the session responding to your questions and comments about the Google+ platform. Find the full show notes and discuss this session in our support forum: groups.google.com Learn more about our office hours on Google Developers: developers.google.com From: GoogleDevelopers Time: 47:50

    Read the article

  • Google I/O 2011: Optimizing Android Apps with Google Analytics

    Nick Mihailovski, Philip Mui, Jim Cotugno. Thousands of apps have taken advantage of Google Analytics' native Android tracking capabilities to improve the adoption and usability of Android apps. This session covers best practices for tracking apps on mobile, TV and other devices. We'll also show you how to gain actionable insights from new tracking and reporting capabilities. From: GoogleDevelopers Time: 47:40

    Read the article
