Search Results

Search found 15448 results on 618 pages for 'sound api'.


  • Introducing Windows Azure Mobile Services

    - by Clint Edmonson
    Today I’m excited to share that the Windows Azure Mobile Services public preview is now available. This preview provides a turnkey backend cloud solution designed to accelerate connected client app development. These services streamline the development process by enabling you to leverage the cloud for common mobile application scenarios such as structured storage, user authentication and push notifications. If you’re building a Windows 8 app and want a fast and easy path to creating backend cloud services, this preview provides the capabilities you need. You can take advantage of the cloud to build and deploy modern apps for Windows 8 devices in anticipation of general availability on October 26th. Subsequent preview releases will extend support to iOS, Android, and Windows Phone.

    Features
    The preview makes it fast and easy to create cloud services for Windows 8 applications within minutes. Here are the key benefits:
    - Rapid development: configure a straightforward and secure backend in less than five minutes.
    - Create modern mobile apps: common Windows Azure plus Windows 8 scenarios that the Windows Azure Mobile Services preview will support include:
      - Automated service API generation providing CRUD functionality and dynamic schematization on top of structured storage
      - Structured storage with powerful query support, so a Windows 8 app can seamlessly connect to a Windows Azure SQL database
      - Integrated authentication, so developers can configure user authentication via Windows Live
      - Push notifications to bring your Windows 8 apps to life with up-to-date and relevant information
    - Access structured data: connect to a Windows Azure SQL database for simple data management and dynamically created tables, with permissions that are easy to set and manage.

    Pricing
    One of the key things we’ve consistently heard from developers about using Windows Azure with mobile applications is the need for a low-cost and simple offer. The simplest way to describe the pricing for Windows Azure Mobile Services at preview is that it is the same as Windows Azure Websites during preview.
    What’s free?
    - Run up to 10 Mobile Services for free in a multitenant environment
    - Free with a valid Windows Azure Free Trial: 1GB SQL Database, unlimited ingress, and 165MB/day egress
    What do I pay for?
    - Scaling up to dedicated VMs
    - SQL Database and egress, once the Windows Azure Free Trial expires

    Getting Started
    To start using Mobile Services, you will need to sign up for a Windows Azure free trial if you have not done so already. If you already have a Windows Azure account, you will need to request enrollment in this preview feature. Once you’ve enrolled, the getting started tutorial will walk you through building your first Windows 8 application using the preview’s services. The developer center contains more resources to teach you how to:
    - Validate and authorize access to data using easy scripts that execute securely, on the server
    - Easily authenticate your users via Windows Live
    - Send toast notifications and update live tiles in just a few lines of code
    Our pricing calculator has also been updated to calculate costs for these new mobile services. Questions? Ask in the Windows Azure Forums. Feedback? Send it to [email protected].
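
    To make the “automated service API generation” described above concrete, here is a minimal sketch of what inserting a row through such an auto-generated CRUD endpoint could look like from plain Java. The URL, table name, and application-key header are hypothetical placeholders for illustration, not the documented API.

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class TodoClient {
            public static void main(String[] args) throws Exception {
                // Hypothetical endpoint for an auto-generated "TodoItem" table API.
                URL url = new URL("https://myapp.example-mobile-service.net/tables/TodoItem");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "application/json");
                // Placeholder auth header; the real header name and key come from the portal.
                conn.setRequestProperty("X-Application-Key", "YOUR-KEY-HERE");
                conn.setDoOutput(true);
                byte[] body = "{\"text\":\"Buy milk\",\"complete\":false}".getBytes("UTF-8");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body);
                }
                // A 201 Created response indicates the row was inserted (and, with dynamic
                // schematization, that any new columns were added on the fly).
                System.out.println("HTTP " + conn.getResponseCode());
            }
        }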

    Read the article

  • Why Is Hibernation Still Used?

    - by Jason Fitzpatrick
    With the increased prevalence of fast solid-state hard drives, why do we still have system hibernation? Today’s Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question
    SuperUser reader Moses wants to know why he should use hibernate on a desktop machine:
    I’ve never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I’ve never truly understood why it’s used. With today’s technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me [on desktop computers] when modern technology is considered, but perhaps there are applications that I’m not considering. What was the original purpose behind hibernation, and why do people still use it?
    Quite a few people use hibernate, so what is Moses missing in the big picture?

    The Answer
    SuperUser contributor Vignesh4304 writes:
    Normally, hibernate mode saves the contents of your computer’s memory (open documents and running applications, for example) to your hard disk and shuts down the computer; in that state it uses zero power. Once the computer is powered back on, it resumes everything where you left off. You can use this mode if you won’t be using the laptop/desktop for an extended period of time and you don’t want to close your documents. Its simple usage and purpose: save electric power and resume your documents. In simple terms, you sleep, but your memories are still present.
    Why it’s used: let me describe one sample scenario. Imagine your laptop’s battery is low on power and you are working on important projects on your machine. You can switch to hibernate mode; your documents will be saved, and when you power on, the actual state of the applications gets restored. Its main usage is like an emergency shutdown with an auto-resume of your documents.
    MagicAndre1981 highlights the reason we use hibernate every day:
    Because it saves the status of all running programs. I leave all my programs open and can resume working the next day very easily. Doing a real boot would require starting all programs again, loading all the same files into those programs, getting to the same place that I was at before, and putting all my windows in exactly the same place. Hibernating saves a lot of work pulling these things back up again.
    It’s not unusual to find computers around the office here that have been hibernated day in and day out for months without an actual full system shutdown and restart. It’s enormously convenient to freeze your work space at the exact moment you stopped working and to turn right around and resume there the next morning.
    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • A strong component keeps everything together

    - by Justin Paul-Oracle
    Most of the time you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or database objects (like new tables, views, indexes, etc.). I have seen that libraries and database objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directories (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component, or where you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common that some of these manual steps are missed when you have multiple teams and people working on the system. Here are a few points to ponder:
    - Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to add this new directory.
    - Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma-separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency on this library component by specifying the comma-separated list of features in the Advanced Build Settings.
    - The component wizard allows you to create custom install/uninstall Java code. The wizard will create an install filter class when you check the “Has Install” checkbox on the “Install/Uninstall Settings” tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component (a minimal sketch of the idea follows below). If you do a lot of custom component development, consider creating an install/uninstall Java class which can execute queries defined within the component.
    To sum up: whenever you write a new custom component, make sure that you bundle everything within the component.
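
    As an illustration of the install/uninstall point above, here is a hedged sketch of a helper that an install filter could delegate to in order to create and drop database objects. The JDBC URL, credentials, and DDL are placeholders, and the actual Content Server filter API hooks are deliberately omitted; inside Content Server you would normally reuse the server’s own database workspace rather than opening a raw connection.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Hypothetical helper that a component's install filter might call.
        public class ComponentSchemaInstaller {

            private static final String DDL_INSTALL =
                "CREATE TABLE MyComponentAudit (id NUMBER PRIMARY KEY, note VARCHAR2(200))";
            private static final String DDL_UNINSTALL = "DROP TABLE MyComponentAudit";

            public static void install(String url, String user, String pass) throws Exception {
                execute(url, user, pass, DDL_INSTALL);    // run during the "Has Install" step
            }

            public static void uninstall(String url, String user, String pass) throws Exception {
                execute(url, user, pass, DDL_UNINSTALL);  // run when the component is uninstalled
            }

            private static void execute(String url, String user, String pass, String ddl)
                    throws Exception {
                try (Connection conn = DriverManager.getConnection(url, user, pass);
                     Statement stmt = conn.createStatement()) {
                    stmt.execute(ddl);
                }
            }
        }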

    Read the article

  • Artec TV18R Tuner not working :(

    - by Antillar Maximus
    Can somebody kindly help me set up my TV tuner stick?
    Make and Model: Artec TV18R USB TV Tuner
    OS: Ubuntu 11.10 64-bit
    Hardware: MSI MS1656-US (GX640 barebones): i5 580M, 2x Intel X25-M SSDs, Radeon HD5850M, Realtek sound.
    Logs and messages:
    antillar@antillar-MS-1656:/usr/share/dvb/atsc$ sudo lsusb
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 003: ID ffc0:001f
    Bus 002 Device 005: ID 05d8:8117 Ultima Electronics Corp.
    So it seems that the tuner is being detected, but the light on it does not turn on, and I am not sure what is going on.
    antillar@antillar-MS-1656:/usr/share/dvb/atsc$ sudo lspci
    [sudo] password for antillar:
    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02)
    00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02)
    00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
    00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05)
    00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05)
    00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05)
    00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 05)
    00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 05)
    00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5)
    00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05)
    00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05)
    00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05)
    01:00.0 VGA compatible controller: ATI Technologies Inc Broadway PRO [Mobility Radeon HD 5800 Series]
    01:00.1 Audio device: ATI Technologies Inc Juniper HDMI Audio [Radeon HD 5700 Series]
    02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
    04:00.0 FireWire (IEEE 1394): JMicron Technology Corp. IEEE 1394 Host Controller
    04:00.1 System peripheral: JMicron Technology Corp. SD/MMC Host Controller
    04:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller
    04:00.3 System peripheral: JMicron Technology Corp. MS Host Controller
    04:00.4 System peripheral: JMicron Technology Corp. xD Host Controller
    05:00.0 Network controller: Intel Corporation Ultimate N WiFi Link 5300
    ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
    ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
    ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
    ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
    ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
    ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
    I have been searching for a solution to this all weekend without any luck. I remember reading that ehci_hcd is the problem, but I'm not sure if disabling it is a good idea. I don't want my USB ports to go back to v1.1. Thank you!

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time, we need to be selective about whatever we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL with SQL features and transactions. As a part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database.
    The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted about ClustrixDB.
    ClustrixDB allows users to:
    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible; use MySQL drivers, tools and replication
    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution.
    Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:
    - Twoo.com: Europe’s largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR: Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack: Top 2 fastest growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip: India’s leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.
    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about.
    Read more about why Clustrix believes ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss the technical aspects of it in the future, we are all on the same page. The software can be downloaded here.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix
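
    Since the post notes that ClustrixDB is nearly plug-and-play compatible with MySQL drivers, here is a minimal sketch of what a first connectivity test could look like from Java using the standard MySQL Connector/J driver. The host, port, schema and credentials are placeholder assumptions for illustration.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ClustrixSmokeTest {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details; because ClustrixDB speaks the MySQL
                // wire protocol, the ordinary MySQL JDBC URL format should apply.
                String url = "jdbc:mysql://clustrix-host:3306/test";
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT 1")) {
                    while (rs.next()) {
                        System.out.println("Connected, got: " + rs.getInt(1));
                    }
                }
            }
        }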

    Read the article

  • Less than 50 Lines of Code to Create a Java Palette in NetBeans

    - by Geertjan
    Want to drag and drop Java code snippets into the palette, in the same way as can be done for HTML files? If so, create a new module and add a class with the content below and you're done. You'll be able to select a piece of Java code, drag it into the palette (Ctrl-Shift-8 to open it), where you'll be able to set a name, tooltip, and icons for the snippet, and then you'll be able to drag it out of the palette into any Java files you like. The palette content is persisted across restarts of the IDE.

    package org.netbeans.modules.javasourcefilepalette;

    import java.io.IOException;
    import javax.swing.Action;
    import org.netbeans.api.editor.mimelookup.MimeRegistration;
    import org.netbeans.spi.palette.DragAndDropHandler;
    import org.netbeans.spi.palette.PaletteActions;
    import org.netbeans.spi.palette.PaletteController;
    import org.netbeans.spi.palette.PaletteFactory;
    import org.openide.util.Exceptions;
    import org.openide.util.Lookup;
    import org.openide.util.datatransfer.ExTransferable;

    public class JavaSourceFileLayerPaletteFactory {

        private static PaletteController palette = null;

        @MimeRegistration(mimeType = "text/x-java", service = PaletteController.class)
        public static PaletteController createPalette() {
            try {
                if (null == palette) {
                    palette = PaletteFactory.createPalette(
                        //Folder:
                        "JavaPalette",
                        //Palette Actions:
                        new PaletteActions() {
                            @Override public Action[] getImportActions() { return null; }
                            @Override public Action[] getCustomPaletteActions() { return null; }
                            @Override public Action[] getCustomCategoryActions(Lookup lkp) { return null; }
                            @Override public Action[] getCustomItemActions(Lookup lkp) { return null; }
                            @Override public Action getPreferredAction(Lookup lkp) { return null; }
                        },
                        //Palette Filter:
                        null,
                        //Drag and Drop Handler:
                        new DragAndDropHandler(true) {
                            @Override public void customize(ExTransferable et, Lookup lkp) {}
                        });
                }
            } catch (IOException ex) {
                Exceptions.printStackTrace(ex);
            }
            return palette;
        }
    }

    In my layer file, I have this content:

    <folder name="JavaPalette">
        <folder name="Snippets"/>
    </folder>

    That's all. Run the module. Open a Java source file and the palette will automatically open. Drag some code into the palette and a dialog will pop up asking for some details like display name and icons. Then the snippet will be in the palette and you'll be able to drag and drop it anywhere you like. Use the Palette Manager, which is automatically integrated, to add new categories and show/hide palette items. Related blog entry, for which the above is a big simplification: Drag/Drop Snippets into Palette.

    Read the article

  • SOA, Cloud & Service Technology Symposium 2012 London

    - by JuergenKress
    Registration is now open, with special Oracle promotional pricing. For the exclusive Oracle discount, enter promo code DJMXZ370.

    OVERVIEW
    The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops, all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    KEYNOTES & SPEAKERS
    More than 80 international subject matter experts will be speaking at the Symposium. Below are the keynotes and speakers confirmed so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.
    - Thomas Erl, Arcitura Education: “SOA, Cloud Computing & Semantic Web Technology: The Sequel - The Era of Intelligent Service Technology”
    - Markus Zirn, Oracle: “Big Data with CEP and SOA”
    - Clemens Utschig, Boehringer Ingelheim Pharma, and Manas Deb, Oracle: “The Successful Execution of the SOA and BPM Vision”
    - Tim E. Hall, Oracle: “Community Management: The Next Wave of SOA Governance and API Management”
    Registration is now open with special pricing for Oracle: for the exclusive Oracle discount, enter promo code DJMXZ370.

    SPONSORSHIP OPPORTUNITIES
    The Symposium provides an excellent opportunity to promote your organization in the lead-up to the event, to delegates during the Symposium, and afterwards when the proceedings are made available on the Symposium web site. There are a limited number of premier sponsorship packages available, and a package can be tailored to your needs and budget. Download the Symposium Sponsorship Guide.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,SOA Cloud Service Technology Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • SQLAuthority News – Live Virtual Classroom New Trend in Technology

    - by nupurdave
    This blog post is by Nupur Dave, who is a housewife and works from home.
    Changing times and a super busy lifestyle have rendered most of us powerless when it comes to doing what we love to do. I feel that a man never ceases to learn, and his sole aim is to seek knowledge and keep growing. However, our tight schedules and packed calendars mean that we really have to struggle to take some time out and follow the path towards learning. Like all working professionals with a family to take care of, I hardly found time to pursue my interests. However, it was getting increasingly important for me to upgrade my skills, not only for my personal quest for knowledge but also to substantiate my professional standing. When I came to know about the Koenig Live Virtual Classroom from friends, it piqued my interest. I felt like it was the answer to all my concerns. Without wasting a single minute, I contacted Koenig for a demo class.
    Here are some of the highlights of Koenig LVC which instantly struck a chord with me:
    - Online Training: Koenig offers 1-on-1 online training with the instructor at the other end. It doesn’t matter where I am sitting, in my office or at home, I can connect to my trainer from anywhere.
    - Flexible Timings: The most comfortable part is you get to choose the time that suits you best.
    - Economical: No need to travel a thousand miles; the experts are right here on your computer screen. So no extra cost of travel, lodging and meals.
    - 24x7 Lab Access: This is again a great feature that proved to be very beneficial in gaining a practical understanding of the subject. Powered by a data center, this facility offers students much to look forward to.
    - 300+ Full-Time Certified Experts: Be assured that you are learning from the best people in the industry.
    - Customized Courses: Course material and training delivery is completely customized to suit your specific requirements.
    - Official Courseware: The instructor teaches from the official courseware of the vendor, depending on which course you have applied for, be it Microsoft, Cisco, Oracle or any other certification.
    - Take Exam from Anywhere: After completing your IT training, you can take your certification exam from anywhere. Again, no need to travel a thousand miles to earn certified status.
    - No Pre-Recorded Sessions: For those who still need clarification, it will be a live online classroom with trainers instructing you in real time. So you won’t get any surprise of pre-recorded sessions in place of your live instructor.
    Koenig’s Live Virtual Classroom methodology greatly exceeded my expectations. The instructor was highly skilled and very professional. I had concerns about the quality of AV on the computer screen, and whether I’d be able to understand each topic in detail. However, the quality of video and sound, and the learning methodology used, were impeccable. If you’re also facing a time crunch and other commitment issues which are getting in the way of your professional development, LVC is the best solution to learn and grow. To know more about student experiences and feedback on Koenig LVC, you can view their Testimonials.
    Reference: Nupur Dave (http://blog.sqlauthority.com) Filed under: SQL Authority

    Read the article

  • Something about Property Management or … the understanding of SharePoint Admins/roles ?!?

    - by Enrique Lima
    When I talk about SharePoint, for some reason it comes to my mind as if it were property management and all the tasks associated with it. So, imagine you have a lot (a piece of land of sorts), and you then decide there is something you want to do with it. So, you make the choice of having a building built. Now, in order to go forward with your plan, you need to check what the rules/regulations are. Has it been zoned residential, commercial, industrial … you get the idea. This, to me, sounds like Governance: the “what am I to do” given a defined set of rules.
    We keep on moving forward based on those rules. And with this we start the process of building: the building process takes us to survey the land and identify what our boundaries are. And as we go along we start getting the idea in our head as to what we will do as far as the building goes. We identify the essentials of the building, basic services and such. All in all, we plan. And as with many things we do, we like solid foundations. What a solid foundation looks like will depend on where and what we build. The way buildings are built depends in many ways on being able to foresee the potential for natural disasters or to try to leverage the lay of the land. Sound familiar? We have done our Requirements Gathering.
    We have the building in place, we have followed the zoning rules, we have implemented services. But we need someone to manage the building, so now we move on to the human side of the story. We want to establish a means to normalcy in the building, someone that can be the monitoring agent as to the “what’s going on?” of it. This person will be tasked with making sure all basic services are functional, that measures are taken if there is an issue, and so on. Enter the Farm Administrator. In a way, we establish an extension of the rules to make sure the building and the apartments/offices built follow a standard set of rules too.
    Now, in turn, you will have people leasing or buying the apartments/offices; they will be the keepers of that space. So now we are building sites: we have moved from having the building (farm) ready to leasing/selling offices/apartments (site collections). There will be someone assuming responsibility for those offices; that person will authorize or be informed about activities, and will not only get a code into the building, but perhaps a key to the office. Enter the Site Collection Administrator. And then perhaps we move on to the person that would be responsible for specifics within the office, for example a Human Resources Manager or Coordinator. They will have specific control and knowledge about people. A facilities coordinator, and so on. I would translate that into Site Administrators.
    With that said, we identify the following roles and responsibilities (not limited to these):
    - Farm Administrator: Infrastructure
    - Site Collection Admin: Policies for Content, Hierarchy, Recycle Bin, Security and Access
    - Site Owner (Site Admin): Security and Access, Training, Guidance, Manage Templates
    All in all there are different levels of responsibility to be handled, but it is very important to understand what they are and what they mean. Here is a link to a very well laid out explanation of this: http://www.endusersharepoint.com/2009/08/11/site-managers-and-end-user-expectations-roles-and-responsibilities/

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue-eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in matured markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted over the past year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group, Strategies for PFM Success, shows that 72% of users hadn’t used PFM and, worse, 58% of them were not keen on using it. Of the rest who had used it, only half did so on a bank site.
    While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expenses, they are simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and cash flow are important, but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards, help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with the relative importance of each budget category, so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget overruns and spend analysis are post facto, and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight-based decisions.
    So what can be done to resuscitate PFM?
    - Amalgamation with banking activities: In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to spendable balances. Budget- and goal-related insights should be integrated with transaction sessions to drive pre-event financial decisions.
    - Personal financial guidance: Banks need to think at ground level and see if their PFM offering is really helping customers achieve self-actualisation. Banks need to recognise that most customers out there are not proficient at making the best value of their money. Customers return when they know that they are being guided rather than just informed about their finances. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending, thereby encouraging sound savings habits.
    - Mobile PFM: Most banks have left all those numbers on online banking. With access mostly having moved to devices, and given the success of apps, moving PFM onto devices will give it a much-needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device to push real-time notifications that shape pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals in real time and push them pre-event onto the device. So next time, I should know that I have overrun my eating-out budget before walking into that burger joint, not after.
    - Increase participation and collaboration: Peer group experiences and comments are valued above those offered by the bank. Integrating social media into PFM engagement will let customers share and solicit their financial management experiences with their peer group. Peer comparisons help benchmark one’s savings and spending habits against those of the peer group and increase stickiness.
    While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into this trap. Best practices lie in profiling and segmenting customers, being where they are, and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • Questions about identifying the components in MVC

    - by luiscubal
    I'm currently developing a client-server application in node.js, Express, mustache and MySQL. However, I believe this question should be mostly language and framework agnostic. This is the first time I'm doing a real MVC application and I'm having trouble deciding exactly what each component means. (I've done web applications that could perhaps be called MVC before, but I wouldn't confidently refer to them as such.)
    I have a server.js that ties the whole application together. It does initialization of all other components (including the database connection, and what I think are the "models" and the "views"), receives HTTP requests and decides which "views" to use. Does this mean that my server.js file is the controller? Or am I mixing code that doesn't belong there? What components should I break the server.js file into? Some examples of code that's in the server.js file:

    var connection = mysql.createConnection({
        host     : 'localhost',
        user     : 'root',
        password : 'sqlrevenge',
        database : 'blog'
    });
    //...
    app.get("/login", function (req, res) { //Function handles a GET request for login forms
        if (process.env.NODE_ENV == 'DEVELOPMENT') {
            mu.clearCache();
        }
        session.session_from_request(connection, req, function (err, session) {
            if (err) {
                console.log('index.js session error', err);
                session = null;
            }
            //I named my view functions "html" for the case I might want to add other
            //output types (such as a JSON API), or should I opt for completely
            //separate views then?
            login_view.html(res, user_model, post_model, session, mu);
        });
    });

    I have another file named session.js. It receives a cookies object and reads the stored data to decide if it's a valid user session or not. It also includes a function named login that changes the value of cookies. First, I thought it would be part of the controller, since it kind of deals with user input and supplies data to the models. Then, I thought that maybe it was a model, since it deals with the application data/database and the data it supplies is used by views. Now, I'm even wondering if it could be considered a View, since it outputs data (cookies are part of HTTP headers, which are output).

    Read the article

  • Blink: Data vs. Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings make seemingly instantaneous everyday choices, in the blink of an eye, and how we “think without thinking.” These situations actually aren’t as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as:
    1. What makes some people good at thinking on their feet and making quick, spontaneous decisions?
    2. Why do some people follow their instincts and win, while others consistently seem to stumble into error?
    3. Why are some of the best decisions often those that are difficult to explain to others?
    In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last, based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables, i.e., those who have perfected the art of “thin-slicing.”
    In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies.
    If you are a sales executive, ask yourself this: do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up to date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context?
    Three key findings in the report are:
    - Six out of ten executives say they rely more on data than instinct to drive decisions.
    - Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results.
    - Senior sales executives rely more on current and historical data than on forecast data.
    Strikingly similar to what Gladwell concludes in Blink, the report’s authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, “territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report’s authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans."
    To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., a collection of accounts) for the sales force, and quotas, which quantify the sales targets. In fact, territory hierarchy is a core analytic dimension for slicing and dicing sales results, using sales analytics and alerts to help you identify where problems are occurring.
    Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • I'm a C Programmer, but I can't find a comfortable environment to work in

    - by Jesse Brands
    Hello everyone,
    Last time I asked a question, I was having issues dealing with Java, which I had to use for coursework. I generally use C for my development work, especially personal projects, and I've grown up in what is pretty much a Linux/UNIX world. In this world, it was easy to use C: you had your C compiler (GCC is excellent in that regard) and a wealth of tools such as the command line and vi/emacs/whatever-you-got. However, that was all that I really liked about Linux/UNIX. It really fitted well with the C language; nowadays, I'm somewhat forced into Windows/Mac OS X for most of my work.
    C seems poorly supported on a Mac for starters: there's no GUI API to use, and you pretty much get forced into Objective-C. This is not a problem, I like Objective-C, but it's another language I have to learn.
    Now coming to Windows. Why does everything about Windows development try to scare me away? It basically comes down to: USE C# AND .NET OR DIE. I don't like C#, I like C; they are fundamentally different. Yet when I make a Windows Forms application in MSVC++ (I know that's not C), I get a main function riddled with weird things I've never heard of before, along with a poor, barely-compliant C/C++ compiler. What am I to do when I just want to program in C and make applications that look and feel like native Windows applications? (I am a sucker for aesthetics, and I'm not looking to make something cross-platform; I just want it to work on Windows and look as native as possible.) C++ is a fine alternative, but it really looks like the only way to make a decent, native-feeling Windows application is to use C#. Am I missing something here?
    I'd rather not use Cygwin. Like I said, I want people to install the program and have it just work out of the box on Windows 7. The program in question involves a media player, if anyone is curious what I'm targeting. Has anyone had the same experiences, and can you help me out? How can I code something in ANSI C and still have a native feel?

    Read the article

  • Java EE 8 update

    - by delabassee
    Planning for Java EE 8 is now well underway. As you know, a few weeks ago we conducted a three-part Java EE 8 Community Survey (you can find the final summary here). The data gathered have been very influential for the next steps. You can now expect, over the coming weeks and months, to see updates on the various specifications that compose the Java EE platform.
    Some Specification Leads are busy gathering additional feedback regarding what they should focus their efforts on (e.g. the CDI 2 survey). Other Specification Leads have already publicly exposed what they think should be among the focuses for the evolution of the specifications they lead. For example, adding Server-Sent Events (SSE) support in JAX-RS is being discussed here, and adding MVC support is being discussed here. Please remember that the fact that we are now discussing a feature does not ensure that it will be included in the proposal, nor in any particular update to Java EE. We can expect additional enhancements, changes and evolutions as we get closer to the finalisation of the different specifications... and there is still a long way to go with these specification proposals!
    Linda DeMichiel, Java EE Co-Specification Lead, has recently posted a draft proposal for the Java EE 8 Platform specification. Linda's goal is to recruit people and companies supporting this proposal before submitting it to the JCP. This draft proposal is very interesting reading, as it contains relevant information on the plans for Java EE 8, such as the themes:
    - Support for the latest web standards (e.g. HTTP 2.0)
    - Continued work on ease of development
    - Improved infrastructure for cloud support
    - Alignment with Java SE 8
    New JSRs to be added to the platform:
    - J-Cache
    - Java API for JSON Binding
    - Java Configuration
    It also covers plans for the Web Profile, plans on technologies to prune in Java EE 8, and more. So if you haven't done it yet, I really encourage you to read the Java EE 8 draft proposal!
    Our goal for the Java EE 8 specification is for it to be finalized in the second half of 2016. It is important to note that we are in the early days of Java EE 8, and at this stage everything (themes, content, timing, etc.) is preliminary. Everything still needs to be discussed, challenged and agreed within the different Java Community Process (JCP) Expert Groups (EGs), some of which still need to be formed! It could also mean that the roadmap will have to be adjusted to follow the progress being made in the different EGs. This is also a good occasion to remind you that participation in those upcoming JCP Expert Groups is encouraged. Contributing to an EG is an effective lever to influence what Java EE 8 will become! Finally, as things get more concrete, we will share details on how to engage in the different Java EE 8 related Adopt-a-JSR initiatives, another way to contribute. You can also read other posts related to Java EE 8 here at The Aquarium blog; just look for articles with the 'javaee8' tag.
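
    For a flavor of the Server-Sent Events discussion mentioned above, here is a hypothetical sketch of what SSE support in JAX-RS could look like. The types and annotations are assumptions modeled on the API shape that was being discussed (and that later took form in JAX-RS 2.1); they are not a finalized Java EE 8 interface.

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.Context;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.sse.Sse;
        import javax.ws.rs.sse.SseEventSink;

        @Path("ticker")
        public class TickerResource {

            // Hypothetical resource that pushes three events to the client, then closes.
            @GET
            @Produces(MediaType.SERVER_SENT_EVENTS)
            public void stream(@Context SseEventSink sink, @Context Sse sse) {
                try (SseEventSink s = sink) {
                    for (int i = 1; i <= 3; i++) {
                        s.send(sse.newEvent("tick-" + i));
                    }
                }
            }
        }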

    Read the article

  • Making A Photo-Sharing App For Android In Eclipse [on hold]

    - by user3694394
    I've only just started developing mobile apps, which is something that I've been wanting to learn for a while now. I'm from an indie games studio, making PC games for around the last 3 years, and I finally decided to move into Android app development. The only problem I'm having is that I don't know where to start. The project which I'm aiming to create will be something similar to Instagram: basically a photo-sharing app which allows users to take new photos, or pull them from their device, and add filters to them, before posting them. I have a rough idea of how I could go about doing this, but I need pointing towards any tutorials available for each specific step. So, here's my idea:
    1. Create a UI in Eclipse (this won't be a problem for me; I should be able to do this fine through XML files).
    2. Set up a server-side database to store all user info and uploaded images (the images will need to be converted into byte arrays, and I have no idea how to do this through a database; a minimal sketch of one approach appears below). My best idea would be to use a MySQL database for this.
    3. Add user interactions (likes, favourites, reposts, etc.). These would, again, have to be stored in the database (or, at least, I think they would).
    4. Add the ability to take new photos using the phone's camera (I can probably do this anyway, using the Camera API).
    5. Add the ability to pull existing photos from the device (again, pretty simple to do).
    6. Add the ability to add filters to any photos (I had a look around, and there are some repos and resources which allow you to do this, but they're mainly for iOS development).
    7. Add Facebook/Twitter integration (possibly) to allow photos to be shared to other social networks.
    8. Create a news feed which shows users all of the latest photos from their friends, and allows them to post their own images to it.
    9. Give all registered users their own wall/page which has their latest posts/images displayed on it.
    10. Add the ability to allow users to follow other users, and display their followed users' posts on their news feed.
    Yep. It's not going to be easy, and I don't even know if it's possible for me to do alone in Eclipse. However, this is the plan, and I'm going to do my best to learn everything I need to know to do this successfully. My actual question would be: how should I start doing this? Where do I begin learning how to do all of this? I've had a look at snapify, which can be edited via Parse, but I won't be spending hundreds of dollars (since I'm 15 and just don't have the available funds) on software. I have extensive knowledge of Java (again, I've been making games for around 3 years, mainly in Java) and various scripting languages. So, hopefully, this will be of some use here. Thanks in advance, Josh.
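
    On the byte-array question in step 2, here is a minimal sketch of one common Android approach: compress the Bitmap to JPEG in memory and send the resulting bytes to the server for storage in a BLOB column. The class and method names are illustrative, and the server-side table layout is an assumption.

        import android.graphics.Bitmap;
        import java.io.ByteArrayOutputStream;

        public final class ImageBytes {

            private ImageBytes() {}

            // Compress a Bitmap to a JPEG byte array suitable for a BLOB column.
            public static byte[] toJpegBytes(Bitmap bitmap) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out); // 90 = JPEG quality
                return out.toByteArray();
            }
        }

    On the server side, the bytes could then go into, say, a MySQL MEDIUMBLOB column via a parameterized INSERT, with the reverse path (BLOB back to byte array, then BitmapFactory.decodeByteArray) used for display.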

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but which performed the useful function of polling every system, including all the servers in my charge. It ran a configurable script for each service that you needed to monitor that was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“Things is great”). (A toy version of such a check is sketched at the end of this post.) Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file.
    Very soon, we had a large TFT screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it. It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully, they said; particularly flesh-tints.) If you are instantly alerted when things start to go wrong, then there is a good chance you can fix a problem before being alerted to it by the users of the system.
    There is a world of difference between this sort of tool, one that gives whoever is ‘on watch’ in the server room the first warning of a potential problem on any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA brain. I like to get the early warning, and the right information to help diagnose a problem: no auto-fix, just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying.
    Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor. It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.
    Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out. Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon. SQL Server’s complexity increases faster than the panaceas can be created. In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and to provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.
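
    Here is a toy sketch of the kind of integer-returning service check described at the start of this post, written in Java against a hypothetical JDBC URL; a real check would of course measure whatever signals matter to you, and the thresholds here are arbitrary.

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class PainLevelCheck {

            // 1 = "Things is great" ... 10 = "hurtin' real bad", per the old tool's scale.
            public static int check(String jdbcUrl, String user, String password) {
                long start = System.currentTimeMillis();
                try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password)) {
                    long millis = System.currentTimeMillis() - start;
                    if (millis > 5000) return 7;  // reachable, but painfully slow
                    if (millis > 1000) return 4;  // sluggish
                    return 1;                     // healthy
                } catch (Exception e) {
                    return 10;                    // cannot connect at all
                }
            }

            public static void main(String[] args) {
                // Placeholder URL; any JDBC-reachable server works the same way.
                System.out.println(check("jdbc:sqlserver://dbserver:1433", "monitor", "secret"));
            }
        }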

    Read the article

  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to travel through Eurasia and Asia (Russia, China, Korea, Japan, South East Asia...) for a while and, although there are plenty of marvelous things to see and to do, I must keep on working :(. I am a Python developer, dedicated mainly to web projects. I use Django, sqlite3, browsers, and occasionally (only if I have no choice) I install Postgres, MySQL, Apache or any other servers commonly used on the internets. I do my coding in vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... So I spend most of my time in terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (that, after all, is not mandatory for my day-to-day work).
    I currently work with a MacBook Pro (3 years old now) and so far I am very happy with the performance. But I don't want to carry it if I am going to be "in transit" for a long time; it is simply huge and heavy for what I am planning to load in my rather small backpack (while traveling, less is more, you know). So here I am, reading all kinds of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose. I am going to use Linux on it; Microsoft is not my cup of tea, and Mac is not available for them, unless I were to buy a MacBook Air, something that I won't do because if I am robbed, or rain/dust/truck loaders break it, I would burst into tears. I am concerned about wifi performance and connectivity: I am going to use one of those Linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not in a place with real free wifi access and I find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or expensive IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider.
    With that said (sorry about the length), here comes my question, raised from a deep ignorance regarding the wars between netbooks and notebooks (I assume tablet PCs are not for programming yet): if I buy a netbook, will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK?
    Thanks!
    Hector
    Update
    I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (the Trans-Siberian) and will cross Mongolia as well. I will stay at friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey, this last one doesn't sound too bad, hehe!). I think some of your answers seem to focus on the geek part but lose the point of this "on the road" fact. I am very aware, and agree, that netbooks suck compared to notebooks, but what I am trying to do here is to find a balance and discover your experiences with netbooks, to see first hand if a netbook will fail in the mid-to-long term of the trip for my purposes.
    So I have summarized the main concepts expressed here in this small list, in no particular order:
    - Keyboard/touchpad feel: I use vim, so there's no need to move the mouse pointer that much, unless I am browsing the web, but I make intensive use of the keyboard.
    - Screen real estate: again, terminal work most of the time.
    - Battery life: I think this is very important.
    - Weight/size: also very important.
    - Looks not worth stealing; don't give a shit if it is lost/stolen/broken: this may depend on the kind of person, your economy, etc. Also, to prevent losing work, I will upload EVERYTHING to the cloud whenever I have a chance.
    - Wifi: I don't want to discover that my wifi is one of those that cannot deal with half the routers on this planet or has poor connectivity.
    Thanks again for your answers and comments!

    Read the article

  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question; I am a complete newbie to cloud programming.)
    I am interested in designing (and developing) a (free software) program in C or C++ (probably with most of it meta-programmed, i.e. part of the C code being generated). I am still in the thinking/designing phase, and I might perhaps give up. For reference, I am the main architect and implementor of GCC MELT, a domain-specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT-to-C/C++ translator is written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack.)
    I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of, e.g., an entire Linux distribution, or at least a massive GCC-compiled free software project like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL databases).
    So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I have understood that it should provide some web service, probably through a RESTful service, so it should use an HTTP server library like onion. And OpenStack is able to start (e.g. a dozen of) such services. But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, etc.
    Do you have any references or examples of C/C++ programming for the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?

    Read the article

  • What's the best way to create a static utility class in python? Is using metaclasses code smell?

    - by rsimp
    Ok, so I need to create a bunch of utility classes in Python. Normally I would just use a simple module for this, but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it, so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for Python properties. One pattern I see used a lot is creating an internal Python class prefixed with an underscore and creating a single instance, which is then explicitly imported or set as the module itself. This is also used by Fabric to create a common environment object (fabric.api.env). I've realized another way to accomplish this would be with metaclasses. For example:

        # util.py
        class MetaFooBase(type):
            @property
            def file_path(cls):
                raise NotImplementedError

            def inherited_method(cls):
                print cls.file_path

        # foo.py
        from util import *
        import env

        class MetaFoo(MetaFooBase):
            @property
            def file_path(cls):
                return env.base_path + "relative/path"

            def another_class_method(cls):
                pass

        class Foo(object):
            __metaclass__ = MetaFoo

        # client.py
        from foo import Foo

        file_path = Foo.file_path

    I like this approach better than the first pattern for a few reasons. First, instantiating Foo would be meaningless, as it has no attributes or methods; this ensures the class acts like a true single-interface utility, unlike the first pattern, which relies on the underscore convention to dissuade client code from creating more instances of the internal class. Second, subclassing MetaFoo in a different module wouldn't be as awkward, because I wouldn't be importing a class with an underscore, which inherently goes against its private naming convention. Third, this seems to be the closest approximation to a static class that exists in Python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object, which would prevent users from trying to use it as a base for other non-static classes. Its implementation as a static class is also apparent in use through the naming convention Foo, as opposed to foo, which denotes that a static class method is being used.

    As much as I think this is a good fit, I feel that others might consider it unpythonic, because it's not a sanctioned use of metaclasses, which should be avoided 99% of the time. I also find most Python devs tend to shy away from metaclasses, which might affect code reuse/maintainability. Is this code considered a code smell in the Python community? I ask because I'm creating a PyPI package and would like to do everything I can to increase adoption.

    Read the article

  • Oracle At QCon SF 2012

    - by Cassandra Clark - OTN
    Oracle Technology Network is a Platinum sponsor at QCon San Francisco (qconsf.com). Don't miss these great developer-focused sessions:

    Shay Shmeltzer: How we simplified Web, Mobile and Cloud development for our own developers? - the Oracle Story
    Over the past several years, Oracle has been developing a new set of enterprise applications in what is probably one of the largest Java-based development projects in the world. How do you take 3,000 developers and make them productive? How do you ensure the delivery of cutting-edge UIs for both mobile and web channels? How do you enable cloud-based development and deployment? Come and learn how we did it at Oracle, and see how the same technologies and methodologies can apply to your development efforts.

    Dan Smith: Project Lambda in Java 8
    Java SE 8 will include major enhancements to the Java programming language and its core libraries. This suite of new features, known as Project Lambda in the OpenJDK community, includes lambda expressions, default methods, and parallel collections (and much more!). The result will be a next-generation Java programming experience with more flexibility and better abstractions. This talk will introduce the new Java features and offer a behind-the-scenes view of how they evolved and why they work the way they do.

    Arun Gupta: JSR 356: Building HTML5 WebSocket Applications in Java
    The family of HTML5 technologies has pushed the pendulum away from rich client technologies and toward ever-more-capable web clients running on today's browsers. In particular, WebSocket brings new opportunities for efficient peer-to-peer communication, providing the basis for a new generation of interactive and "live" web applications. This session examines the efforts under way to support WebSocket in the Java programming model, from its base-level integration in the Java Servlet and Java EE containers to a new, easy-to-use API and toolset that are destined to become part of the standard Java platform.

    The full conference schedule is here: http://qconsf.com/sf2012/schedule/wednesday.jsp

    But wait, there's more! At the Oracle booth, we'll also be covering:
    - Oracle ADF Mobile
    - Oracle Developer Cloud Service
    - Oracle ADF Essentials
    - NetBeans Project Easel

    Lastly, we'll share the results of a short cloud survey at QConSF later this week. If you attended this year's Oracle OpenWorld and JavaOne conferences, it would be hard not to notice that Oracle is clearly "all-in" when it comes to the Cloud. With cloud computing being such a hot topic on many OTN members' minds, we'd like to know what you're doing in the cloud and invite you to take this short cloud survey.

    Read the article

  • Graphics trouble after resuming from hibernate or suspend

    - by Voyagerfan5761
    I have a Dell Inspiron 2650 (with NVIDIA graphics, using nouveau drivers) that I'm using to try out Ubuntu. It's all great, except that Hibernate and Suspend aren't usable. Yes, I know that questions about power-save issues are rampant in the Linux support universe, but it seems that every time I find a solution it's for a very specific hardware combination and doesn't apply to me. So anyway, here goes.

    When I resume from either power-saving mode, I'll get graphics problems ranging from a few scattered random-colored pixels that won't change, all the way to full-screen patterns that don't change as I move the mouse, hit keys on the keyboard, or even bring up the shutdown dialog using the power button. Those full-screen issues (which may involve stripes with random pixels, a partial black screen, or both) always end with me forcing the machine to shut down by holding the power button. I haven't done much testing yet to determine what severity level is most commonly associated with each mode, but I do avoid using either power-save option because of these issues.

    I'll add info on my hardware as I can gather it (no home internet connection, and this laptop is tethered to my desk by a dead battery and casing degradation). Please feel free to request something specific in the question comments.

    Hardware Info
    See this hardinfo report for my system's hardware configuration. (No, my username is not "myuser"; I sanitized hardinfo's output before publishing it.)

    Screenshots
    These screenshots are from a relatively mild occurrence, which happened after the second hibernation I took that session. The first one worked great, though I used the wireless card and Firefox heavily between the two hibernation attempts. Take a look at what happened when I opened my home directory in Nautilus and scrolled it. See below for the situations I've tested so far. The real trouble comes when the machine resumes to an unusable state; in such cases I can't even unlock the screen or properly reboot, much less take a screenshot. I have a hunch that putting a CD in the drive will cause such major failures, and I will try that at some point; see related question.

    Situations Tested

    Maverick (10.10)
    Suspend:
    - Seems to suspend nicely with nothing running
    - Seems to suspend nicely with flash drive plugged in
    - On resume from suspend with no flash drive, Terminal and gedit running: funky graphics on top of log output, then a blank screen with a pixelated cursor; no response to the power button (normally it will shut down 60 seconds later)
    Hibernate:
    - Seems to hibernate nicely with nothing running
    - Seems to hibernate nicely with a few apps (Terminal, Mouse preferences) running
    - Seems to not hibernate when a flash drive is plugged in
    - Seems to not hibernate when System Monitor is running
    - Have encountered failed hibernation (after several hours and one successful hibernate/thaw cycle) with no external media connected and no programs running except normal background stuff

    Natty LiveCD (11.04_2010-12-22)
    When I tested it, Natty wouldn't stay logged in. It played part of the login sound, and then [ OK ] appeared in the top right corner (white-on-black terminal text) for a few seconds. Then it kicked me back to the unlock screen. It did that four times before I gave up and just tested suspend from the unlock screen.
    Suspend:
    - Resumed to vertical gray and black lines 2px (?) wide, then shifted to vertical "jail bars" of black over a black screen with the above-described random pixels and mouse pointer. No apparent response to input from the mouse (clicking randomly). Keyboard and touchpad unrecognized.

    Read the article

  • Call For Papers Tips and Tricks

    - by speakjava
    This year's JavaOne session review has just been completed, and by now everyone who submitted papers should know whether they were successful or not. I had the pleasure again this year of leading the review of the 'JavaFX and Rich User Experiences' track. I thought it would be useful to write up a few comments to help people in future when submitting session proposals, not just for JavaOne, but for any of the many developer conferences that run around the world throughout the year. This also draws on conversations I recently had with various Java User Group leaders at the Oracle User Group summit in Riga. Many of these leaders run some of the biggest and most successful Java conferences in Europe.

    Try to think of a title which will sound interesting. For example, "Experiences of performance tuning embedded Java for an ARM architecture based single board computer" probably isn't going to get as much attention as "Do you like coffee with your dessert? Java on the Raspberry Pi". When thinking of the subject and title for your talk, try to steer clear of sessions that might be too generic (and so get lost in a group of similar sessions). Introductory talks are great when the audience is new to a subject, but beware of providing sessions that are too basic when the technology has been around for a while and there are lots of tutorials already available on the web.

    JavaOne, like many other conferences, has a number of fields that need to be filled in when submitting a paper. Many of these are selected from pull-down lists (like which track the session is applicable to). Check these lists carefully. A number of sessions we had needed to be shuffled between tracks when it was thought that the one selected was not appropriate. We didn't count this against any sessions, but it's always a good idea to try to get the right one from the start, just in case.

    JavaOne, again like many other conferences, has two fields that describe the session being submitted: abstract and summary. These are the most critical to a successful submission. The two fields have different names, and that is significant; a frequent mistake people make is to write an abstract for a session and then duplicate it for the summary. The abstract (at least in the case of JavaOne) is what gets printed in the show guide and is typically what attendees will use when deciding what sessions to attend. This is where you need to sell your session, not just to the reviewers, but also to the people you want in your audience. Submitting a one-line abstract (unless it's a really good one line) is not usually enough to decide whether a session is worth investing an hour of conference time. The abstract typically has a limit of a few hundred characters; try to use as many of them as possible to get as much information about your session across.

    The summary should be different from the abstract (and don't leave it blank, as some people do). This field is where you can give the reviewers more detail about things like the structure of the talk, possible demonstrations, and so on. As a reviewer, I look to this section to help me decide whether the hard sell of the title and abstract will actually be reflected in the final content. Try to make this comprehensive, but don't make it excessively long. When you have to review possibly hundreds of sessions, a certain level of conciseness can make life easier for reviewers and help the cause of your session.

    If you've not made many submissions for talks in the past, or if this is your first, try to give reviewers places to find background on you as a presenter. Having an active blog and Twitter handle can also help reviewers if they're not sure what your level of expertise is. Many calls for papers have places for you to include this type of information. It's always good to have new and original presenters and presentations at conferences. Hopefully these tips will help you be successful when you answer the next call for papers.

    Read the article

  • Should a server "be lenient" in what it accepts and "discard faulty input silently"?

    - by romkyns
    I was under the impression that by now everyone agrees this maxim was a mistake. But I recently saw this answer, which has a "be lenient" comment upvoted 137 times (as of today). In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were a few years ago, and they have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept will lead to this.

    The second part of the maxim is "discard faulty input silently, without returning an error message unless this is required by the specification", and this feels borderline offensive. Any programmer who has banged their head against the wall when something fails silently will know what I mean. So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I misinterpreting what this is supposed to mean?

    The original question said "program", and I take everyone's point about that. It can make sense for programs to be lenient. What I really meant, however, is APIs: interfaces exposed to other programs, rather than people. HTTP is an example. The protocol is an interface that only other programs use. People never directly provide the dates that go into headers like "If-Modified-Since". So, the question is: should the server implementing a standard be lenient and allow dates in several other formats, in addition to the one that's actually required by the standard? I believe "be lenient" is supposed to apply to this situation, rather than to human interfaces. If the server is lenient, it might seem like an overall improvement, but I think in practice it only leads to client implementations that end up relying on the leniency and thus fail to work with another server that's lenient in slightly different ways. So, should a server exposing some API be lenient, or is that a very bad idea?

    Now onto lenient handling of user input. Consider YouTrack (a bug tracking software). It uses a language for text entry that is reminiscent of Markdown. Except that it's "lenient". For example, writing

        - foo
        - bar
        - baz

    is not a documented way of creating a bulleted list, and yet it worked. Consequently, it ended up being used a lot throughout our internal bug tracker. The next version came out, and this lenient feature started working slightly differently, breaking a bunch of lists that (mis)used this (non)feature. The documented way to create bulleted lists still works, of course. So, should my software be lenient in what user inputs it accepts?
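    To ground the If-Modified-Since example, here is a minimal sketch of the difference between a strict and a lenient parser for such a date header, written against Java's java.time API purely for illustration; the extra ISO format accepted by the lenient variant stands in for any format the spec does not sanction:

        import java.time.ZonedDateTime;
        import java.time.format.DateTimeFormatter;
        import java.time.format.DateTimeParseException;

        public class IfModifiedSince {
            // Strict: accept only the mandated format; malformed input fails loudly.
            static ZonedDateTime parseStrict(String header) {
                return ZonedDateTime.parse(header, DateTimeFormatter.RFC_1123_DATE_TIME);
            }

            // Lenient: quietly fall back to unsanctioned formats -- exactly the behavior
            // clients come to rely on, and that other servers won't reproduce.
            static ZonedDateTime parseLenient(String header) {
                DateTimeFormatter[] candidates = {
                    DateTimeFormatter.RFC_1123_DATE_TIME,
                    DateTimeFormatter.ISO_OFFSET_DATE_TIME  // not part of the HTTP standard
                };
                for (DateTimeFormatter f : candidates) {
                    try {
                        return ZonedDateTime.parse(header, f);
                    } catch (DateTimeParseException e) {
                        // swallow and try the next format -- the silent discard in question
                    }
                }
                return null;
            }

            public static void main(String[] args) {
                System.out.println(parseStrict("Tue, 15 Nov 1994 08:12:31 GMT"));
                System.out.println(parseLenient("1994-11-15T08:12:31+00:00"));
            }
        }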

    Read the article

  • Java: How to manage UDP client-server state

    - by user92947
    I am trying to write a Java application that works similarly to MapReduce. There is a server and several workers. Workers may come and go as they please, and membership in the group has a soft state. To become part of the group, a worker must send a UDP datagram to the server; to continue to be part of the group, the worker must send a UDP datagram to the server every 5 minutes. In order to accommodate temporary errors, a worker is allowed to miss as many as two consecutive periodic UDP datagrams. So the server must keep track of the current set of workers, as well as the last time each worker sent a UDP datagram.

    I've implemented a class called WorkerListener that implements Runnable and listens for UDP datagrams on a particular UDP port. Now, to keep track of active workers, this class may maintain a HashSet (or HashMap). When a datagram is received, the server may query the HashSet to check whether it came from a new member. If so, it can add the new worker to the group by adding an entry to the HashSet. If not, it must reset a "timer" for the worker, noting that it has just heard from the corresponding worker. I'm using the word timer in a generic sense; it doesn't have to be a clock of sorts. Perhaps this could also be implemented using int or long variables.

    The server must also run a thread that continuously monitors the workers' timers, so that a worker that times out on two consecutive datagram intervals is removed from the HashSet. I don't want to do this in the WorkerListener thread because it would be blocking on the UDP datagram receive() function. If I create a separate thread to monitor the worker HashSet, it would need to be a different class, perhaps WorkerRegistrar, and I would have to share the HashSet with that thread. Mutual exclusion must also be implemented, then.

    My question is: what is the best way to do this? Pointers to a sample implementation would be great. I want to use a barebones JDK implementation, not some fancy state-maintenance API that takes care of everything, because I want this to be a useful demonstration for a class that I am teaching. Thanks!
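    As a rough illustration of the barebones-JDK approach (a sketch with invented names, not a definitive design): a ConcurrentHashMap from worker id to last-seen timestamp removes the need for explicit locking between the two threads, and a ScheduledExecutorService can play the WorkerRegistrar role, reaping workers that have missed more than two heartbeats:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class WorkerRegistry {
            private static final long PERIOD_MS = 5 * 60 * 1000;   // heartbeat interval
            private static final long TIMEOUT_MS = 3 * PERIOD_MS;  // two missed beats allowed

            // Worker id -> last time a datagram was seen; safe for concurrent access.
            private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

            // Called by the listener for every datagram; covers both the "new member"
            // and "reset the timer" cases in a single operation.
            void heartbeat(String workerId) {
                lastSeen.put(workerId, System.currentTimeMillis());
            }

            // The WorkerRegistrar role: periodically drop workers past the timeout.
            void startReaper() {
                ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();
                reaper.scheduleAtFixedRate(() -> {
                    long cutoff = System.currentTimeMillis() - TIMEOUT_MS;
                    lastSeen.entrySet().removeIf(e -> e.getValue() < cutoff);
                }, PERIOD_MS, PERIOD_MS, TimeUnit.MILLISECONDS);
            }

            // The WorkerListener role: blocks on receive(), so the reaper needs its own thread.
            void listen(int port) throws Exception {
                try (DatagramSocket socket = new DatagramSocket(port)) {
                    byte[] buf = new byte[512];
                    while (true) {
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet);
                        heartbeat(packet.getAddress().getHostAddress() + ":" + packet.getPort());
                    }
                }
            }
        }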

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regard to modularity, separation of concerns and reusability. I'm working on an application (ASP.NET, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers of them, into reusable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it.

    Developing modules of reusable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core reusable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store.

    My question is: what do you think is the best way to achieve this, first in terms of design architecture, and second in terms of solution structure? What patterns/frameworks do you know of that exist and support this kind of thing?

    My thoughts so far:

    - I mostly use Entity Framework and SQL Server DB projects.
    - I thought about a "black box" approach to modules of functionality. I could use a code-first approach in EF4 and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities my module encapsulates would be disconnected from the rest of the application, because they would belong to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible.
    - I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nice with Entity Framework (if at all), though. I like the idea of building on proven patterns and practices to encapsulate established, reusable functionality.
    - I thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures and ADO.NET, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module, I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext.

    Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
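    For what it's worth, the "register a module and it ensures its schema exists" idea reduces to a very small contract. The sketch below is illustrative only: it is shown in Java/JDBC with an in-memory H2 database rather than the actual EF4/SQL Server stack from the question, and every interface, table and DDL name is invented; real SQL Server DDL would guard with an IF OBJECT_ID(...) IS NULL check instead of the generic CREATE TABLE IF NOT EXISTS used here:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.List;

        // Hypothetical contract: a module owns its schema and creates it idempotently
        // at registration time, like the EntLib blocks' data stores.
        interface AppModule {
            String name();
            void ensureSchema(Connection db) throws SQLException;
        }

        class CachingModule implements AppModule {
            public String name() { return "caching"; }

            public void ensureSchema(Connection db) throws SQLException {
                try (Statement s = db.createStatement()) {
                    // Idempotent: safe to run at every application startup.
                    s.execute("CREATE TABLE IF NOT EXISTS cache_entries ("
                            + "cache_key VARCHAR(256) PRIMARY KEY, "
                            + "payload BLOB, expires_at TIMESTAMP)");
                }
            }
        }

        public class ModuleRegistry {
            public static void main(String[] args) throws SQLException {
                List<AppModule> modules = List.of(new CachingModule());
                // Assumes the H2 driver is on the classpath (in-memory DB for the demo).
                try (Connection db = DriverManager.getConnection("jdbc:h2:mem:app")) {
                    for (AppModule m : modules) {
                        m.ensureSchema(db);  // past this point, code may assume the tables exist
                        System.out.println("registered module: " + m.name());
                    }
                }
            }
        }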

    Read the article
