Search Results

Search found 14108 results on 565 pages for 'language detection'.


  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many paper-based processes into web-based workflows for its 1.6 million customers. LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content, with Oracle SOA Suite for the integration of its complex back-end systems infrastructure. The new portal has received extremely positive feedback, not only from the customers and users of the portal but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

    Company Overview
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.

    Business Challenges
    The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, and that serves as the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many paper-based processes into web-based workflows for customers. This includes automation of:
    - Self-service implemented through My Account (bill pay, payment history, bill history, usage analysis, service request management)
    - Financial assistance programs
    - Customer rebate programs
    - Turn off/turn on/transfer of services
    - Outage reporting
    - eNotification (SMS, email)

    Solution Deployed
    LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content. Using Oracle SOA Suite, they integrated various back-end systems, including:
    - Oracle Siebel CRM
    - IBM mainframe-based CIS
    - FileNet for document management
    - EBP Electronic Bill Payment system
    - HP Imprint system for BillXML data
    - Other systems, including outage reporting systems, SMS service, etc.

    The new portal's features include:
    - Complete graphical redesign based on UI design best practices for high usability
    - Customer self-service implemented through My Account (bill pay, payment history, bill history, usage analysis, service request management)
    - Financial assistance programs (CRM, WebCenter)
    - Customer rebate programs (CRM, WebCenter)
    - Turn on/off/transfer of services (commercial and residential)
    - Outage reporting
    - eNotification (SMS, email)
    - Multilingual (English and Spanish) – using WebCenter multi-language support
    - Section 508 (ADA) compliant
    - Search – using Oracle SES (Secure Enterprise Search)
    - Distributed authorship in WebCenter Content
    - Mobile access (any mobile browser)

    Business Results
    The new portal has received extremely positive feedback, not only from customers and users of the portal but also from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

    Additional Information
    - LADWP OpenWorld presentation
    - Oracle WebCenter Portal
    - Oracle WebCenter Content
    - Oracle SOA Suite

    Read the article

  • Tap into MySQL's Amazing Performance Results with the Performance Tuning Course

    - by Antoinette O'Sullivan
    Want to leverage the high-speed load utilities, distinctive memory caches, full-text indexes, and other performance-enhancing mechanisms that MySQL offers to fuel today's critical business systems? The authentic MySQL Performance Tuning course, in 4 days, teaches you to evaluate the MySQL architecture, use the tools, configure the database for performance, tune application and SQL code, tune the server, examine the storage engines, assess the application architecture, and learn general tuning concepts.

    You can take this course in one of the following three ways:
    - Training-on-Demand: Access streaming-video instructor delivery of this course from your own desk, at your own pace. Book time for hands-on practice when it suits you.
    - Live-Virtual Class: Take this instructor-led class live from your own desk. With 700 events on the schedule, you are sure to find a time and date to suit you!
    - In-Class: Travel to a classroom to take this class. A sample of events on the schedule follows.

    Location | Date | Delivery Language
    Hamburg, Germany | 22 October 2012 | German
    Prague, Czech Republic | 1 October 2012 | Czech
    Warsaw, Poland | 3 December 2012 | Polish
    London, England | 19 November 2012 | English
    Rome, Italy | 23 October 2012 | Italian
    Lisbon, Portugal | 6 November 2012 | European Portuguese
    Aix-en-Provence, France | 4 September 2012 | French
    Strasbourg, France | 16 October 2012 | French
    Nieuwegein, Netherlands | 26 November 2012 | Dutch
    Madrid, Spain | 17 December 2012 | Spanish
    Mechelen, Belgium | 1 October 2012 | English
    Riga, Latvia | 10 December 2012 | Latvian
    Petaling Jaya, Malaysia | 10 September 2012 | English
    Edmonton, Canada | 10 December 2012 | English
    Vancouver, Canada | 10 December 2012 | English
    Ottawa, Canada | 26 November 2012 | English
    Toronto, Canada | 26 November 2012 | English
    Montreal, Canada | 26 November 2012 | English
    Mexico City, Mexico | 10 September 2012 | Spanish
    Sao Paulo, Brazil | 26 November 2012 | Brazilian Portuguese
    Tokyo, Japan | 19 November 2012 | Japanese

    For further information on this class, or to register your interest in additional events, go to the Oracle University Portal: http://oracle.com/education/mysql

    Read the article

  • Executing server validators first before OnClientClick Javascript confirm/alert

    - by kaushalparik27
    I got to answer a simple question over the community forums. Consider this: suppose you are developing a web page with a few input controls and a submit button. You have placed some server validator controls, like RequiredFieldValidator, to validate the inputs entered by the user. Once the user fills in all the details and tries to submit the page via the button click, you want to alert/confirm the submission with something like "Are you sure to modify above details?". You would consider using JavaScript on the click of the button. Everything seems simple and you are almost done. BUT, when you run the page, you will see that the JavaScript alert/confirm box executes first, before the server validators try to validate the input controls! Well, this is expected behaviour. BUT, this is not what you want. Then? The simple answer is: call Page_ClientValidate() in the JavaScript where you are alerting the submission. Below is a JavaScript example:

        <script type="text/javascript" language="javascript">
            function ValidateAllValidationGroups() {
                if (Page_ClientValidate()) {
                    return confirm("Are you sure to modify above details?");
                }
            }
        </script>

    The Page_ClientValidate() function tests all server validators and returns a Boolean value depending on whether the page meets the defined validation criteria or not. In the above example, the confirm alert will only pop up if Page_ClientValidate() returns true (if all validations pass). You can also specify a ValidationGroup inside this function, as Page_ClientValidate('ValidationGroup1'), to validate only a specific group of validators in your page:

        function ValidateSpecificValidationGroup() {
            if (Page_ClientValidate('ValidationGroup1')) {
                return confirm("Are you sure to modify above details?");
            }
        }

    I have attached a sample example with this post here demonstrating both the above cases (a sketch of how the button can be wired up follows below). Hope it helps./.
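    For completeness, here is a hedged sketch of how the button itself might be wired up so that the function's return value actually cancels the postback; the control ID and server-side handler name are illustrative, not taken from the attached sample:

        <!-- Hypothetical markup: returning false from OnClientClick cancels the postback -->
        <asp:Button ID="btnSubmit" runat="server" Text="Submit"
            OnClientClick="return ValidateAllValidationGroups();"
            OnClick="btnSubmit_Click" />

    The key detail is the return keyword inside OnClientClick: without it, the confirm dialog's result is ignored and the page posts back regardless of what the user clicked.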

    Read the article

  • Answers to Your Common Oracle Database Lifecycle Management Questions

    - by Scott McNeil
    We recently ran a live webcast on strategies for managing Oracle Database's lifecycle. There were tons of questions from our audience that we simply could not get to during the hour-long presentation. Below are some of those questions along with their answers. Enjoy!

    Question: In the webcast the presenter talked about "gold" configuration standards. For those who want to use this technique, could you recommend a best practice to consider or follow? How do I get started?
    Answer: Gold configuration standardization is a quick and easy way to improve availability through consistency. Start by choosing a reference database and saving the configuration to the Oracle Enterprise Manager repository using the Save Configuration feature. Next, create a comparison template using the Oracle-provided template as a starting point and modify the ignored properties to eliminate expected differences in your environment. Finally, create a comparison specification using the comparison template you created plus your saved gold configuration, and schedule it to run on a regular basis. Don't forget to fill in the email addresses of those you want to notify upon drift detection. Watch the database configuration management demo to learn more.

    Question: Can Oracle Lifecycle Management Pack for Database help with patching an Oracle Real Application Clusters (RAC) environment?
    Answer: Yes, Oracle Enterprise Manager supports both parallel and rolling patch application for Oracle Real Application Clusters. Rolling patching is recommended as there is no downtime involved. For more details watch this demo.

    Question: What are some of the things administrators can do to control configuration drift? Why is it important?
    Answer: Configuration drift is one of the main causes of instability and downtime for applications. Oracle Enterprise Manager makes it easy to manage and control drift using scheduled configuration comparisons combined with comparison templates.

    Question: Does Oracle Enterprise Manager 12c Release 2 offer an incremental update feature for "gold" images? For instance, if the source binary has a higher PSU level, what is the best approach to update the existing "gold" image in the software library? Do you have to create a new image or can you just update the original one?
    Answer: Provisioning profiles (gold images) can contain the installation files and database configuration templates. Although it is possible to make some changes to the profile after creation (mainly to configuration), it is normally recommended to simply create a new profile after applying a patch to your reference database.

    Question: The webcast talked about enforcing in-house standards. Does Oracle Enterprise Manager 12c offer verification of your databases and systems against those standards? For example, the initial "gold" image has been massively deployed over time, and there may be some changes to it. How can you do regular checks from Enterprise Manager to ensure the in-house standards are being enforced?
    Answer: There are really two methods to validate conformity to standards. The first method is to use gold standards, comparing other databases against them to report unwanted differences. This method uses a new comparison template technology which allows users to ignore known differences (i.e. SID, start time, etc.), resulting in a report showing only important or non-conformant differences. This method is quick to set up and configure, and is recommended for those who want to get started validating compliance quickly. The second method leverages the new compliance framework, which allows the creation of specific and robust validations. These compliance rules are grouped into standards which can be assigned to databases quickly and easily. Compliance rules allow for targeted and more sophisticated validation beyond the basic equals operation available in the comparison method. The compliance framework can be used to implement just about any internal or industry standard. The compliance results track current and historic compliance scores at the overall and individual database target levels. When an issue is resolved, the score is automatically adjusted. The compliance framework is the recommended long-term solution for validating compliance using Oracle Enterprise Manager 12c. Check out this demo on database compliance to learn more.

    Question: If you are using the integration between Oracle Enterprise Manager and My Oracle Support in "offline" mode, how do you know if you have the latest My Oracle Support metadata?
    Answer: In Oracle Enterprise Manager 12c Release 2, you now only need to download one zip file containing all of the metadata XML files. There is no indication that the metadata has changed, but you could run a checksum on the file and compare it to the previously downloaded version to see if it has changed (a small shell sketch appears at the end of this Q&A list).

    Question: What happens if a patch fails while administrators are applying it to a database or system?
    Answer: A large portion of Oracle Enterprise Manager's patch automation consists of the prerequisite checks that run to ensure the highest level of confidence that the patch will apply successfully. It is recommended that you test the patch in a non-production environment and save the patch plan as a template once successful, so you can create new plans from the saved template. If you are using the recommended 'out of place' patching methodology, there is no urgency because the database is still running while the cloned Oracle home is being patched. Users can address the issue and restart the patch procedure at the point it left off. If you are using the 'in place' method, you can address the issue and continue where the procedure left off.

    Question: Can Oracle Enterprise Manager 12c R2 compare configurations between more than one target at the same time?
    Answer: Oracle Enterprise Manager 12c can compare any number of target configurations at one time. This is the basis of many important use cases, including configuration drift management. These comparisons can also be scheduled on a regular basis, with email notifications sent should any differences appear. To learn more about configuration search and compare, watch this demo.

    Question: How is data comparison done, since changes are taking place in a live production system?
    Answer: There are many things to keep in mind when using the data comparison feature (part of the Change Management ability to compare table data). It was primarily intended for maintaining consistency of important but relatively static data, for example application seed data and application setup configuration. This data does not change often but is critical when testing an application to ensure results are consistent with production. It is not recommended to use data comparison on highly dynamic data like transactional tables or very large tables.

    Question: Which versions of Oracle Database can be monitored through Oracle Enterprise Manager 12c?
    Answer: Oracle Database versions 9.2.0.8, 10.1.0.5, 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, 11.2.0.2, and 11.2.0.3.
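    As promised above, a minimal shell sketch of the checksum comparison suggested for the offline metadata question; the file names are illustrative, not the actual metadata bundle names:

        # Compare the newly downloaded metadata zip against last month's copy;
        # if the two hashes differ, the metadata has changed and should be imported
        sha256sum em_metadata_new.zip em_metadata_previous.zip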
    Watch the On-Demand Webcast
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by conventional relational database management systems (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these NoSQL tools combined with traditional tools such as SQL-based relational databases, warehouses, and other business intelligence tools.

    Drivers
    - Petabyte-scale data collection and storage
    - Business intelligence and insight

    Solution
    The sketch below shows one of many big data solutions, using Hadoop's unique highly scalable storage and parallel processing capabilities combined with Microsoft Office's business intelligence components to access the data in the cluster.

    Ingredients
    - Hadoop – this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
      - Pig – a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
      - Mahout – a machine learning library with algorithms for clustering, classification, and batch-based collaborative filtering, implemented on top of Apache Hadoop using the map/reduce paradigm.
      - Hive – data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
      - Pegasus – a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
      - Sqoop – a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases (a sample invocation appears at the end of this recipe).
      - Flume – a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    - Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop; data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    - Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training
    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    - Hadoop Learning Resources (20+ tutorials and labs) – a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
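    To make the Sqoop ingredient concrete, here is a minimal, hedged sketch of a bulk import; the connection string, credentials, table name, and target directory are placeholders, not values from any real environment:

        # Hypothetical: copy a SQL Server table into HDFS in parallel
        # (-P prompts for the database password at run time)
        sqoop import \
            --connect "jdbc:sqlserver://dbserver:1433;databaseName=Sales" \
            --username loader -P \
            --table Orders \
            --target-dir /data/sales/orders \
            --num-mappers 4

    The reverse direction (sqoop export) pushes aggregated results from HDFS back into the relational store for reporting.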

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes in correlation with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, in that this model is seen as cumbersome and slow moving.

    Waterfall Model Phases

    Requirement Analysis & Definition
    This phase focuses on defining requirements for a project that is to be developed and determining if the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine if any value will be gained by completing the project.

    System Design
    This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions like hardware and platform choices are typically made in this phase. Furthermore, the basic goal of this phase is to design the application at the system level, in which classes, interfaces, and interactions are defined. Additionally, scalability, distribution, and reliability should be considered in all decisions. The desired output for this phase is a functional design document that states all of the architectural decisions that have been made in regard to the project, as well as diagrams like sequence and class diagrams.

    Software Design
    This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

    Implementation / Coding
    This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and integrated as the system is put together as part of a whole system.

    Software Integration & Verification
    This phase primarily focuses on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration testing tests the system as a whole, because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results.

    System Verification
    This phase primarily focuses on testing the system as a whole in regard to the list of project requirements and the desired operating environment.

    Operation & Maintenance
    This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase will also validate the correctness of their requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase will be handled in this section.

    The Waterfall Model's linear and sequential methodology does offer a project certain advantages and disadvantages.

    Advantages of the Waterfall Model
    - Simple to implement and execute for projects and/or company-wide
    - Limited demand on resources
    - Large emphasis on documentation

    Disadvantages of the Waterfall Model
    - Completed phases cannot be revisited, regardless of whether issues arise within a project
    - Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    - Small changes or errors that arise in applications may cause additional problems
    - The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes as their requirements change along with their desires
    - Excess documentation
    - Phases are cumbersome and slow moving

    Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples and subsequent changes are made to quickly align the project with the customer's desires as they are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become more refined. This process allows the customer to feel their way around the application to ensure that they are developing exactly what they want. This model also works well for determining the feasibility of certain approaches in regard to an application. Prototypes allow for quickly developing examples of implementing specific functionality based on certain techniques.

    Advantages of Prototyping
    - Active participation from users and customers
    - Allows customers to change their mind in specifying requirements
    - Customers get a better understanding of the system as it is developed
    - Earlier bug/error detection
    - Promotes communication with customers
    - The prototype could be used as the final production system
    - Reduced time needed to develop applications compared to the Waterfall method

    Disadvantages of Prototyping
    - Promotes constantly redefining project requirements, which causes major system rewrites
    - Potential for increased complexity of a system as the scope of the system expands
    - The customer could believe the prototype is the working version
    - Implementation compromises could increase complexity when applying updates and/or application fixes

    When companies are trying to decide between the Waterfall Model and the Prototyping Model, they need to evaluate the benefits and disadvantages of both models. Typically, smaller companies or projects that have major time constraints head for more of a Prototyping approach, because it can reduce the time needed to complete the project, as there is more focus on building the project and less on defining requirements and scope prior to the start. On the other hand, companies with well-defined requirements and time allowed to generate proper documentation should steer towards more of a Waterfall Model, because they are in a position to obtain clarified requirements and design an optimal solution prior to the start of coding.

    Read the article

  • Spotlight on an office - Dublin!

    - by Tim Koekkoek
    In this third instalment of our monthly topic 'Spotlight on an Office', we visit Dublin, Ireland. Oracle has 5 offices in Dublin, all in the EastPoint Business Park close to Dublin city centre. In Dublin there are currently 1,000 people working for Oracle. You'll find, among others, a large part of OracleDirect, our inside sales organization, part of our EMEA Finance organization, and employees from Product and Systems Development who work on the heart of Oracle's products.

    Facilities
    EastPoint Business Park is located next to the Irish Financial Services Centre (IFSC) and is only one train stop away from Dublin city centre. This seafront business park and nearby amenities cater for staff's needs, which include a sandwich bar, a coffee shop, and a small convenience store and newsagent. Moreover, there is a physical therapy clinic and beauty salon onsite, Pilates and boot camp classes, weekly WeightWatchers classes, five football/tennis courts, and an outdoor chess board.

    When the sun is shining
    On sunny days, comfy, colourful beanbags are spread throughout the park to relax on, and every Wednesday the Irish Village Market provides staff with a variety of delicious gourmet foods from all over the world. Friday afternoons after work are often used by Oracle employees to start the weekend socializing in The Epicenter Cafe Bar & Venue.

    In the office
    In the Oracle offices, you have an open floor design and an open door policy, which makes it really easy to walk over to your colleagues or a manager to discuss your projects and keep informed about what is going on. This way you also have a great chance to bond with your colleagues. In two of the Oracle buildings there are subsidized canteens especially for Oracle employees, with chefs cooking something special every day! One of the best things about Oracle in Dublin is that it is really multinational. Currently there are more than 25 languages spoken by Oracle employees, so you will work with colleagues from all around the globe, every day, which makes it a really interesting and exciting experience.

    Sport & Social
    There is also a dedicated sport and social club, Oraclub. They organize many sport and social activities. It doesn't matter which sport is your favourite: Oraclub caters for like-minded individuals and makes sure you can play or watch your favourite sport. Furthermore, Oraclub organizes exhibition matches to get you acquainted with other sports. Last year the Gaelic Warriors (a wheelchair rugby club) held an exhibition match. Oraclub also offers Oracle parties and language courses, and offers discounts on many events! So whether you want to go to a Robbie Williams concert, an exhibition of Van Gogh, or a match of the Irish rugby team, Oraclub is there for everyone! There are also plenty of possibilities to get involved in volunteering.

    Want to know more about the current vacancies in Dublin? Check https://campus.oracle.com for all of our vacancies.

    Read the article

  • Can't fix broken packages

    - by AWE
    I am too dumb but determined to use Ubuntu, so I paid a professional to install it for me (dual-boot 11.10 with Win7). When I came home I got a lot of things from the Software Center. Skype did not have a download button, so I googled it and Ubuntu help told me to do this:

        sudo add-apt-repository "deb http://archive.canonical.com/ $(lsb_release -sc) partner"

    and then this:

        sudo apt-get update && sudo apt-get install skype

    The terminal told me "that this is potentially harmful..." but I thought it was Ubuntu language meaning "are you sure?" Now the computer is mute. Items cannot be installed or removed until the package catalog is repaired, so I want to repair it, but the package operation fails.

        sudo aptitude -f install

    gives "command not found". Synaptic Package Manager tells me that I have two broken packages, libc6 and libc6-dev, so I do this:

        sudo apt-get update && sudo apt-get upgrade

    which tells me to do this:

        sudo apt-get -f install

    which ends up like this:

        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    When fixing broken packages in Synaptic Package Manager I get this:

        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        E: Sub-process /usr/bin/dpkg returned an error code (2)
        A package failed to install. Trying to recover:
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

    I want to become a Linux geek, but it is harder than I thought. Please help!

    Read the article

  • Oracle Applications Strategy Day

    - by Oracle Aplicaciones
    On November 8, Oracle and ESIC held the latest edition of the Oracle Applications Strategy Day roadshow, which visited the cities of Barcelona, Madrid, Valencia, and Seville, in collaboration with our partners: Arin Innovation, GFI, Golive, Neteris, Oracle+Cerca, Qualita, SDG Consulting, Steltix, Steria, Tactic, and Vass. The event assessed the impact of changes in the business, the increasing volatility of information, and the latest technologies in the current environment. With its exclusive format of presentations + panel discussions + individual advisory sessions, all attendees had the opportunity to share experiences and best practices with industry experts as well as with the other attendees. The following links provide access to the presentations:
    - The Company of the Future: New technologies and their integration with Marketing and Corporate Strategy. Professor Javier G. Recuenco.
    - Applications Strategy for SMEs. Ricardo Martinez, Director of Applications for the midsize market.
    A video summary of the event:

    Read the article

  • What type of interview questions should you ask for "legacy" programmers?

    - by Marcus Swope
    We have recently been receiving lots of applicants for our open developer positions from people who I like to refer to as "legacy" programmers. I don't like the term "old" because it seems a little prejudiced (especially to HR!) and it doesn't accurately reflect what I mean. We are a company that does primarily .NET development using TDD in an Agile environment, we use Git as a source control system, we make heavy use of OSS tools and projects and we contribute to them as well, and we have a strong bias towards adhering to strong object-oriented principles, SOLID, etc., etc., etc...

    Now, the normal list of questions that we ask doesn't really seem to apply to applicants that are fresh out of school, nor does it seem to apply to these "legacy" programmers. Here is how I (loosely) define a "legacy" programmer:
    - Spent a significant amount of their career working almost exclusively with Assembly/Machine Languages.
    - Primary accomplishments include work done with TANDEM systems.
    - Has extensive experience with technologies like FoxPro and ColdFusion.

    It's not that we somehow think that what we do is "better" than what they do; on the contrary, we respect these types of applicants and we are scared that we may be missing a good candidate. It is just very difficult to get a good read on someone who is essentially speaking a different language than you. To someone like this, it seems a little strange to ask a question like: What is the difference between an abstract class and an interface? (A quick illustration follows at the end of this question.) Because I would think that they would almost never know the answer or even what I'm talking about. However, I don't want to eliminate someone who could be a very good candidate in their own right and could eventually learn the stuff that we do. But I also don't want to just ask a bunch of behavioral questions, because I want to know about their technical background as well.

    Am I being too naive? Should "legacy" programmers like this already know about things like TDD, source control strategies, and best practices for object-oriented programming? If not, what questions should we ask to get a good representation of whether or not they are still able to learn them and keep up in our fast-paced environment?

    EDIT: I'm not concerned with whether or not applicants that meet these criteria are in general capable or incapable, as I have already stated that I believe they can be 100% capable. I am more interested in figuring out how to evaluate their talents, as I am having a hard time figuring out how to determine if they are an A+ "legacy" programmer or a D- "legacy" programmer. I've worked with both.
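    For readers outside the OO world, a hedged Java illustration of the interview question mentioned above; the shapes are generic teaching examples, not items from our actual question list:

        // An interface specifies only a contract: no state, no implementation (pre-Java-8)
        interface Drawable {
            void draw();
        }

        // An abstract class can hold state and partial implementation,
        // but cannot be instantiated directly
        abstract class Shape implements Drawable {
            protected String name;

            Shape(String name) { this.name = name; }

            // Shared behaviour inherited by all subclasses
            public String getName() { return name; }

            // Subclasses must still supply draw()
            public abstract void draw();
        }

        class Circle extends Shape {
            Circle() { super("circle"); }
            public void draw() { System.out.println("drawing a " + name); }
        }

    A class may implement many interfaces but extend only one abstract class, which is usually the other half of the expected answer.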

    Read the article

  • 2010 is gone and Welcome 2011

    - by anirudha
    In recent days I spent my week at Firozabad. The town is quite small and near to Agra, so I did not want to miss the chance to see the Taj Mahal and the Red Fort; it was my first chance to see them. I made a plan to go to Agra last Saturday. First I went to the Red Fort, where I talked with many foreigners. They love talking with me, because the only person usually with them is their guide: a person like a book, who never really talks with you but tells you about everything at the location, because you have paid them. There were many people from various countries, such as Germany, Japan, Russia, Italy, and many others. There was no problem talking with them; in fact, they were happy to talk to me.

    When I had finished seeing the Red Fort, I saw a girl who looked like a foreigner. I asked where they came from, and they told me France. As I was moving on, I thought of proposing that we be friends. I had never proposed friendship to any girl, even in school and college, but I proposed to be a friend of theirs. They accepted, and I put my email ID in their hand as they left, but I still have not got their mail.

    Second, I went to the Taj Mahal. The Taj experience was not so good; I spent 3 or 4 hours in the rush. I found there was no real security, even though many army personnel were present. Everyone there works very slowly; they spent 10 minutes checking one person for security. Their hands work very slowly, just like a low-configuration computer. I talked to many people there too, including a person who called himself Jacob, from Chicago. He spoke very fast, and I could not follow what he said. Another problem I had was with some Chinese visitors: when I tried talking with them, I found they spoke only Chinese.

    Wish you a very, very happy new year.

    Read the article

  • June IOUG events

    - by Mandy Ho
    Independent Oracle User Group (IOUG) Regional Events:

    June 11-12, 2012 – Broomfield, CO
    2-Day Seminar: "High Performance PL/SQL & Oracle Database 11g New Features"
    Steven Feuerstein, generally considered the world's leading PL/SQL expert, will be presenting his all-new, 2-day "Higher Performance PL/SQL and Oracle 11g PL/SQL New Features" seminar on June 11 & 12 at Level 3 Communications in Broomfield, Colorado. This will be Steven's first Denver seminar in almost 4 years. Who knows when he will offer another? http://www.rmoug.org/

    June 14, 2012 – Ottawa, Ontario
    Pythian's Gwen Shapira puts on 3 great presentations focused on NoSQL, making OLTP run fast, and Big Data. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:1317735724699447::NO

    June 21, 2012 – Calgary, Alberta
    Big Data and Extreme Analytics Summit. http://coug.ab.ca/

    June 22, 2012 – Westborough, MA
    "10 Things You Probably Did Not Know" with Tom Kyte. PL/SQL turns 23 years old this year; it was first introduced in 1988 with Oracle6 Database. This session looks at technical things about PL/SQL you probably did not know: under-the-covers features that make PL/SQL quite simply the most efficient language with which to process data in the database. http://noug.com/

    June 28-29, 2012 – Plano, Texas
    Jonathan Lewis Oracle Performance Seminars. The DOUG (Dallas Oracle Users Group) has invited SpeakTech to return to Dallas, and they're bringing Jonathan Lewis! Topics are "Beating the Oracle Optimizer" (June 28, 2012) and "Trouble Shooting & Tuning" (June 29, 2012). http://www.eventbrite.com/event/3082448687

    Read the article

  • Commit Review Questions

    - by Wes McClure
    Note: in this article when I refer to a commit, I mean the commit you plan to share with the rest of the team; if you have local commits that you plan to amend/combine, I am referring to the final result. In time you will find these easier to do as you develop; however, all of these are valuable before checking in! The pre-commit review is a nice time to polish what might have been several hours of intense work, during which these things were the last things on your mind! If you are concerned about losing your work in the process of responding to these questions, first do a check-in and amend it as you go (assuming you are using a tool such as git that supports this), rolling the result into one nice commit for everyone else (a sketch of this workflow follows the checklist).

    - Did you review your commit, change by change, with a diff utility?
      - If not, this is a list of reasons why you might want to start!
    - Did you test your changes?
      - If the test is valuable enough to be automated, is it?
      - If it's a manual testing scenario, did you at least try the basics manually?
    - Are the additions/changes formatted consistently with the rest of the project?
      - Lots of automated tools can help here; don't try to manually format the code. That's a waste of time, and as a human you will fail repeatedly.
      - Are these consistent: tabs versus spaces, indentation, spacing, braces, line breaks, etc.?
      - Resharper is a great example of a tool that can automate this for you (.NET).
    - Are naming conventions respected?
      - Did you accidentally use abbreviations, unless you have a good reason to use them?
      - Does capitalization match the conventions in the project/language?
    - Are files partitioned?
      - Sometimes we add new code in existing files in a pinch; it's a good idea to split these out if they don't belong, i.e.: are new classes defined in new files, if this is something your project values?
    - Is there commented-out code?
      - If you are removing an existing feature, get rid of it; that is why we have VCS.
      - If it's not done yet, then why are you checking it in? Perhaps a stash commit (git)?
    - Did you leave debug or unnecessary changes?
    - Do you understand all of the changes? http://geekswithblogs.net/wesm/archive/2012/04/11/programming-doesnrsquot-have-to-be-magic.aspx
    - Are there spelling mistakes?
      - Including your commit message!
    - Is your commit message concise?
    - Is there follow-up work?
      - Are there tasks you didn't write down that you need to follow up on?
      - Are readability or reorganization changes needed? This might be amended into the final commit, or it might be future work that needs to be added to the backlog.
    - Are there other things your team values that you should review?
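    As a companion to the note above about amending as you go, here is one hedged sketch of that git workflow; the exact commands are a common pattern, not something prescribed by the article:

        # Snapshot work-in-progress so nothing is lost while polishing
        git add -A
        git commit -m "WIP: feature X"

        # Keep folding review fixes into that same commit as you respond to the checklist
        git add -A
        git commit --amend --no-edit

        # When done, rewrite the message into one concise, final commit for the team
        git commit --amend

    Only amend commits you have not yet pushed or shared; rewriting published history creates conflicts for teammates.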

    Read the article

  • Oracle Tutor: *** CAUTION to Word .docx Users ***

    - by [email protected]
    Microsoft released a security update, KB969604, for Office 2007 (around June 2009). This update causes document variables within Word docx files to be scrambled, and it might still be pushed out via Office 2007 updates. DO NOT save files as docx using MS Office 2007 until you apply the MS hotfix #970942, available here.

    If you are using Windows XP with Office 2003 or Office 2000 and have installed an older Office 2007 compatibility pack, documents saved as docx may also end up with scrambled document variables. Installing the 2007 compatibility pack published on 1/6/2010 (version 4) will prevent the document variables from becoming corrupt. Those on Windows 2000 may not be able to install the latest compatibility pack, or the compatibility pack may not function properly. This situation will hopefully be rectified in the coming months.

    What is a document variable?
    Document variables store data inside the document, invisible to the user. The Tutor software uses them when converting the document to HTML and when creating the flowchart, just to name a couple of uses.

    How will you know if a document's variables are scrambled?
    The difficulty in diagnosing the issue is that the symptoms can take myriad forms. There isn't a single error message or a single feature that one can point to and say, "test for the problem by doing this." The best clue about the error is seeing any kind of string in an error message that has garbage characters, question marks, XML code snippets, or just nonsense, such as "Language ?????????????xlr;lwlerkjl could not be found." It is also possible to see the corrupted data in the footers of the Word docs. And just because the footers look correct does not mean that the document variables are not corrupted. The corruption problem does not occur in every document variable in the document, just some of them; often it is less than a quarter of them.

    What is the difference between docx files and doc files?
    Office 2007 uses Office Open XML formats with .docx and .docm filename extensions.
    - Docx is an Office Open XML word document.
    - Docm is a macro-enabled Office Open XML document.
    This means the file structure behind the scenes is quite different from the binary file formats used prior to Office 2007, such as .doc, .dot, .xls, and .ppt.

    Solution Summary:
    - For Windows XP and Word 2007: install the hotfix, or save files as *.doc
    - For Windows XP and Word 2000 or 2003: install the latest compatibility pack, or save files as *.doc
    - For Windows 2000 with Word 2000 or 2003: do not use any compatibility pack; save files as *.doc

    Emily Chorba
    Principal Product Manager for Oracle Tutor

    Read the article

  • Should CSS be listed on your resume under Languages?

    - by Sandeepan Nath
    I have some doubts, like whether CSS should be put under Languages or not. Wikipedia says "Cascading Style Sheets (CSS) is a style sheet language...", but do people write CSS under the languages section of the resume, along with PHP, etc.? Similarly, what about HTML? I have some doubt and I don't want to sound like someone who is not aware of the trends.

    Just to give an example, currently I have the following languages, frameworks, technologies, etc. listed under the "Technical Expertise" section of my resume:

    Technical Expertise
    * Languages - Proficient: PHP 5, Javascript, HTML ?*, CSS ?*, Sass ?*. Beginner: Linux Bash.
    * Databases - MySQL 5.
    * Technologies - AJAX.
    * Frameworks/Libraries - Symfony, jQuery.
    * CMSes - Wordpress.

    Although my domain is web development/design, I welcome domain-agnostic answers which can provide some generic ideas/reasoning. I have seen a lot of people messing up these sections (even more seriously than my doubts :) ), putting things under wrong sub-headings and thus putting a big question mark on their understanding of those things. I don't know much about XML, Comet technology, etc. Considering those are included too, what things should be put under Languages? E.g. should CSS be put under Languages? Please give some reasoning to support your views. Where should the others (XML, Comet, cURL, etc.) be put? I welcome some examples of how you put it. Or do you have an additional Keywords section where you write all the unsortables? Considering a set of standards like W3C standards, etc., do you have a standards sub-heading? I guess I have put the contents of the other sections okay, but do let me know your ideas and reasoning. After all, I understand there may not be a single answer to this, but let's see what the trend is. Thanks.

    Updates: Further, do you mention design patterns you have used? Web services, etc.? Where do you mention SOAP, XML, etc.?

    Read the article

  • An update on using Rosetta Stone: Studio now isn't very useful and is not great value as an add-on option

    - by Greg Low
    I had a surprisingly large number of responses to my previous posting about learning Chinese. An update for those considering Rosetta Stone (www.rosettastone.com) for Chinese, Spanish or any other language that they offer: I had to renew my "Studio" subscription today, and it's now a much worse deal than it was.

    It's now $75 for 6 months of Studio sessions. Online classes used to be 45 minutes; recently they reduced them to 20 minutes. Given how often people have connection issues, etc., that 20 minutes can disappear very quickly. They've also reduced the number of sessions you can attend. You used to be able to have 2 scheduled at any point in time; now they limit you to 2 "group sessions" per month during the period. (You can pay for additional private sessions.) The combination of these two changes makes it much less useful: two 20-minute sessions per month is an almost meaningless amount of practice. They also now automatically switch you to auto-renewal when you subscribe. They tell you where to remove this auto-renewal, but the first 4 or 5 times that I went into that screen, no such option appeared. Later, an option did appear and I used it.

    Overall, things just aren't what they used to be at Rosetta Stone. It's now pretty hard to recommend the Studio option, where it was a no-brainer before.

    FURTHER UPDATE: <sigh> Even after I renewed, I could not even connect to their "new" service. Although the system processed the renewal, it still tells me it's expired. My online chat person "Siva S" tells me that the problem is that I've purchased all 5 levels of the program. I can't wait till they explain to me how making an extra purchase from them stops me from logging on. Siva told me that they had "renewed" the program, that I'd have to speak to Customer Care, that they weren't available, and then disconnected himself. Impressive (not). Their website is now full of issues too: it insists that my billing address is in the USA, even though it pretends to accept changes to it.

    Overall, it's gone from something that could be recommended (with some limitations) to an app to avoid. That's a pity, as I liked much of it before.

    Read the article

  • Site-to-site VPN using MD5 instead of SHA and getting regular disconnection

    - by Steven
    We are experiencing some strange behavior with a site-to-site IPsec VPN that goes down about every week for 30 minutes (I am told 30 minutes exactly). I don't have access to the logs, so it's difficult to troubleshoot. What is also strange is that the two VPN devices are set to use the SHA hash algorithm but apparently end up agreeing to use MD5. Does anybody have a clue? Or is this just insufficient information?

    Edit: Here is an extract of the log of one of the two VPN devices, which is a Cisco 3000 series VPN Concentrator.

        27981 03/08/2010 10:02:16.290 SEV=4 IKE/41 RPT=16120 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        27983 03/08/2010 10:02:56.930 SEV=4 IKE/41 RPT=16121 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        27986 03/08/2010 10:03:35.370 SEV=4 IKE/41 RPT=16122 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        [… same continues for another 15 minutes …]
        28093 03/08/2010 10:19:46.710 SEV=4 IKE/41 RPT=16140 xxxxxxxx IKE Initiator: New Phase 1, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        28096 03/08/2010 10:20:17.720 SEV=5 IKE/172 RPT=1291 xxxxxxxx Group [xxxxxxxx] Automatic NAT Detection Status: Remote end is NOT behind a NAT device This end IS behind a NAT device
        28100 03/08/2010 10:20:17.820 SEV=3 IKE/134 RPT=79 xxxxxxxx Group [xxxxxxxx] Mismatch: Configured LAN-to-LAN proposal differs from negotiated proposal. Verify local and remote LAN-to-LAN connection lists.
        28103 03/08/2010 10:20:17.820 SEV=4 IKE/119 RPT=1197 xxxxxxxx Group [xxxxxxxx] PHASE 1 COMPLETED
        28104 03/08/2010 10:20:17.820 SEV=4 AUTH/22 RPT=1031 xxxxxxxx User [xxxxxxxx] Group [xxxxxxxx] connected, Session Type: IPSec/LAN-to-LAN
        28106 03/08/2010 10:20:17.820 SEV=4 AUTH/84 RPT=39 LAN-to-LAN tunnel to headend device xxxxxxxx connected
        28110 03/08/2010 10:20:17.920 SEV=5 IKE/25 RPT=1291 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28113 03/08/2010 10:20:17.920 SEV=5 IKE/24 RPT=88 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28116 03/08/2010 10:20:17.920 SEV=5 IKE/66 RPT=1290 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
        28117 03/08/2010 10:20:17.930 SEV=5 IKE/25 RPT=1292 xxxxxxxx Group [xxxxxxxx] Received remote Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28120 03/08/2010 10:20:17.930 SEV=5 IKE/24 RPT=89 xxxxxxxx Group [xxxxxxxx] Received local Proxy Host data in ID Payload: Address xxxxxxxx, Protocol 0, Port 0
        28123 03/08/2010 10:20:17.930 SEV=5 IKE/66 RPT=1291 xxxxxxxx Group [xxxxxxxx] IKE Remote Peer configured for SA: L2L: 1A
        28124 03/08/2010 10:20:18.070 SEV=4 IKE/173 RPT=17330 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.
        28127 03/08/2010 10:20:18.070 SEV=4 IKE/49 RPT=17332 xxxxxxxx Group [xxxxxxxx] Security negotiation complete for LAN-to-LAN Group (xxxxxxxx) Responder, Inbound SPI = 0x56a4fe5c, Outbound SPI = 0xcdfc3892
        28130 03/08/2010 10:20:18.070 SEV=4 IKE/120 RPT=17332 xxxxxxxx Group [xxxxxxxx] PHASE 2 COMPLETED (msgid=37b3b298)
        28131 03/08/2010 10:20:18.750 SEV=4 IKE/41 RPT=16141 xxxxxxxx Group [xxxxxxxx] IKE Initiator: New Phase 2, Intf 2, IKE Peer xxxxxxxx local Proxy Address xxxxxxxx, remote Proxy Address xxxxxxxx, SA (L2L: 1A)
        28135 03/08/2010 10:20:18.870 SEV=4 IKE/173 RPT=17331 xxxxxxxx Group [xxxxxxxx] NAT-Traversal successfully negotiated! IPSec traffic will be encapsulated to pass through NAT devices.

    Read the article

  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() {
                return new DbBackedList();
            }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors, which the List interface does not specify may be thrown. My question is whether this violates Liskov's Substitution Principle (LSP).

    Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly specified exception there is determined by the inserted value and so strengthens the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError, which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.)

    Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less palatable alternative solutions as answers that you can comment on; see below.

    Footnote: Use Cases
    In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration.

    Another, more adventurous, use case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing its whole queue in the event of a crash. If we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to propagate and crash the program (e.g. if we can't change the TaskWorkQueue code).

    Read the article

  • TypeScript first impressions

    - by Bertrand Le Roy
    Anders published a video of his new project today, which aims at creating a superset of JavaScript that compiles down to regular, current JavaScript. Anders is a tremendously clever guy, and it always shows in his work. There is much to like in the enterprise (good code completion, refactoring, and adoption of the module pattern instead of namespaces, to name three), but a few things made me raise an eyebrow. First, there is no mention of CoffeeScript or Dart, but he does talk briefly about Script# and GWT. This is probably because the target audience seems to be the same as the audience for the latter two, i.e. developers who are more comfortable with statically-typed languages such as C# and Java than with dynamic languages such as JavaScript. I don't think he's aiming at JavaScript developers. Classes and interfaces, although well executed, are not especially appealing. Second, as with any code-generation tool (and this is true of CoffeeScript as well), you'd better like the generated code. I didn't, unfortunately. The code that I saw is not the code I would have written. What's more, I didn't always find the TypeScript code especially more expressive than what it gets compiled to. I also have a few questions. Is it possible to duck-type interfaces? For example, if I have an IPoint2D interface with x and y coordinates, can I pass any object that has x and y into a function that expects IPoint2D, or do I necessarily need to create a class that implements that interface and new up an instance that explicitly declares its contract? The appeal of dynamic languages is the ability to make objects as you go. This needs to be kept intact. More technical: why are generated variables and functions prefixed with _ rather than the $ that the EcmaScript spec recommends for machine-generated variables? In conclusion, while this is a good contribution to the set of ideas around JavaScript evolution, I don't expect a lot of adoption outside of devoted Microsoft developers, but maybe some influence on the language itself. But I'm often wrong. I would certainly not use it, because I disagree with the central motivation for doing this: Anders explicitly says he built this because "writing application-scale JavaScript is hard". I would restate that as "writing application-scale JavaScript is hard for people who are used to statically-typed languages". The community has built a set of good practices over the last few years that do scale quite well, and many people are successfully developing and maintaining impressive applications directly in JavaScript. You can play with TypeScript here: http://www.typescriptlang.org

    Read the article

  • Moving from Tortoise to TFS

    - by MarkPearl
    The Past
    A few years ago my small software company made the jump from storing code on a shared folder to source code control. At the time we had evaluated a few of the options and settled on Tortoise SVN. The main motivation for going the SVN route was that we found a great plugin for Visual Studio that allowed us to avoid the command prompt for uploading changes (like I said, we are Windows programmers… command prompt bad!!) and it was free. Up to now we have been pretty happy with SVN, as it removed many of the worries that I had about how safe my code was on a shared folder and also gave us the opportunity to safely have several developers work on the same project at the same time. The only times we have been unhappy have been when we have had SVN hell days – which pretty much occur when you are doing something out of the norm and suddenly SVN just won't resolve conflicts or something along those lines. This happens once every 4 or 5 months and is not necessarily a problem caused directly by SVN – but a problem augmented by SVN. When you have SVN hell days you want to curse SVN! With that in mind, I have recently been taking another look at our source code control. I have explored using Git and was very impressed by it, and I have also looked at TFS. From a source code control perspective I don't want to get into a heated discussion on which one is better – but I do want to mention that I wear two hats in my organization, software developer & manager, and with the manager hat on I tend to sway toward the TFS route. So when I was given a coupon to test the DiscountASP.NET Team Foundation Server service for a year, I thought it was the perfect opportunity to try TFS in a distributed environment and also make the first step towards having an integrated development management system. Some of the things that appeal to me about DiscountASP's offering are the following: basic management/planning facilities like to-do lists inside Visual Studio; daily backup of data on the server – we are developers, not IT managers, so the more of this I can outsource the better; and a distributed solution – all of us work remotely, so this was a big one as well.
    Registering and Setting Up with DiscountASP.NET
    The whole registration process was simple and intuitive. The web interface is not the most visually impressive one, but it is functional, and a few seconds after I clicked the last submit button an email was sitting in my inbox giving me my control panel username and suggesting that I read the "Getting Started" article. The getting started article was easy to read and understand, so no complaints there either. Next, to set my dev environment to work. With a few references to the getting started article I had completed the whole setup process in a matter of minutes. Ten minutes after initiating the whole thing I was logged into VS2010 and creating my first TFS project. With the service that I signed up for, I have access for 5 users – which is sufficient for my internal needs. So from what I can tell, to set the rest of us up on the system I just need to supply them with their user credentials and url.
    My Concerns Resolved
    1) Security
    So, a few concerns I had about the service. First and foremost – is it secure? I would hate for someone to get access to our code, and the whole idea of putting it up on the internet is a concern for me. Turning to the Knowledge Base on the DiscountASP website, this is one of the first questions I can see answered. According to them it is secure.
    I have extracted their comment below regarding this: "Our TFS hosting service is secure. We only accept HTTPS connections, ensuring that any client-server data transmission is encrypted. At the network level, all of our systems are protected by multiple Juniper firewalls, Tipping Point's Intrusion Detection System (see Tipping Point's case study of our use here), and we also employ DDoS mitigation to add extra layers of security. Additionally, physical access to the servers is tightly restricted. Please see the security section of this Knowledge Base article for further details."
    2) Web Portal Access
    The other big concern I have is regarding web portal access. In the ideal world I would like to be able to give my end users access to a web portal for reporting bugs etc. When I initially read through the FAQ of the site it mentioned that there was web portal access – but from what I can see this is just for "users". Since I am limited to 5 users for the account, it would not be practical to set up external users that we could get feedback from on bugs etc. I would be interested to know if this is possible – and if so, if someone could post it in the comments it would be much appreciated. If this isn't possible, it is a slight let-down, as we rely heavily on end-user feedback and it would have been ideal to have gotten this within the service. Other than those two items, I didn't have any real concerns that were unresolved.
    So where do I go from here?
    Time passed from the initial writing of this post, and as work whirred in and out of my inbox I still had not had a proper opportunity to give the service a test run. Recently, though, things have begun to slow down, and then – surprise, surprise – I had another SVN hell day. With that experience I had a new-found resolve to get our team on TFS, and so today we are going to start using the service as a team. I am hoping that I do not have TFS hell days – but if I do, I will be sure to write about them. In short, the verdict is still out on whether this service is going to be invaluable to my business or whether it will create more headaches than it is worth, BUT I am hoping it will be an invaluable service. I will only really be able to determine that in a few months… till then!

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is.
    What is NewSQL?
    NewSQL is shorthand for a new crop of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; it is a label for vendors who offer emerging data products with relational database properties (like ACID and transactions) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: "NewSQL" is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as 'ScalableSQL' to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term 'NewSQL' in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL.
    Categories of NewSQL
    There are three major categories of NewSQL:
    New Architecture – Each node owns a subset of the data, and queries are split into smaller queries that are sent to the owning nodes for processing (a minimal routing sketch follows this entry). E.g. NuoDB, Clustrix, VoltDB.
    MySQL Engines – Highly optimized storage engines that keep the familiar MySQL interface. E.g. InnoDB, Akiban.
    Transparent Sharding – These systems automatically split the database across multiple nodes. E.g. ScaleArc.
    Summary
    In simple words, NewSQL describes databases that follow relational database principles and provide scalability like NoSQL.
    Tomorrow
    In tomorrow's blog post we will discuss the role of Cloud Computing in Big Data.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
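
    As a purely hypothetical illustration of the "New Architecture" routing idea above – not any vendor's API; all names here are invented – each node owns a subset of the data, so a router must map every key deterministically to the node that owns it:

    import java.util.Arrays;
    import java.util.List;

    // Invented example: deterministic key-to-node routing over a fixed node list.
    class ShardRouter {
        private final List<String> nodes;

        ShardRouter(List<String> nodes) { this.nodes = nodes; }

        // Hash the key into one of nodes.size() buckets;
        // the same key always lands on the same node.
        String nodeFor(String key) {
            int bucket = Math.floorMod(key.hashCode(), nodes.size());
            return nodes.get(bucket);
        }

        public static void main(String[] args) {
            ShardRouter router = new ShardRouter(Arrays.asList("node-a", "node-b", "node-c"));
            System.out.println(router.nodeFor("customer:42")); // stable per key
        }
    }

    Real systems layer replication and re-balancing on top of this, but the key-ownership idea is the same.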

    Read the article

  • Retrieving Custom Attributes Using Reflection

    - by Scott Dorman
    The .NET Framework allows you to easily add metadata to your classes by using attributes. These attributes can be ones that the .NET Framework already provides, of which there are over 300, or you can create your own. Using reflection, the ways to retrieve the custom attributes of a type are:
    System.Reflection.MemberInfo:
    public abstract object[] GetCustomAttributes(bool inherit);
    public abstract object[] GetCustomAttributes(Type attributeType, bool inherit);
    public abstract bool IsDefined(Type attributeType, bool inherit);
    System.Attribute:
    public static Attribute[] GetCustomAttributes(MemberInfo member, bool inherit);
    public static bool IsDefined(MemberInfo element, Type attributeType, bool inherit);
    If you take the following simple class hierarchy:
    public abstract class BaseClass {
        private bool result;

        [DefaultValue(false)]
        public virtual bool SimpleProperty {
            get { return this.result; }
            set { this.result = value; }
        }
    }

    public class DerivedClass : BaseClass {
        public override bool SimpleProperty {
            get { return true; }
            set { base.SimpleProperty = value; }
        }
    }
    Given a PropertyInfo object (which is derived from MemberInfo and represents a property in reflection), you might expect that these methods would return the same result. Unfortunately, that isn't the case. The MemberInfo methods strictly reflect the metadata definitions, ignoring the inherit parameter and not searching the inheritance chain when used with a PropertyInfo, EventInfo, or ParameterInfo object. They also return all custom attribute instances, including those that don't inherit from System.Attribute. The Attribute methods are closer to the implied behavior of the language (and probably closer to what you would naturally expect). They do respect the inherit parameter for PropertyInfo, EventInfo, and ParameterInfo objects and search the implied inheritance chain defined by the associated methods (in this case, the property accessors). These methods also only return custom attributes that inherit from System.Attribute. This is a fairly subtle difference that can produce very unexpected results if you aren't careful. For example, to retrieve the custom attributes defined on SimpleProperty, you could use code similar to this:
    PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");
    var attributeList1 = info.GetCustomAttributes(typeof(DefaultValueAttribute), true);
    var attributeList2 = Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);
    The attributeList1 array will be empty while the attributeList2 array will contain the attribute instance, as expected. Technorati Tags: Reflection, Custom Attributes, PropertyInfo

    Read the article

  • Web.NET: A Brief Retrospective

    - by Chris Massey
    It's been several weeks since I had the pleasure of visiting Milan and joining 150 enthusiastic web developers for a day of server-side frameworks and JavaScript. Lucky for me, I keep good notes. Overall the day went smoothly, with some solid logistics and very attentive organizers, and an impressively diverse audience drawn by the fact that the event was ambitiously run in English. This was great in that it drew a truly pan-European audience (11 countries were represented on the day, and at least 1 visa had to be procured to get someone there!). It was trouble because, in some cases, it pushed speakers outside their comfort zone. Thankfully, despite a slightly rocky start, every session I attended was very well presented, and the consensus on the day was that the speakers were excellent. While I felt that a lot of the speakers had more that they wanted to cover, the topics were well chosen, every room constantly had a stack of people in it, and all the sessions were pleasingly focused on code & demos. For all that the language barriers occasionally made networking a little challenging, organizers Simone & Ugo nailed the logistics. Registration was slick, lunch was plentiful, and session management was great. The very generous Rui was kind enough to showcase a short video about Glimpse in his session, which seemed to go down well (although the audio in the rooms was a little under-powered). Because I think you might need a mid-week chuckle, here are some out-takes: And let's not forget the Hackathon. The idea was that, having just learned about a stack of interesting technologies, attendees could spend an evening (fuelled by pizza and some good GitHub beer) hacking something together using them. Unfortunately, after a (great) 10-hour day, and in many cases facing international travel in the morning, many of the attendees headed straight for their hotel rooms. This idea could work so beautifully, and I'm excited to see how it pans out in 2013. On top of the slick sessions, getting to finally meet Ugo and Simone in the flesh was a pleasure, as was the serendipitous introduction to the most excellent Rui. They're all fantastic guys who are passionate about the web, and I'm looking forward to finding opportunities to work with them. Simone & Ugo put on a great event, and I'm excited to see what they do next year.

    Read the article

  • Unison synchronization problem. Roots are not identical after synchronization.

    - by binary255
    Hi. When I synchronize two folders using Unison, only one of the roots seems to be affected. Below is all the information I think is necessary to figure out why it behaves the way it does. I'm using:
    $ unison -version
    unison version 2.27.57
    from the Ubuntu repositories.
    My work laptop:
    $ echo $UNISONLOCALHOSTNAME
    worklaptop
    $ pwd
    /home/userfoo
    $ ls -lAR .unison*
    .unison:
    total 8
    drwxr-xr-x 2 userfoo userfoo 4096 2010-04-26 11:39 backups
    -rw-r--r-- 1 userfoo userfoo 231 2010-04-26 11:38 default.prf
    .unison/backups:
    total 0
    .unisonroot:
    total 0
    $ cat .unison/default.prf
    # Roots of the synchronization
    root = /home/userfoo/.unisonroot
    root = ssh://devel//home/userbar/.unisonroot
    path = *
    backuplocation = central
    backupdir = /home/.unison/backups
    backupprefix = $VERSION.bak
    $ mkdir .unisonroot/aDirectoryFrom-$UNISONLOCALHOSTNAME
    $ echo something >.unisonroot/aFileFrom-$UNISONLOCALHOSTNAME
    $ ls .unisonroot/
    aDirectoryFrom-worklaptop aFileFrom-worklaptop
    And the Ubuntu server I want to synchronize with:
    $ echo $UNISONLOCALHOSTNAME
    workcmpuserbardevel
    $ pwd
    /home/userbar
    $ ls -lAR .unison*
    .unison:
    total 4
    drwxr-xr-x 2 userbar userbar 4096 2010-04-26 11:38 .unison
    .unison/.unison:
    total 0
    .unisonroot:
    total 0
    $ mkdir .unisonroot/aDirectoryFrom-$UNISONLOCALHOSTNAME
    $ echo something >.unisonroot/aFileFrom-$UNISONLOCALHOSTNAME
    $ ls .unisonroot/
    aDirectoryFrom-workcmpuserbardevel aFileFrom-workcmpuserbardevel
    I perform the unison synchronization:
    $ echo $UNISONLOCALHOSTNAME
    worklaptop
    $ unison
    Contacting server...
    Connected [//worklaptop//home/userfoo/.unisonroot -> //workcmpuserbardevel//home/userbar/.unisonroot]
    Looking for changes
    Warning: No archive files were found for these roots, whose canonical names are:
    /home/userfoo/.unisonroot
    //workcmpuserbardevel//home/userbar/.unisonroot
    This can happen either because this is the first time you have synchronized these roots, or because you have upgraded Unison to a new version with a different archive format. Update detection may take a while on this run if the replicas are large. Unison will assume that the 'last synchronized state' of both replicas was completely empty. This means that any files that are different will be reported as conflicts, and any files that exist only on one replica will be judged as new and propagated to the other replica. If the two replicas are identical, then no changes will be reported. If you see this message repeatedly, it may be because one of your machines is getting its address from DHCP, which is causing its host name to change between synchronizations. See the documentation for the UNISONLOCALHOSTNAME environment variable for advice on how to correct this. Donations to the Unison project are gratefully accepted: http://www.cis.upenn.edu/~bcpierce/unison
    Press return to continue.[<spc>]
    Waiting for changes from server
    Reconciling changes
    local workcmps...
    dir ----> aDirectoryFrom-worklaptop [f]
    file ----> aFileFrom-worklaptop [f]
    Proceed with propagating updates?
    [] y
    Propagating updates
    UNISON 2.27.57 started propagating changes at 11:49:14 on 26 Apr 2010
    [BGN] Copying aDirectoryFrom-worklaptop from /home/userfoo/.unisonroot to //workcmpuserbardevel//home/userbar/.unisonroot
    [BGN] Copying aFileFrom-worklaptop from /home/userfoo/.unisonroot to //workcmpuserbardevel//home/userbar/.unisonroot
    [END] Copying aDirectoryFrom-worklaptop
    [END] Copying aFileFrom-worklaptop
    UNISON 2.27.57 finished propagating changes at 11:49:14 on 26 Apr 2010
    Saving synchronizer state
    Synchronization complete (2 items transferred, 0 skipped, 0 failures)
    I then check the .unisonroot directory on the computer I started the synchronization from:
    $ ls .unisonroot/
    aDirectoryFrom-worklaptop aFileFrom-worklaptop
    And on the server:
    $ echo $UNISONLOCALHOSTNAME
    workcmpuserbardevel
    $ ls .unisonroot/
    aDirectoryFrom-worklaptop aFileFrom-worklaptop aDirectoryFrom-workcmpuserbardevel aFileFrom-workcmpuserbardevel
    As can be seen above, the contents of the laptop's .unisonroot have not changed, while the server's have. The desired result would have been for the two folders to end up identical, holding the union of the contents of the two roots.

    Read the article

  • Explanation of the definition of interface inheritance as described in GoF book

    - by Geek
    I am reading the first chapter of the GoF book. Section 1.6 discusses class versus interface inheritance:
    Class versus Interface Inheritance
    It's important to understand the difference between an object's class and its type. An object's class defines how the object is implemented. The class defines the object's internal state and the implementation of its operations. In contrast, an object's type only refers to its interface – the set of requests to which it can respond. An object can have many types, and objects of different classes can have the same type. Of course, there's a close relationship between class and type. Because a class defines the operations an object can perform, it also defines the object's type. When we say that an object is an instance of a class, we imply that the object supports the interface defined by the class. Languages like C++ and Eiffel use classes to specify both an object's type and its implementation. Smalltalk programs do not declare the types of variables; consequently, the compiler does not check that the types of objects assigned to a variable are subtypes of the variable's type. Sending a message requires checking that the class of the receiver implements the message, but it doesn't require checking that the receiver is an instance of a particular class. It's also important to understand the difference between class inheritance and interface inheritance (or subtyping). Class inheritance defines an object's implementation in terms of another object's implementation. In short, it's a mechanism for code and representation sharing. In contrast, interface inheritance (or subtyping) describes when an object can be used in place of another.
    I am familiar with the Java and JavaScript programming languages and not really familiar with C++, Smalltalk or Eiffel, as mentioned here. So I am trying to map the concepts discussed here to Java's way of doing classes, inheritance and interfaces. This is how I think of these concepts in Java: in Java a class is always a blueprint for the objects it produces, and the interface (as in "the set of all possible requests that the object can respond to") that an object of that class possesses is fixed at compile time, because the class of the object will have implemented those interfaces. The requests that an object of that class can respond to are the set of all the methods in the class (including those implemented for the interfaces that the class implements). My specific questions are: Am I right in saying that Java's way is more similar to C++, as described in the third paragraph? I do not understand what is meant by interface inheritance in the last paragraph. In Java, interface inheritance is one interface extending another interface, but I think the word "interface" has some other, overloaded meaning here. Can someone provide an example in Java of what is meant by interface inheritance here, so that I understand it better?
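
    To make the distinction concrete, here is an illustrative Java sketch (the names are invented; this is not from the GoF book or the original post): implements below is interface inheritance – it promises only a type, the set of requests an object answers – while extends is class inheritance, reusing another class's representation and code.

    import java.util.ArrayList;
    import java.util.List;

    // Interface inheritance (subtyping): Stack names a type with no implementation.
    interface Stack {
        void push(String s);
        String pop();
    }

    // ArrayStack inherits Stack's *interface*: code written against the Stack
    // type can use an ArrayStack in its place (substitutability).
    class ArrayStack implements Stack {
        private final List<String> items = new ArrayList<>();
        public void push(String s) { items.add(s); }
        public String pop() { return items.remove(items.size() - 1); }
    }

    // CountingStack inherits ArrayStack's *class*: it reuses ArrayStack's
    // representation and code, i.e. implementation sharing, not just typing.
    class CountingStack extends ArrayStack {
        private int pushes;
        @Override public void push(String s) { pushes++; super.push(s); }
        public int totalPushes() { return pushes; }
    }

    public class InheritanceDemo {
        public static void main(String[] args) {
            Stack s = new CountingStack(); // usable wherever the Stack type is expected
            s.push("a");
            System.out.println(s.pop());   // prints "a"
        }
    }

    In GoF terms, "interface inheritance" is the implements relationship (and one interface extending another), independent of any shared code.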

    Read the article

< Previous Page | 452 453 454 455 456 457 458 459 460 461 462 463  | Next Page >