Search Results

Search found 65997 results on 2640 pages for 'custom post type'.


  • Record management system java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects; in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but after some time it has proven not to be flexible enough. I also personally dislike the Java-to-JS compilation process and the whole GWT codebase. Not only is the design ugly, it also makes certain low-level JS things very complicated if not completely impossible. So what I'm looking for is: as close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary (this happens more often than we thought); datagrid capable - Ext.js & PrimeFaces look pretty good, Vaadin does too; db-schema generators (optional, no matter in which way). If it were only up to me, I'd probably stick to Ext.js + a custom REST-based Java solution, possibly generated from the database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it is hard for me to judge or even propose other frameworks. What would be your proposition? What are the problems you've faced? What was your solution, and how hard do you think it was to deal with them in the "big picture"?
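    As a rough illustration of the "Ext.js + custom REST-based Java solution" idea mentioned above, here is a minimal, hypothetical JAX-RS resource of the kind an Ext.js datagrid could consume over JSON. The RmsRecord entity, its fields, the in-memory store, and the resource path are placeholders for illustration, not anything from the original question; schema-driven generation would replace the hand-written parts.

    import java.util.Collection;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;

    // Hypothetical record entity; field names are placeholders.
    class RmsRecord {
        public Long id;
        public String title;
    }

    // Minimal CRUD resource that an Ext.js (or any JSON-based) grid could consume.
    @Path("/records")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public class RecordResource {

        private static final Map<Long, RmsRecord> STORE = new ConcurrentHashMap<>();
        private static final AtomicLong IDS = new AtomicLong();

        @GET
        public Collection<RmsRecord> list() {
            return STORE.values();
        }

        @GET @Path("{id}")
        public RmsRecord get(@PathParam("id") long id) {
            return STORE.get(id);
        }

        @POST
        public RmsRecord create(RmsRecord r) {
            r.id = IDS.incrementAndGet();   // naive id generation, just for the sketch
            STORE.put(r.id, r);
            return r;
        }

        @DELETE @Path("{id}")
        public void delete(@PathParam("id") long id) {
            STORE.remove(id);
        }
    }

    Deployed on any JAX-RS runtime, an Ext.js REST proxy could then point straight at /records.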

    Read the article

  • How to use Nintex Reusable Workflow Template

    - by ybbest
    If you would like to re-use your workflow logic over more than one list or library, you can create a reusable workflow template. Here are the steps: 1. Go to site settings and create a reusable workflow template. 2. Select the content type you would like the template to be bound to and give the workflow a title. 3. Create your workflow the same way as you did for a list workflow and publish your workflow. 4. Finally, you need to add your workflow to the list on which you would like to run it. 5. Go to workflow settings and add a workflow. 6. Select the content type and configure the workflow as below. 7. After you have done this, your workflow will run as usual. Note: 1. You cannot conditionally start your workflow. 2. Your workflow is not automatically bound to the list when you add the content type to the list; you need to configure it manually as shown in steps 4-6.

    Read the article

  • How to reload google dfp in ajax content? [migrated]

    - by cj333
    Google DFP supports ads in AJAX content, but if I put all the code in the main page it always shows the same ads, even when turning the page reloads the AJAX content. I read an article from http://productforums.google.com/forum/#!msg/dfp/7MxNjJk46DQ/4SAhMkh2RU4J, but my code does not work. Any working code suggestions? Thanks. Main page code: <script type='text/javascript'> $(document).ready(function(){ $('#next').live('click',function(){ var num = $(this).html(); $.ajax({ url: "album-slider.php", dataType: "html", type: 'POST', data: 'photo=' + num, success: function(data){ $("#slider").center(); googletag.cmd.push(function() { googletag.defineSlot('/1*******/ads-728-90', [728, 90], 'div-gpt-ad-1**********-'+ num).addService(googletag.pubads()); googletag.pubads().enableSingleRequest(); googletag.enableServices(); }); } }); }); }); </script> album-slider.php <!-- ads-728-90 --> <div id='div-gpt-ad-1**********-<?php echo $_GET['photo']; ?>' style='width:728px; height:90px;'> <script type='text/javascript'> googletag.cmd.push(function() { googletag.display('div-gpt-ad-1**********-<?php echo $_GET['photo']; ?>'); }); </script>

    Read the article

  • CodePlex Daily Summary for Monday, January 31, 2011

    CodePlex Daily Summary for Monday, January 31, 2011Popular ReleasesMVC Controls Toolkit: Mvc Controls Toolkit 0.8: Fixed the following bugs: *Variable name error in the jvascript file that prevented the use of the deleted item template of the Datagrid *Now after the changes applied to an item of the DataGrid are cancelled all input fields are reset to the very initial value they had. *Other minor bugs. Added: *This version is available both for MVC2, and MVC 3. The MVC 3 version has a release number of 0.85. This way one can install both version. *Client Validation support has been added to all control...Office Web.UI: Beta preview (Source): This is the first Beta. it includes full source code and all available controls. Some designers are not ready, and some features are not finalized allready (missing properties, draft styles) ThanksASP.net Ribbon: Version 2.2: This release brings some new controls (part of Office Web.UI). A few bugs are fixed and it includes the "auto resize" feature as you resize the window. (It can cause an infinite loop when the window is too reduced, it's why this release is not marked as "stable"). I will release more versions 2.3, 2.4... until V3 which will be the official launch of Office Web.UI. Both products will evolve at the same speed. Thanks.Barcode Rendering Framework: 2.1.1.0: Final release for VS2008 Finally fixed bugs with code 128 symbology.HERB.IQ: HERB.IQ.UPGRADE.0.5.3.exe: HERB.IQ.UPGRADE.0.5.3.exexUnit.net - Unit Testing for .NET: xUnit.net 1.7: xUnit.net release 1.7Build #1540 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision) Ad...DoddleReport - Automatic HTML/Excel/PDF Reporting: DoddleReport 1.0: DoddleReport will add automatic tabular-based reporting (HTML/PDF/Excel/etc) for any LINQ Query, IEnumerable, DataTable or SharePoint List For SharePoint integration please click Here PDF Reporting has been placed into a separate assembly because it requies AbcPdf http://www.websupergoo.com/download.htmSpark View Engine: Spark v1.5: Release Notes There have been a lot of minor changes going on since version 1.1, but most important to note are the major changes which include: Support for HTML5 "section" tag. Spark has now renamed its own section tag to "segment" instead to avoid clashes. You can still use "section" in a Spark sense for legacy support by specifying ParseSectionAsSegment = true if needed while you transition Bindings - this is a massive feature that further simplifies your views by giving you a powerful ...Marr DataMapper: Marr DataMapper 1.0.0 beta: First release.WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.3: Version: 2.0.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. 
You can use ...Rawr: Rawr 4.0.17 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 and on the Version Notes page: http://rawr.codeplex.com/wikipage?title=VersionNotes As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you...Squiggle - A Free open source LAN Messenger: Squiggle 2.5 Beta: In this release following are the new features: Localization: Support for Arabic, French, German and Chinese (Simplified) Bridge: Connect two Squiggle nets across the WAN or different subnets Aliases: Special codes with special meaning can be embedded in message like (version),(datetime),(time),(date),(you),(me) Commands: cls, /exit, /offline, /online, /busy, /away, /main Sound notifications: Get audio alerts on contact online, message received, buzz Broadcast for group: You can ri...VivoSocial: VivoSocial 7.4.2: Version 7.4.2 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.4.2 release and see if they persist. If you have any questions about this release, please post them in our Support forums. If you are experiencing a bug or would like to request a new feature, please submit it to our issue tracker. Web Controls * Updated Business Objects and added a new SQL Data Provider File. Groups * Fixed a security issue whe...PHP Manager for IIS: PHP Manager 1.1.1 for IIS 7: This is a minor release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus several bug fixes (see change list for more details). Also, this release includes Russian language support. SHA1 codes for the downloads are: PHPManagerForIIS-1.1.0-x86.msi - 6570B4A8AC8B5B776171C2BA0572C190F0900DE2 PHPManagerForIIS-1.1.0-x64.msi - 12EDE004EFEE57282EF11A8BAD1DC1ADFD66A654mojoPortal: 2.3.6.1: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2361-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Parallel Programming with Microsoft Visual C++: Drop 6 - Chapters 4 and 5: This is Drop 6. It includes: Drafts of the Preface, Introduction, Chapters 2-7, Appendix B & C and the glossary Sample code for chapters 2-7 and Appendix A & B. The new material we'd like feedback on is: Chapter 4 - Parallel Aggregation Chapter 5 - Futures The source code requires Visual Studio 2010 in order to run. There is a known bug in the A-Dash sample when the user attempts to cancel a parallel calculation. We are working to fix this.NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.160: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release improves NodeXL's Twitter and Pajek features. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. 
Use WinZip or a similar program, or just right-click the Zip file in Windows Explorer and select "Extract All." Close Ex...Kooboo CMS: Kooboo CMS 3.0 CTP: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL, RavenDB and SQLCE. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder to enable...UOB & ME: UOB ME 2.6: UOB ME 2.6????: ???? V1.0: ???? V1.0 ??New ProjectsAuto Complete Control for ASP.NET: Autocomplete Control is a fully functional ASP.NET control for word suggestions and autocomplete. We had been using Ajax Control Toolkit AutoComplete Extender in our projects before, but we have needed some extra features and functionalities.Cours ESIEE: MAJ des cours ESIEE depuis la plateforme Icampus et autres documentsEngineering World Expenses: Demo expenses application for Engineering World 2011Entity Framework / Linq to Sql Poco Code Generator: Poco Orm data access layer (Dto) code generator for Entity Framework and Linq to Sql. Customizable code generation via simple templating system. Utilizes Managed Extensibility Framework (MEF) in order for application parts to dynamically composed and plug-able.linqish.py: Python module for manipulating iterables. An implementation of the .Net Framework's Linq to Objects for Python.Machinekey setter: This code sample is Windows Azure SDK 1.3 custom plugin. This sample do working at set custom key to machinekey of web.config file in your WebRole.MapReduce.NET: MapReduce.NET intends to implement the original paper proposed by Google on MapReduce.Marr DataMapper: Marr DataMapper provides a fast and easy to use wrapper around ADO.NET that enables you to focus more on your data access queries without having to write plumbing code. Load one-to-one, one-to-many, and hierarchical entity models with ease. No special base class required.Orchard Silverlight: Orchard module enabling embedding Silverlight applications and creating Silverlight-based content.RouteMagic: Library of useful routing helpers and classes.Smart Skelta Utilites: Smart Skelta Utilies will provide utilties like Visual Studio 2008 Skelta Starter Kit(Project Templates and Project Item Templates),Code Snippets for Skelta Components,Skleta Attachment Extracter Web based Logger,Skelta Server utility and others for skelta based development.Solfix: Solfix is a programming language tbat is work-in-progress, but it has a lot of functionality! You can make applications for console to windows applications. The main point of Solfix is to make coding easier and less time than before.SQLite Manager: A minimal manage for sqlite databases.State Search: StateSearch provides state search algoritms such as A*, IDA*, BestFirst, etc to solve problems such as puzzles and/or path searchingTable Check Custom Field Type: SharePoint Custom Field Type for displaying a list of values with checkboxes and people editors.testsgb: testWindows Phone 7 Extension Framework: An extension method framework for Windows Phone 7 to make your code more fluent and adding a lot of common functions you don't need to reproduce.

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost. When thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The customer needs that serve as guides to this new trend are often very different, but they are almost always united by two main objectives: elasticity of the infrastructure (both hardware and software), with investments related to the progressive needs of the current infrastructure, and characteristics of innovation and economy. A concrete use case I worked on recently demanded the fulfillment of exactly these two basic requirements of economy and innovation. The client needed to manage a variety of data caches that could process complex queries and parallel computational operations, while keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would let him handle the load peaks that are likely during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be conveyed during peak periods; the desire, however, was to avoid tying up investments in owned hardware and software architectures, so a solution based on cloud technologies and architectures already offered by the market was requested. Coherence can already address the requirement of a large cache spread across different nodes in the cluster, and it provides additional technology for search and parallel computing that makes simultaneous use of all hardware infrastructure resources. Moreover, thanks to the "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, the need for a resilient infrastructure that can also rely on nodes temporarily housed in cloud architectures is satisfied. There are different types of configurations that can be realized using Coherence's "Push Replication" functionality: Active-Passive, Hub and Spoke, Active-Active, Multi Master, and Centralized Replication. Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, it was decided to adopt an Active-Passive (Hub and Spoke) configuration. If the requirement should change over time, it will be particularly easy to change this configuration into an Active-Active configuration. Although very simple, the small sample in this post, inspired by that project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart in two different domain contexts, one of them "hosted" at home (on-premise) and the other hosted by any cloud provider on the network (or simply on the same laptop, to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information; a simple client can insert data only into Site 1. On both sites a listener will be subscribed that listens for changes to specific objects within the various caches. 
    To implement these features, you need four simple classes: CachedResponse.java represents the POJO class that will be inserted into the cache and holds useful information about the hypothetical link navigation; ResponseSimulatorHelper.java represents a link simulator, whose task is to randomly create objects of type CachedResponse that will be added to the caches; CacheCommands.java represents the model of our example, because it is responsible for receiving instructions from the controller and performing basic operations against the cache, such as inserting, deleting, updating, and listening to objects within the cache; Shell.java is our controller, which issues the commands to be executed against the caches of the two sites. So, in summary, we execute the Java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the Java class "CacheCommands"; the simulator "ResponseSimulatorHelper" will then randomly create new instances of "CachedResponse" objects. Finally, the Shell class will listen for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1. Now we create the configurations for the two respective sites/clusters: Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with "read and write" features, using the cache store class for "push replication", a functionality offered by the Oracle Coherence "incubator" project. For "Site Cloud" we likewise define a "distributed" cache type, with the TCP proxy feature enabled so that it can receive updates from Site 1.  Coherence cache config XML file for the "storage node" on "Site 1": site1-prod-cache-config.xml. Coherence cache config XML file for the "storage node" on "Site Cloud": site2-prod-cache-config.xml. For the two "Shell" clients, which will connect respectively to the two clusters, we have provided two simple access configurations.  Coherence cache config XML file for the Shell on "Site 1": site1-shell-prod-cache-config.xml. Coherence cache config XML file for the Shell on "Site Cloud": site2-shell-prod-cache-config.xml. Now we just have to put everything together and run our tests. 
    To start at least one "storage" node (which holds the data) for "Site Cloud", we can run the standard class provided out-of-the-box by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values: -Xmx128m -Xms64m -Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml -Dtangosol.coherence.clusterport=9002 -Dtangosol.coherence.site=SiteCloud. To start at least one "storage" node (which holds the data) for "Site 1", we again run the standard Coherence class com.tangosol.net.DefaultCacheServer with the following parameters and values: -Xmx128m -Xms64m -Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml -Dtangosol.coherence.clusterport=9001 -Dtangosol.coherence.site=Site1. Then we start the first "Shell" client for "Site Cloud", launching the Java class it.javac.Shell with these parameters and values: -Xmx64m -Xms64m -Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml -Dtangosol.coherence.clusterport=9002 -Dtangosol.coherence.site=SiteCloud. Finally, we start the second "Shell" client for "Site 1", launching a new instance of the class it.javac.Shell with the following parameters and values: -Xmx64m -Xms64m -Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml -Dtangosol.coherence.clusterport=9001 -Dtangosol.coherence.site=Site1. And now, let's execute some tests to validate and better understand our configuration. TEST 1: The purpose of this test is to load objects into the "Site 1" cache and see how many objects end up cached on "Site Cloud". In the "Shell" launched with parameters to access "Site 1", write and run the command: load test/100. In the "Shell" launched with parameters to access "Site Cloud", write and run the command: size passive-cache. Expected result: if all is OK, the first "Shell" has loaded 100 objects into a cache named "test"; consequently the "push replication" functionality has updated "Site Cloud" by sending the 100 objects to the second cluster, where they will have been put into the corresponding cache, which we named "passive-cache". TEST 2: The purpose of this test is to listen for the delete and add events that happen on "Site 1" and are replicated to the cache on "Site Cloud". 
    In the "Shell" launched with parameters to access "Site Cloud", write and run the command: listen passive-cache/name like '%' (or a "cohql" query with your preferred parameters). In the "Shell" launched with parameters to access "Site 1", write and run the following commands: load test/10, load test2/20, delete test/50. Expected result: if all is OK, the "Shell" on Site Cloud lets us listen to all the add and delete events within the cache "passive-cache" whose objects satisfy the query condition "name like '%'" (i.e., every object in the cache; you could change the tests and create different queries). Through the Shell on "Site 1" we launched the commands to add and delete objects in different caches (test and test2). With the "Shell" running on "Site Cloud" we got evidence (displayed, printed, or in a log file) that its cache was filled with the events and related objects generated by the commands executed from the "Shell" on "Site 1", thanks to the "push replication" feature.  Other tests can be performed as well, such as subscribing to the events on "Site 1" too, using different "cohql" queries, or changing the cache configuration, to effectively demonstrate both the power and the versatility offered by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home. More information on the Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html. To download and execute the complete sources and configurations of the example explained in this post, click here to download them; after that, download the latest available version of the Push Replication Pattern library implementation from the Oracle Coherence Incubator site, and also download the related, required version of Oracle Coherence. For simplicity, the .jars required to execute the example (which can be found in the Push Replication Pattern download and the Coherence distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost. Authors: Nino Guarnacci & Francesco Scarano
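    To make the "listen" step above more concrete, here is a minimal Java sketch of what the Shell's listener does conceptually: it registers a MapListener on the replicated cache and prints the insert and delete events that push replication delivers from Site 1. Only the cache name comes from the example; the class name and the simple unfiltered listener are illustrative, and the real Shell in the downloadable sources uses the richer command and query support described in the post.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;

    public class PassiveCacheListener {

        public static void main(String[] args) throws InterruptedException {
            // Join the Site Cloud cluster (per the -Dtangosol.coherence.* settings)
            // and obtain the cache that push replication keeps up to date.
            NamedCache cache = CacheFactory.getCache("passive-cache");

            cache.addMapListener(new AbstractMapListener() {
                @Override
                public void entryInserted(MapEvent evt) {
                    System.out.println("Inserted: " + evt.getKey() + " -> " + evt.getNewValue());
                }

                @Override
                public void entryDeleted(MapEvent evt) {
                    System.out.println("Deleted:  " + evt.getKey());
                }
            });

            // Keep the JVM alive so events keep arriving from the remote site.
            Thread.sleep(Long.MAX_VALUE);
        }
    }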

    Read the article

  • Content farms, scraper sites, aggregators: real world examples? [closed]

    - by Marco Demaio
    Content farms, scrapers, aggregators: real world examples? Could you please clarify this for me: efreedom.com is a scraper site, not a content farm, because it simply copies and pastes content from Stack Overflow? ehow.com and squidoo.com are content farms? They don't copy and paste content; they just generate fresh new user-generated content, but too much and too quickly. expert-exchange.com is NOT a content farm or a scraper site, right?! It's simply that many people (me too) hate it (they also wrote to Matt Cutts) because it shows up high in Google while providing a useless question with no answer. There are also many sites that act as 'content aggregators in the form of specialized directories' (let's call them CASD); I don't know how else to define them. Do they have a specific definition? Anyway, are these types of CASD content farms, scraper sites, or something else? Basically these CASD search for all sites of the same type, e.g. "restaurant websites"; they copy and paste the content found on "Restaurant A" and create on their aggregator site a new page called "Restaurant A", then they do the same for all websites of the same type, thus creating a sort of directory of restaurants. Later on these CASD also send an email to the owner of "Restaurant A" (usually the email is on the website) with a username and password to let him modify/update his own page on the CASD site. Later on these CASD might ask the owner of "Restaurant A" for money because they bring him traffic; otherwise they remove his page from the aggregator. Someone could call these simply directories, but I think a directory is different, because it is something you need to add your site to by filling in a form, not something that steals content from your existing site without specific acceptance from the site's owner. I also really wonder how Google will sort out all these messy sites packed with content that show up more and more, everywhere, in search results.

    Read the article

  • What is the structure of network managers system-connections files?

    - by Oyks Livede
    Could anyone list the complete structure of the configuration files that Network Manager stores for known networks in /etc/NetworkManager/system-connections? Sample (filename askUbuntu): [connection] id=askUbuntu uuid=81255b2e-bdf1-4bdb-b6f5-b94ef16550cd type=802-11-wireless [802-11-wireless] ssid=askUbuntu mode=infrastructure mac-address=00:08:CA:E6:76:D8 [ipv6] method=auto [ipv4] method=auto I would like to create some of them on my own using a script. However, before doing so I would like to know every possible option. Furthermore, this structure seems somehow to resemble the information you can get using the dbus for active connections. dbus-send --system --print-reply \ --dest=org.freedesktop.NetworkManager \ "$active_setting_path" \ # /org/freedesktop/NetworkManager/Settings/2 org.freedesktop.NetworkManager.Settings.Connection.GetSettings This will tell you: array [ dict entry( string "802-11-wireless" array [ dict entry( string "ssid" variant array of bytes "askUbuntu" ) dict entry( string "mode" variant string "infrastructure" ) dict entry( string "mac-address" variant array of bytes [ 00 08 ca e6 76 d8 ] ) dict entry( string "seen-bssids" variant array [ string "02:1A:11:F8:C5:64" string "02:1A:11:FD:1F:EA" ] ) ] ) dict entry( string "connection" array [ dict entry( string "id" variant string "askUbuntu" ) dict entry( string "uuid" variant string "81255b2e-bdf1-4bdb-b6f5-b94ef16550cd" ) dict entry( string "timestamp" variant uint64 1383146668 ) dict entry( string "type" variant string "802-11-wireless" ) ] ) dict entry( string "ipv4" array [ dict entry( string "addresses" variant array [ ] ) dict entry( string "dns" variant array [ ] ) dict entry( string "method" variant string "auto" ) dict entry( string "routes" variant array [ ] ) ] ) dict entry( string "ipv6" array [ dict entry( string "addresses" variant array [ ] ) dict entry( string "dns" variant array [ ] ) dict entry( string "method" variant string "auto" ) dict entry( string "routes" variant array [ ] ) ] ) ] I can create new setting files using the dbus (AddSettings() in /org/freedesktop/NetworkManager/Settings) by passing this type of input, so explaining this structure and telling me all possible options will also help. Afaik, this is a Dictionary{String, Dictionary{String, Variant}}. Will there be any difference between creating config files directly and using the dbus?
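    As a rough sketch of the scripted approach asked about above, the snippet below writes a minimal keyfile with the same groups as the sample ([connection], [802-11-wireless], [ipv4], [ipv6]). The SSID is a placeholder, the file covers only an open Wi-Fi network, and it assumes only what the sample shows rather than the full option set. Note that NetworkManager only reads keyfiles that are owned by root and not readable by group or others, which is why the permissions are set explicitly.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.UUID;

    public class WriteNmKeyfile {
        public static void main(String[] args) throws Exception {
            String ssid = "askUbuntu";   // placeholder SSID taken from the sample above
            String keyfile = String.join("\n",
                    "[connection]",
                    "id=" + ssid,
                    "uuid=" + UUID.randomUUID(),
                    "type=802-11-wireless",
                    "",
                    "[802-11-wireless]",
                    "ssid=" + ssid,
                    "mode=infrastructure",
                    "",
                    "[ipv4]",
                    "method=auto",
                    "",
                    "[ipv6]",
                    "method=auto",
                    "");

            // Must be run as root: the directory is root-owned, and NetworkManager
            // ignores keyfiles that are readable by group or others.
            Path path = Paths.get("/etc/NetworkManager/system-connections/" + ssid);
            Files.write(path, keyfile.getBytes(StandardCharsets.UTF_8));
            Files.setPosixFilePermissions(path, PosixFilePermissions.fromString("rw-------"));
        }
    }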

    Read the article

  • Getting Started with TypeScript – Classes, Static Types and Interfaces

    - by dwahlin
    I had the opportunity to speak on different JavaScript topics at DevConnections in Las Vegas this fall and heard a lot of interesting comments about JavaScript as I talked with people. The most frequent comment I heard from people was, “I guess it’s time to start learning JavaScript”. Yep – if you don’t already know JavaScript then it’s time to learn it. As HTML5 becomes more and more popular the amount of JavaScript code written will definitely increase. After all, many of the HTML5 features available in browsers have little to do with “tags” and more to do with JavaScript (web workers, web sockets, canvas, local storage, etc.). As the amount of JavaScript code being used in applications increases, it’s more important than ever to structure the code in a way that’s maintainable and easy to debug. While JavaScript patterns can certainly be used (check out my previous posts on the subject or my course on Pluralsight.com), several alternatives have come onto the scene such as CoffeeScript, Dart and TypeScript. In this post I’ll describe some of the features TypeScript offers and the benefits that they can potentially offer enterprise-scale JavaScript applications. It’s important to note that while TypeScript has several great features, it’s definitely not for everyone or every project especially given how new it is. The goal of this post isn’t to convince you to use TypeScript instead of standard JavaScript….I’m a big fan of JavaScript. Instead, I’ll present several TypeScript features and let you make the decision as to whether TypeScript is a good fit for your applications. TypeScript Overview Here’s the official definition of TypeScript from the http://typescriptlang.org site: “TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source.” TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. To sum it up, TypeScript is a new language that can be compiled to JavaScript much like alternatives such as CoffeeScript or Dart. It isn’t a stand-alone language that’s completely separate from JavaScript’s roots though. It’s a superset of JavaScript which means that standard JavaScript code can be placed in a TypeScript file (a file with a .ts extension) and used directly. That’s a very important point/feature of the language since it means you can use existing code and frameworks with TypeScript without having to do major code conversions to make it all work. Once a TypeScript file is saved it can be compiled to JavaScript using TypeScript’s tsc.exe compiler tool or by using a variety of editors/tools. TypeScript offers several key features. First, it provides built-in type support meaning that you define variables and function parameters as being “string”, “number”, “bool”, and more to avoid incorrect types being assigned to variables or passed to functions. Second, TypeScript provides a way to write modular code by directly supporting class and module definitions and it even provides support for custom interfaces that can be used to drive consistency. Finally, TypeScript integrates with several different tools such as Visual Studio, Sublime Text, Emacs, and Vi to provide syntax highlighting, code help, build support, and more depending on the editor. Find out more about editor support at http://www.typescriptlang.org/#Download. 
TypeScript can also be used with existing JavaScript frameworks such as Node.js, jQuery, and others and even catch type issues and provide enhanced code help. Special “declaration” files that have a d.ts extension are available for Node.js, jQuery, and other libraries out-of-the-box. Visit http://typescript.codeplex.com/SourceControl/changeset/view/fe3bc0bfce1f#samples%2fjquery%2fjquery.d.ts for an example of a jQuery TypeScript declaration file that can be used with tools such as Visual Studio 2012 to provide additional code help and ensure that a string isn’t passed to a parameter that expects a number. Although declaration files certainly aren’t required, TypeScript’s support for declaration files makes it easier to catch issues upfront while working with existing libraries such as jQuery. In the future I expect TypeScript declaration files will be released for different HTML5 APIs such as canvas, local storage, and others as well as some of the more popular JavaScript libraries and frameworks. Getting Started with TypeScript To get started learning TypeScript visit the TypeScript Playground available at http://www.typescriptlang.org. Using the playground editor you can experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once it’s compiled. Here’s an example of the TypeScript playground in action:   One of the first things that may stand out to you about the code shown above is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container which helps tremendously with re-use and maintainability especially in enterprise-scale JavaScript applications. While you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes it quite easy especially if you come from an object-oriented programming background. An example of the Greeter class shown in the TypeScript Playground is shown next: class Greeter { greeting: string; constructor (message: string) { this.greeting = message; } greet() { return "Hello, " + this.greeting; } } Looking through the code you’ll notice that static types can be defined on variables and parameters such as greeting: string, that constructors can be defined, and that functions can be defined such as greet(). The ability to define static types is a key feature of TypeScript (and where its name comes from) that can help identify bugs upfront before even running the code. Many types are supported including primitive types like string, number, bool, undefined, and null as well as object literals and more complex types such as HTMLInputElement (for an <input> tag). Custom types can be defined as well. The JavaScript output by compiling the TypeScript Greeter class (using an editor like Visual Studio, Sublime Text, or the tsc.exe compiler) is shown next: var Greeter = (function () { function Greeter(message) { this.greeting = message; } Greeter.prototype.greet = function () { return "Hello, " + this.greeting; }; return Greeter; })(); Notice that the code is using JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to take the variables and functions out of the global JavaScript scope. This is important feature that helps avoid naming collisions between variables and functions. 
In cases where you’d like to wrap a class in a naming container (similar to a namespace in C# or a package in Java) you can use TypeScript’s module keyword. The following code shows an example of wrapping an AcmeCorp module around the Greeter class. In order to create a new instance of Greeter the module name must now be used. This can help avoid naming collisions that may occur with the Greeter class.   module AcmeCorp { export class Greeter { greeting: string; constructor (message: string) { this.greeting = message; } greet() { return "Hello, " + this.greeting; } } } var greeter = new AcmeCorp.Greeter("world"); In addition to being able to define custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript’s extends keyword. The following code shows an example of using inheritance to define two report objects:   class Report { name: string; constructor (name: string) { this.name = name; } print() { alert("Report: " + this.name); } } class FinanceReport extends Report { constructor (name: string) { super(name); } print() { alert("Finance Report: " + this.name); } getLineItems() { alert("5 line items"); } } var report = new FinanceReport("Month's Sales"); report.print(); report.getLineItems();   In this example a base Report class is defined that has a variable (name), a constructor that accepts a name parameter of type string, and a function named print(). The FinanceReport class inherits from Report by using TypeScript’s extends keyword. As a result, it automatically has access to the print() function in the base class. In this example the FinanceReport overrides the base class’s print() method and adds its own. The FinanceReport class also forwards the name value it receives in the constructor to the base class using the super() call. TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects. The following code shows an example of an interface named Thing (from the TypeScript samples) and a class named Plane that implements the interface to drive consistency across the app. Notice that the Plane class includes intersect and normal as a result of implementing the interface.   interface Thing { intersect: (ray: Ray) => Intersection; normal: (pos: Vector) => Vector; surface: Surface; } class Plane implements Thing { normal: (pos: Vector) =>Vector; intersect: (ray: Ray) =>Intersection; constructor (norm: Vector, offset: number, public surface: Surface) { this.normal = function (pos: Vector) { return norm; } this.intersect = function (ray: Ray): Intersection { var denom = Vector.dot(norm, ray.dir); if (denom > 0) { return null; } else { var dist = (Vector.dot(norm, ray.start) + offset) / (-denom); return { thing: this, ray: ray, dist: dist }; } } } }   At first glance it doesn’t appear that the surface member is implemented in Plane but it’s actually included automatically due to the public surface: Surface parameter in the constructor. Adding public varName: Type to a constructor automatically adds a typed variable into the class without having to explicitly write the code as with normal and intersect. TypeScript has additional language features but defining static types and creating classes, modules, and interfaces are some of the key features it offers. So is TypeScript right for you and your applications? That’s a not a question that I or anyone else can answer for you. You’ll need to give it a spin to see what you think. 
In future posts I’ll discuss additional details about TypeScript and how it can be used with enterprise-scale JavaScript applications. In the meantime, I’m in the process of working with John Papa on a new Typescript course for Pluralsight that we hope to have out in December of 2012.

    Read the article

  • Why learning new things is not important on a job hunt? [closed]

    - by IAdapter
    I have just finished my job hunt. I think it was about 40 job interviews; I like to travel and get to know many companies. One thing I did not like is that they don't care about new technologies. I think only 2 people asked me about new stuff in the Java world. Most of them only care whether I know Java (certification and many years of experience are not enough for them; they need to test me). For example, at IBM they only cared about which IBM products I know. Have I ever used any custom extensions of WebSphere? I don't understand those questions. If I learn new frameworks every day, then I can learn whatever technology they have very fast. So why does it matter if I have ever used those "great" custom extensions of WebSphere? After those 40 interviews I have no reason to learn any new framework, because I see that they don't care. Why don't those "developers" ask questions about new technologies? Have they been at those companies so long that they don't care about new stuff?

    Read the article

  • Permission based Authorization vs. Role based Authorization - Best Practices - 11g

    - by Prakash Yamuna
    In previous blog posts here and here I have alluded to the support in OWSM for permission-based authorization and role-based authorization. Recently I was having a conversation with an internal team in Oracle looking to use OWSM for their Web Services security needs, and one of the topics was: when to use permission-based authorization vs. role-based authorization? As in most scenarios, the answer is: it depends! There are trade-offs involved in using the two approaches, and you need to understand those trade-offs and decide which of them are acceptable for your scenario. Role based Authorization: Simple to use. Just create a new custom OWSM policy and specify the role in the policy (using EM Fusion Middleware Control). Inconsistent if you have multiple types of resources in an application (ex: EJBs, Web Apps, Web Services) - ex: the model for securing EJBs with roles and the model for securing Web App roles are inconsistent with each other. Since the model is inconsistent, tooling is also fairly inconsistent. Achieving this use-case using JDeveloper is slightly complex - since JDeveloper does not directly support creating OWSM custom policies. Permission based Authorization: More complex. You need both to attach an OWSM policy and to create OPSS permission authorization policies. (Note: OWSM leverages OPSS permission-based authorization support.) More appropriate if you have multiple types of resources in an application (ex: EJBs, Web Apps, Web Services) and want a consistent authorization model. Consistent tooling for managing authorization across different resources (ex: EM Fusion Middleware Control). Better lifecycle support in terms of T2P, etc. Achieving this use-case using JDeveloper is slightly complex - since JDeveloper does not directly support creating/editing OPSS permission-based authorization policies.
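    For readers less familiar with the two models, here is a tiny, generic Java sketch of the conceptual difference between the two styles. It uses only plain java.security classes plus a hypothetical permission class; it is not OWSM's or OPSS's actual API, policy store, or grant syntax.

    import java.security.BasicPermission;
    import java.security.Permission;
    import java.util.Set;

    // Hypothetical permission class for illustration only; OWSM/OPSS define their
    // own permission classes and grant syntax.
    class InvokeServicePermission extends BasicPermission {
        InvokeServicePermission(String name) { super(name); }
    }

    public class AuthzStyles {

        // Role-based style: authorization asks "is the caller in role X?"
        static boolean roleCheck(Set<String> callerRoles) {
            return callerRoles.contains("manager");
        }

        // Permission-based style: authorization asks "has the caller been granted a
        // permission that implies the permission required by this resource?",
        // independent of how roles are modelled for EJBs, web apps or web services.
        static boolean permissionCheck(Permission granted) {
            Permission required = new InvokeServicePermission("CreditCheckService#process");
            return granted.implies(required);
        }

        public static void main(String[] args) {
            System.out.println(roleCheck(Set.of("employee", "manager")));           // true
            System.out.println(permissionCheck(new InvokeServicePermission("*")));  // true: wildcard grant
        }
    }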

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The #OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops.  Consequently, I am often asked whether OUM supports a workshop-type approach.  In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors, such as culture, availability of resources, and the potential travel cost of attendees, can influence whether a workshop is appropriate in a given situation.  Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project. OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks.  If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop.  For example, a "Requirements Gathering Workshop" is quite likely to include Gathering Business Requirements, Gathering Solution Requirements and perhaps Specifying Key Structure Definitions. Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, OUM provides in-depth guidance on how to maximize the value of your workshops.  I strongly encourage you to read the "Workshop Techniques Handbook" included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective.  Furthermore, it provides detailed information on how to structure a workshop and tips on facilitating it. You will find guidance on some popular workshop techniques such as brainstorming, setting objectives, prioritizing, and other more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more. Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implementation focus areas.  If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue utilizing them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills. 

    Read the article

  • What is up with the Joy of Clojure 2nd edition?

    - by kurofune
    Manning just released the second edition of the beloved Joy of Clojure book, and while I share that love I get the feeling that many of the examples are already outdated. In particular, in the chapter on optimization the recommended type-hinting seems not to be allowed by the compiler. I don't know if this was allowed in older versions of Clojure. For example: (defn factorial-f [^long original-x] (loop [x original-x, acc 1] (if (>= 1 x) acc (recur (dec x) (*' x acc))))) returns: clojure.lang.Compiler$CompilerException: java.lang.UnsupportedOperationException: Can't type hint a primitive local, compiling:(null:3:1) Likewise, the chapter on core.logic seems to be using an old API and I have to find workarounds for each example to accommodate the recent changes. For example, I had to turn this: (logic/defrel orbits orbital body) (logic/fact orbits :mercury :sun) (logic/fact orbits :venus :sun) (logic/fact orbits :earth :sun) (logic/fact orbits :mars :sun) (logic/fact orbits :jupiter :sun) (logic/fact orbits :saturn :sun) (logic/fact orbits :uranus :sun) (logic/fact orbits :neptune :sun) (logic/run* [q] (logic/fresh [orbital body] (orbits orbital body) (logic/== q orbital))) into this, leveraging the pldb lib: (pldb/db-rel orbits orbital body) (def facts (pldb/db [orbits :mercury :sun] [orbits :venus :sun] [orbits :earth :sun] [orbits :mars :sun] [orbits :jupiter :sun] [orbits :saturn :sun] [orbits :uranus :sun] [orbits :neptune :sun])) (pldb/with-db facts (logic/run* [q] (logic/fresh [orbital body] (orbits orbital body) (logic/== q orbital)))) I am still pulling teeth to get the later examples to work. I am relatively new to programming myself, so I wonder if I am naively overlooking something here, or if these points I'm making are legitimate concerns? I really want to get good at this stuff like type-hinting and core.logic, but want to make sure I am studying up-to-date materials. Any illuminating facts to help clear up my confusion would be most welcome.

    Read the article

  • Why does void in C mean not void?

    - by Naftuli Tzvi Kay
    In strongly-typed languages like Java and C#, void (or Void) as a return type for a method seems to mean: This method doesn't return anything. Nothing. No return. You will not receive anything from this method. What's really strange is that in C, void as a return type or even as a method parameter type means: It could really be anything. You'd have to read the source code to find out. Good luck. If it's a pointer, you should really know what you're doing. Consider the following examples in C: void describe(void *thing) { Object *obj = thing; printf("%s.\n", obj->description); } void *move(void *location, Direction direction) { void *next = NULL; // logic! return next; } Obviously, the second method returns a pointer, which by definition could be anything. Since C is older than Java and C#, why did these languages adopt void as meaning "nothing" while C used it as "nothing or anything (when a pointer)"?
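    For comparison with the C snippets above, here is a small Java sketch that makes the contrast concrete: in Java, void can only ever mean "returns nothing", and the closest analogues to C's void* are an Object reference or a generic type parameter. The class and method names are illustrative only.

    public class VoidDemo {

        // Returns nothing: the only meaning void has in Java.
        static void describe(Object thing) {
            System.out.println(thing.toString());
        }

        // "Could be anything" is expressed with Object (and a cast at the call site)...
        static Object moveUntyped(Object location) {
            return location; // placeholder logic
        }

        // ...or, more safely, with generics, which keep the static type information
        // that a C void* throws away.
        static <T> T move(T location) {
            return location;
        }

        public static void main(String[] args) {
            describe("a thing");
            String next = move("somewhere");                     // no cast needed, type preserved
            String alsoNext = (String) moveUntyped("elsewhere"); // cast required, checked at runtime
            System.out.println(next + ", " + alsoNext);
        }
    }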

    Read the article

  • How to build MVC Views that work with polymorphic domain model design?

    - by Johann de Swardt
    This is more of a "how would you do it" type of question. The application I'm working on is an ASP.NET MVC4 app using Razor syntax. I've got a nice domain model which has a few polymorphic classes, awesome to work with in the code, but I have a few questions regarding the MVC front-end. Views are easy to build for normal classes, but when it comes to the polymorphic ones I'm stuck on deciding how to implement them. The one (ugly) option is to build a page which handles the base type (eg. IContract) and has a bunch of if statements to check if we passed in a IServiceContract or ISupplyContract instance. Not pretty and very nasty to maintain. The other option is to build a view for each of these IContract child classes, breaking DRY principles completely. Don't like doing this for obvious reasons. Another option (also not great) is to split the view into chunks with partials and build partial views for each of the child types that are loaded into the main view for the base type, then deciding to show or hide the partial in a single if statement in the partial. Also messy. I've also been thinking about building a master page with sections for the fields that only occur in subclasses and to build views for each subclass referencing the master page. This looks like the least problematic solution? It will allow for fairly simple maintenance and it doesn't involve code duplication. What are your thoughts? Am I missing something obvious that will make our lives easier? Suggestions?

    Read the article

  • Case Management In-Depth: Cases & Case Activities Part 1 – Activity Scope by Mark Foster

    - by JuergenKress
    In the previous blog entry we looked at stakeholders and permissions, i.e. how we control interaction with the case and its artefacts. In this entry we’ll look at case activities, specifically how we decide their scope, in the next part we’ll look at how these activities relate to the over-arching case and how we can effectively visualize the relationship between the case and its activities. Case Activities As mentioned in an earlier blog entry, case activities can be created from: BPM processes Human Tasks Custom (Java Code) It is pretty obvious that we would use custom case activities when either: we already have existing code that we would like to form part of a case we cannot provide the necessary functionality with a BPM process or simple Human Task However, how do we determine what our BPM process as a case activity contains? What level of granularity? Take the following simple BPM process Read the full article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: ACM,BPM,Mark Foster,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Scheduled Deprecation of Legacy Obligation Features

    - by Wes Curtis
    The Obligation object in ETPM includes some functionality and tables that, to our knowledge, are not being used by customers and implementers at this time.  Removing this logic and the related tables should benefit the performance of, and simplify, the logic executed during Obligation maintenance processing. The Release Notes included with ETPM v2.3.1 announced that the product plans to deprecate the functionality on Obligation for Contract Terms, Contract Quantities, Tax Exemptions, Terms & Conditions and Obligation Type Start Options.  Our plan is to remove this functionality in the next release of ETPM. We have already confirmed with most project teams that these features are not being used, so the deprecation should have no impact on existing designs or processes. If you think your project may be impacted by this deprecation, please review any Business Object that has been created for the Obligation maintenance object to make sure that no elements are being defined for any of the following child tables: -          CI_SA_CONTERM -          CI_SA_CONT_QTY -          CI_TOU_CONT_VAL -          CI_SA_TC   As part of this deprecation, the following administrative tables are being removed along with their related metadata: -          Contract Quantity Type -          Tax Exempt Type -          Terms and Conditions Please contact me or the Oracle Tax Product Management team if your implementation has actually used these objects in its designs. We can discuss options to mitigate the impacts of this planned deprecation.  We will continue to announce planned deprecations in the Release Notes for each release and will contact project teams ahead of time to confirm that these deprecations will have little to no impact on our customers.

    Read the article

  • UIView vs CCLayer and Making gestures work in Cocos2d

    - by Lewis
    Now, I've been searching for an answer to this question for at least 3 days. I've tried many cocos2d forums, including the official one, and have heard nothing back. I've found a project which uses custom gestures: https://github.com/melle/OneFingerRotationGestureDemo Explanation here: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ Now I want to implement that behaviour on a sprite in a cocos2d application. I've tried to do this but it fails to work. It uses a view controller which inherits like this: @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate> Now my question is: how would I implement the OneFingerRotationGesture behaviour on a CCSprite in cocos2d 2.0? Interfaces in cocos2d look like this: @interface HelloWorldLayer : CCLayer Now I have asked a similar question to this on Stack Overflow and a user directed me to this link: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers Which I believe makes use of gestures (like the first GitHub project linked above) but not custom gestures. I lack the Obj-C skills to work out and implement the functionality in my game, so I would appreciate it if someone could explain the differences between CCLayer and UIViewController, and help me implement the OneFingerRotation gesture in a cocos2d 2.0 project. Regards, Lewis.

    Read the article

  • Semantic Form Markup for Yes or No Questions - Or Should I Tell my Designers to Bugger Off?

    - by sholsinger
I frequently receive mock-ups of HTML forms with the following prototype: Some long winded yes or no question?   (o) Yes   ( ) No The (o) and ( ) in this prototype represent radio buttons. My personal view is that if the question has only a true or false value then it should be a checkbox. That said, I have seen this sort of "layout" from almost every designer I've ever worked with. If I were not to question their decision, or the client's decision, I'd probably mark it up like this: <p class="pseudo_label">Some long winded yes or no question?</p> <input type="radio" name="the_question" id="the_question_yes" value="1"> <label for="the_question_yes" class="after_radio">Yes</label> <input type="radio" name="the_question" id="the_question_no" value="0"> <label for="the_question_no" class="after_radio">No</label> I really don't want to do that. I want to push back and convince them that this should really be a checkbox and not two radio buttons. But my question is: if I can't convince them – you're welcome to help me try – how should I code that original design requirement so that it is semantic and at least understandable for screen reader users? If I were able to convince my tormentors to change their minds, I would likely code it in the following fashion: <label for="the_question">Some long winded yes or no question?</label> <input type="checkbox" name="the_question" id="the_question" value="1"> What do you think about this issue? Should I push back? Possibly more importantly, is either way semantically correct?
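If it helps the discussion, here is a minimal sketch of both options (the question text, name and id values are taken from the post; the fieldset/legend structure is my suggestion, not part of the original mock-up): the radio-button version grouped with a fieldset so screen readers announce the question as the legend, and the checkbox version with a single explicit label.

    <!-- Radio-button version: the fieldset/legend ties the question to both options -->
    <fieldset>
      <legend>Some long winded yes or no question?</legend>
      <input type="radio" name="the_question" id="the_question_yes" value="1">
      <label for="the_question_yes">Yes</label>
      <input type="radio" name="the_question" id="the_question_no" value="0">
      <label for="the_question_no">No</label>
    </fieldset>

    <!-- Checkbox version: one control, one label -->
    <input type="checkbox" name="the_question" id="the_question" value="1">
    <label for="the_question">Some long winded yes or no question?</label>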

    Read the article

  • How do I connect to my running VM via virsh?

    - by Avery Chan
My VM has already been started via virsh start chameleon.ootbdev. When I do a virsh console chameleon.ootbdev I get the following output: Connected to domain chameleon.ootbdev Escape character is ^] error: internal error cannot find character device (null) A Google search on this led me to this "solution". Unfortunately, editing the domain via virsh edit chameleon.ootbdev doesn't seem to stick. I suspect the issue is that I'm inserting the XML incorrectly: the instructions from the link ask me to insert the following XML into the domain XML file: <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> I've posted my domain XML file to pastebin here. This is AFTER I've tried to insert the above XML. I inserted this XML after the </devices> block. My primary question is: how do I connect to the running VM? A secondary question would be: how do I edit the domain file with the above XML and get the changes to stick?
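For what it's worth, a hedged sketch of where libvirt expects those elements: <serial> and <console> are device definitions, so they belong inside the <devices> section of the domain XML rather than after the closing </devices> tag, and changes made with virsh edit normally only take effect the next time the domain is started (for example after virsh shutdown followed by virsh start).

    <devices>
      <!-- existing disk, interface and other device definitions stay as they are -->
      <serial type='pty'>
        <target port='0'/>
      </serial>
      <console type='pty'>
        <target type='serial' port='0'/>
      </console>
    </devices>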

    Read the article

  • connecting to freenx server : configuration error

    - by Sandeep
I am not able to pinpoint what is missing. I have configured the freenx server (useradd, passwd, etc.). However, the server drops the connection after authentication. Please note that my server runs Ubuntu 10.04 and the client is a recent version. Below is the error log: NX> 203 NXSSH running with pid: 6016 NX> 285 Enabling check on switch command NX> 285 Enabling skip of SSH config files NX> 285 Setting the preferred NX options NX> 200 Connected to address: 192.168.2.2 on port: 22 NX> 202 Authenticating user: nx NX> 208 Using auth method: publickey HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.5.0) NX> 105 hello NXCLIENT - Version 3.2.0 NX> 134 Accepted protocol: 3.2.0 NX> 105 SET SHELL_MODE SHELL NX> 105 SET AUTH_MODE PASSWORD NX> 105 login NX> 101 User: sandeep NX> 102 Password: NX> 103 Welcome to: ubuntu user: sandeep NX> 105 listsession --user="sandeep" --status="suspended,running" --geometry="1366x768x24+render" --type="unix-gnome" NX> 127 Sessions list of user 'sandeep' for reconnect: Display Type Session ID Options Depth Screen Status Session Name ------- ---------------- -------------------------------- -------- ----- -------------- ----------- ------------------------------ NX> 148 Server capacity: not reached for user: sandeep NX> 105 startsession --link="lan" --backingstore="1" --encryption="1" --cache="16M" --images="64M" --shmem="1" --shpix="1" --strict="0" --composite="1" --media="0" --session="t" --type="unix-gnome" --geometry="1366x768" --client="linux" --keyboard="pc105/us" --screeninfo="1366x768x24+render" ssh_exchange_identification: Connection closed by remote host NX> 280 Exiting on signal: 15

    Read the article

  • SQL Server Interview Questions

    - by Rodney Vinyard
User-Defined Functions Scalar User-Defined Function A scalar user-defined function returns one of the scalar data types; the text, ntext, image and timestamp data types are not supported. These are the type of user-defined function that most developers are used to from other programming languages. Inline Table-Value User-Defined Function An inline table-value user-defined function returns a table data type and is an excellent alternative to a view, since the function can take parameters in its T-SQL SELECT statement and in essence provide a parameterized, non-updateable view of the underlying tables. Multi-Statement Table-Value User-Defined Function A multi-statement table-value user-defined function also returns a table and is likewise an excellent alternative to a view, since the function can use multiple T-SQL statements to build the final result, whereas a view is limited to a single SELECT statement. The ability to pass in parameters again gives us, in essence, a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the structure of the table being returned. After creating this type of user-defined function, you can use it in the FROM clause of a T-SQL statement, unlike a stored procedure, which can also return result sets but cannot be referenced in a FROM clause.
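Hedged illustrations of each kind follow; dbo.Orders and its columns are hypothetical and serve only to show the syntax.

    -- Scalar UDF: returns a single scalar value
    CREATE FUNCTION dbo.fn_FullName (@First nvarchar(50), @Last nvarchar(50))
    RETURNS nvarchar(101)
    AS
    BEGIN
        RETURN @First + N' ' + @Last;
    END;
    GO

    -- Inline table-valued UDF: in essence a parameterized, non-updateable view
    CREATE FUNCTION dbo.fn_OrdersForCustomer (@CustomerId int)
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT OrderId, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId
    );
    GO

    -- Multi-statement table-valued UDF: the returned table structure is declared up front
    CREATE FUNCTION dbo.fn_TopOrdersForCustomer (@CustomerId int, @TopN int)
    RETURNS @Result TABLE (OrderId int, Total money)
    AS
    BEGIN
        INSERT INTO @Result (OrderId, Total)
        SELECT TOP (@TopN) OrderId, Total
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId
        ORDER BY Total DESC;
        RETURN;
    END;
    GO

    -- Either table-valued function can then sit in a FROM clause:
    -- SELECT * FROM dbo.fn_OrdersForCustomer(42);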

    Read the article

  • Ubuntu 12.04 problem with E160 huawei - can't detect the device and freezing system

    - by Matt
I have just installed 12.04 and plugged in the E160, and nothing happened: the modem doesn't mount. I have found this solution: Ubuntu does not mount some Huawei devices due to bugs, problems etc. See if these work: 1st option: Connect the USB modem. After 10 seconds, type this in a terminal window: lsusb The output will be like this: Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 004: ID 12d1:140b Huawei Technologies Co., Ltd. Bus 004 Device 002: ID 413c:3016 Dell Computer Corp. Optical 5-Button Wheel Mouse Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 005: ID 0b97:7762 O2 Micro, Inc. Oz776 SmartCard Reader Bus 002 Device 004: ID 413c:8103 Dell Computer Corp. Wireless 350 Bluetooth Bus 002 Device 003: ID 0b97:7761 O2 Micro, Inc. Oz776 1.1 Hub Bus 002 Device 002: ID 413c:a005 Dell Computer Corp. Internal 2.0 Hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub The device is a Huawei modem, so let's look at the output. The relevant entry is: Bus 004 Device 004: ID 12d1:140b Huawei Technologies Co., Ltd Hence, you must type: sudo modprobe usbserial vendor=0x12d1 product=0x140b 2nd option: Download the usb-modeswitch and usb-modeswitch-data packages from packages.ubuntu.com. Install them through the command: sudo dpkg -i usb-modeswitch*.deb 3rd option: Try a combination of both. I tried all of this, but with no result; the modem is still not detected. I've tried to add a new connection, but the system can't see my device in the setup dialogue. I have also noticed that when I open e.g. a terminal and try to type something, the system freezes for a while. Thanks for help!

    Read the article

  • What Should You Look for In a CRM Demo?

    - by charles.knapp
I have helped firms evaluate software demos and delivered demos in diverse industries such as manufacturing, healthcare, life sciences, and travel (to name just a few). Here are a few suggestions. First, which vendor has the best fit for your industry? Make sure that the vendor demo staff tell you clearly throughout the demo (not just in a passing comment) what portion of each business process and screen is standard, what has been configured, what has been custom coded, and what has been provided by a partner. If you don't keep asking, what you buy may be less useful than what you saw. This will lead to added (and unbudgeted) costs and time. Second, what are the roles of the primary users? What are their top-most needs, such as exception-oriented dashboards or rapid data entry? Can you get a demo for each key role, showing how the software fits a typical workday? Have the vendor repeatedly tell you what is standard, configured, custom coded, or provided by a partner. Third, how well does the demo balance ease of use with completeness of business processes? One common approach is to hide needed fields or steps that are of low visual value. Another approach is to focus heavily on a visually appealing capability, while downplaying the fit with your key business processes. The result: despite their business acumen, demo attendees may not focus adequately on gaps in business fit. So, look for complete disclosure and complete CRM. To arrange a demo from Oracle, please visit http://www.oracle.com/crm.

    Read the article

  • C# Role Provider for multiple applications

    - by Juventus18
I'm making a custom RoleProvider that I would like to use across multiple applications in the same application pool. For the administration of roles (create new role, add users to role, etc.) I would like to create a master application that I could log in to and set the roles for each additional application. So for example, I might have AppA and AppB in my organization, and I need to make an application called AppRoleManager that can set roles for AppA and AppB. I am having trouble implementing my custom RoleProvider because it uses an Initialize method that gets the application name from the config file, but I need the application name to be a variable (e.g. "AppA" or "AppB") passed as a parameter. I thought about just implementing the required methods and then also having additional methods that take the application name as a parameter, but that seems clunky, e.g.: public override void CreateRole(string roleName) { // uses the ApplicationName property of this instance, which is set in web.config // creates the role in the db } public void CreateRole(string applicationName, string roleName) { // creates the role in the db with the specified params } Also, I would prefer that people be prevented from calling CreateRole(string roleName), because the current instance of the class might have a different applicationName value than intended (what should I do here? Throw NotImplementedException?). I tried just writing the class without inheriting RoleProvider, but it is required by the framework. Any general ideas on how to structure this project? I was thinking of making a wrapper class that uses the role provider and explicitly sets the application name before (and after) any calls to the provider, something like this: static class RoleProviderWrapper { public static void CreateRole(string pApplicationName, string pRoleName) { Roles.Provider.ApplicationName = pApplicationName; Roles.Provider.CreateRole(pRoleName); Roles.Provider.ApplicationName = "Generic"; } } Is this my best bet?
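One possible structure, offered only as a hedged sketch (AppRoleManager, MyCustomRoleProvider and the connection-string name are placeholders, and it assumes your provider's Initialize reads applicationName from the NameValueCollection it receives): keep one initialized provider instance per application and dispatch through a small static manager, so no shared ApplicationName ever has to be flipped back and forth.

    // Hedged sketch: one provider instance per application, created on first use.
    using System.Collections.Concurrent;
    using System.Collections.Specialized;
    using System.Web.Security;

    public static class AppRoleManager
    {
        private static readonly ConcurrentDictionary<string, RoleProvider> Providers =
            new ConcurrentDictionary<string, RoleProvider>();

        private static RoleProvider GetProvider(string applicationName)
        {
            return Providers.GetOrAdd(applicationName, appName =>
            {
                var provider = new MyCustomRoleProvider();   // your custom RoleProvider
                var config = new NameValueCollection
                {
                    { "applicationName", appName },
                    { "connectionStringName", "RolesDb" }     // assumed setting name
                };
                provider.Initialize("RoleProvider_" + appName, config);
                return provider;
            });
        }

        public static void CreateRole(string applicationName, string roleName)
        {
            GetProvider(applicationName).CreateRole(roleName);
        }

        public static void AddUsersToRoles(string applicationName, string[] userNames, string[] roleNames)
        {
            GetProvider(applicationName).AddUsersToRoles(userNames, roleNames);
        }
    }

With something like this, AppRoleManager.CreateRole("AppA", "Administrators") always reaches the provider that was initialized for AppA, and the single-argument CreateRole never has to be exposed to callers.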

    Read the article
