Search Results

Search found 11195 results on 448 pages for 'disconnected environment'.


  • Using JuJu with private Openstack cloud deployment?

    - by user76054
    I'm seeing a number of problems trying to use JuJu with our internally deployed Openstack cloud. Most of this appears to be centered around DNS host resolution as well as the need to deal with our company's internal HTTP proxies. Our Openstack deployment relies upon an unroutable 172.16.0.0/12 block of addresses for VLAN allocation to each project (tenant) hosted on our internal cloud. Users have the option of assigning one or more floating addresses to instances, allocated from a block of routable addresses on our internal company LAN. Currently, Openstack doesn't register instance names with anything other than the DNSMASQ service running on the cloud controller, so there's no way to resolve these addresses through our internal DNS hierarchy (this issue has already been reported as Bug #945505). As a result, even though I can bootstrap my JuJu server node, I can't connect to it with the JuJu client, since it can't resolve the local (private) network name. I am able to ssh to the node once I've assigned it an internally routable (i.e. floating) address, which leads to the next issue. To install software on an instance running in our cloud, the instance must have our internal proxy address defined - either in the apt.conf file or via environment variables. Unfortunately, when bootstrapping the server node, there's no provision to pass this info into an instance via the JuJu environment.yaml file (if this is even the best way to handle this issue). As a result, the bootstrap node is unable to install the required packages. I'm assuming (dangerous, I know) that the way I've deployed Openstack in our internal environment is probably not unique. Has anyone else encountered these issues? And more importantly, are workarounds available? Regards, Ross

    Read the article

  • Installing Oracle Event Processing 11g by Antony Reynolds

    - by JuergenKress
    Earlier this month I was involved in organizing the Monument Family History Day. It was certainly a complex event, with dozens of presenters, guides and 100s of visitors. So with that experience of a complex event under my belt I decided to refresh my acquaintance with Oracle Event Processing (CEP). CEP has a developer side based on Eclipse and a runtime environment. Server install The server install is very straightforward (documentation). It is recommended to use the JRockit JDK with CEP so the steps to set up a working CEP server environment are: Download required software JRockit - I used Oracle “JRockit 6 - R28.2.5” which includes “JRockit Mission Control 4.1” and “JRockit Real Time 4.1”. Oracle Event Processor - I used “Complex Event Processing Release 11gR1 (11.1.1.6.0)” Install JRockit Run the JRockit installer, the download is an executable binary that just needs to be marked as executable. Install CEP Unzip the downloaded file Run the CEP installer, the unzipped file is an executable binary that may need to be marked as executable. Choose a custom install and add the examples if needed. It is not recommended to add the examples to a production environment but they can be helpful in development. Developer Install The developer install requires several steps (documentation). A developer install needs access to the software for the server install, although JRockit isn’t necessary for development use. Read the full article by Antony Reynolds. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress,CEP,Reynolds

    Read the article

  • Getting Xbox Live via a wired network with my laptop that has internet access wirelessly

    - by Alex Franco
    I'm running the latest version (as of yesterday, anyway) of Ubuntu Desktop 64-bit, installed on my laptop if it makes a difference. I had Windows 7 preinstalled when I bought it, and it worked fine with the wireless from my house, bridging the connection over a LAN to my Xbox for Live. Now with Ubuntu I tried the same setup, but I'm unfamiliar with Ubuntu so I didn't get far. The best I have so far is wireless internet on my laptop and a wired connection to the Xbox that continually connects and disconnects. Here are my network settings; any fields not included are either empty on mine or are my MAC address or network password.
    Wireless Network 1 settings: Connect Automatically: Checked. Available to all Users: Checked. Wireless: SSID: Franco's; Mode: Infrastructure; MTU: Automatic. IPv4 Settings: Method: Automatic (DHCP). IPv6 Settings: Method: Automatic.
    Wired Network 1: Connect Automatically: Checked. Available to all Users: Checked. Wired: MTU: Automatic. IPv4 Settings: Method: Automatic (DHCP). IPv6 Settings: Method: Automatic.
    Any help would be greatly appreciated. EDIT (6:26pm): It seems to be staying connected now. Running the network test on my Xbox, it picks up the network but cannot detect any PC. Restarting the Xbox, however, leaves my computer unable to connect, bringing up the "Wired Network disconnected" blip every minute or so again. Before I restarted the Xbox it said "Connected 100 MB/s"; now it only says "connecting". I did leave my computer and Xbox on in this Wired Network Disconnected blip cycle for a long period of time, so it may have finally connected, just without the ability to detect my laptop. I left for 2 hours or so in the middle of typing up the original question and only tried to mess with it again after posting it, in case you're wondering why I didn't include this before. I've said too much. Forgive my long-winded fingers :p

    Read the article

  • Handling Deployment to Multiple Environments

    - by JayGee
    How should I handle deploying web applications to multiple servers?
    Constraints: I have a dev, test and prod environment. No build server is available. Developers can't deploy to prod. The people that do deploy to prod copy files from test to prod; they don't have VS installed.
    Currently: The way it's handled is with web.config transforms. However, deploying to prod involves putting prod code on the test server, from where it's copied over.
    Problem: Sometimes simple mistakes are made, such as forgetting to change test back to the right environment after deployment, or the test config gets moved to prod instead of the prod config.
    Solution: So the question is, what is the best way to prevent these mistakes from happening? My first thought is to let the app determine which server it's on at runtime and use the appropriate settings/connection strings/etc. However, the server names could change in the not-too-distant future, so if multiple apps hard-code them, that would mean updating all of them. The easiest way to handle that situation would be to place a DLL in the GAC that determines the environment. Are there any drawbacks or possible complications that this would cause? Or is there a better solution to the problem than this?
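    To make the runtime-detection idea concrete, here is a minimal C# sketch of the kind of shared helper being proposed (the host names, enum values and fallback rule are hypothetical, purely for illustration):
        using System;
        using System.Collections.Generic;

        public enum DeploymentEnvironment { Dev, Test, Prod }

        public static class EnvironmentResolver
        {
            // Hypothetical host-name-to-environment map; kept in one shared assembly
            // (or a config file it reads) so a server rename means one update, not many.
            private static readonly Dictionary<string, DeploymentEnvironment> HostMap =
                new Dictionary<string, DeploymentEnvironment>(StringComparer.OrdinalIgnoreCase)
                {
                    { "DEVWEB01",  DeploymentEnvironment.Dev  },
                    { "TESTWEB01", DeploymentEnvironment.Test },
                    { "PRODWEB01", DeploymentEnvironment.Prod }
                };

            public static DeploymentEnvironment Current
            {
                get
                {
                    if (HostMap.TryGetValue(Environment.MachineName, out var env))
                        return env;
                    // Fail safe: treat unknown machines as Test rather than risking prod settings.
                    return DeploymentEnvironment.Test;
                }
            }
        }
    An application could then pick its connection string with a switch on EnvironmentResolver.Current; the trade-off, as noted above, is that a server rename still means updating the one shared mapping.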

    Read the article

  • Should a programmer "think" for the client?

    - by P.Brian.Mackey
    I have gotten to the point where I hate requirements gathering. Customers are too vague for their own good. In an agile environment, where we can show the client a piece of work to completion, it's not too bad, as we can make small regular corrections/updates to functionality. In a "waterfall"-type environment (requirements first, nearly complete product next) things can get ugly. This kind of environment has led me to constantly question requirements. E.g. the customer wants "automatically convert input to the number 1" (referring to a Qty in an order). But what they don't think about is that the "input" could be a simple typo. An "x" in a textbox could be a "whoops", not "I want 1 of those toothpaste products". But there's so much in the air with requirements that I could stand and correct for hours on end smashing out what they want. This just isn't healthy. Working for a corporation, I could try to adjust the culture to fit the agile model that would help us (no small job, above my pay grade). Or I could sweep ugly details under the rug and hope for the best. Maybe my customer is trying to get too close to the code? How does one handle the problem of "thinking for the client" without pissing them off with too many questions?

    Read the article

  • [Kubuntu 14.04] Eclipse (ADT) crashes when clicking OK in Project properties

    - by nouseforname
    Since I upgraded to Kubuntu 14.04, my Eclipse crashes in different situations. Mostly I can "simulate" it by going to Project properties and pressing OK; then it always crashes. My system:
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=14.04
        DISTRIB_CODENAME=trusty
        DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
    My Java:
        java version "1.8.0_05"
        Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
    My ADT version: Android Development Toolkit Version: 23.0.0.1245622
    I already tried to add this in adt-bundle-linux-x86_64/eclipse/configuration/configuration.ini:
        org.eclipse.swt.browser.DefaultType=mozilla
        -Dorg.eclipse.swt.browser.DefaultType=mozilla
    Error:
        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00007fe049eb1718, pid=5964, tid=140601811232512
        #
        # JRE version: Java(TM) SE Runtime Environment (8.0_05-b13) (build 1.8.0_05-b13)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.5-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C  [libgobject-2.0.so.0+0x19718]  g_object_get_qdata+0x18
        #
        # Core dump written. Default location: /home/maddin/core or core.5964
        #
        # An error report file with more information is saved as:
        # /home/maddin/hs_err_pid5964.log
        Compiled method (nm) 28866 4166 n 0 org.eclipse.swt.internal.gtk.OS::_g_object_get_qdata (native)
         total in heap  [0x00007fe051da6790,0x00007fe051da6af0] = 864
         relocation     [0x00007fe051da68b0,0x00007fe051da68f8] = 72
         main code      [0x00007fe051da6900,0x00007fe051da6ae8] = 488
         oops           [0x00007fe051da6ae8,0x00007fe051da6af0] = 8
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
        #
    Now, as soon as I change SystemSettings - Application Appearance - GTK - GTK design to something other than "oxygen-gtk", this crash doesn't happen anymore, but the application appearance is also ugly. Besides that, I get a lot of errors/warnings like:
        (SWT:6148): GLib-GObject-CRITICAL **: g_closure_add_invalidate_notifier: assertion 'closure->n_inotifiers < CLOSURE_MAX_N_INOTIFIERS' failed
    or other GTK warnings from the particular design not having a theme engine, which so far don't actually seem to cause any crash. So I have 3 options:
        1. accept crashes
        2. accept warnings (maybe the best choice)
        3. accept ugly design
    What can I do to solve this issue without changing the design settings?

    Read the article

  • can't run sqldeveloper on Ubuntu

    - by nazar_art
    I tried to install SQL Developer in the following way:
    1. Download SQL Developer from the Oracle website (I chose the "Other Platforms" download).
    2. Extract the file to /opt:
        sudo unzip sqldeveloper-*-no-jre.zip -d /opt/
        sudo chmod +x /opt/sqldeveloper/sqldeveloper.sh
    3. Link an in-path launcher for Oracle SQL Developer:
        sudo ln -s /opt/sqldeveloper/sqldeveloper.sh /usr/local/bin/sqldeveloper
    4. Edit /usr/local/bin/sqldeveloper.sh, replacing its content with:
        #!/bin/bash
        cd /opt/sqldeveloper/sqldeveloper/bin
        ./sqldeveloper "$@"
    5. Run SQL Developer:
        sqldeveloper
    But it shows the following output:
        nazar@lelyak-desktop:/opt/sqldeveloper? ./sqldeveloper.sh
        Oracle SQL Developer
        Copyright (c) 1997, 2014, Oracle and/or its affiliates. All rights reserved.
        LOAD TIME : 401
        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00007f3b2dcacbe0, pid=20351, tid=139892273444608
        #
        # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 1.7.0_65-b17)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C  0x00007f3b2dcacbe0
        #
        # Core dump written. Default location: /opt/sqldeveloper/sqldeveloper/bin/core or core.20351
        #
        # An error report file with more information is saved as:
        # /tmp/hs_err_pid20351.log
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        #
        /opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 1193: 20351 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}" 134
        nazar@lelyak-desktop:/opt/sqldeveloper? java -version
        java version "1.7.0_65"
        Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
        Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
    Here is the content of /tmp/hs_err_pid20351.log. How can I solve this?

    Read the article

  • Tips for achieving "continual" delivery

    - by Ben
    A team is experiencing difficulty releasing software on a frequent basis (once every week). What follows is a typical release timeline: During the iteration: Developers work on stories on the backlog on short-lived (this is enthusiastically enforced) feature branches based on the master branch. Developers frequently pull their feature branches into the integration branch, which is continually built and tested (as far as the test coverage goes) automatically. The testers have the ability to auto-deploy integration to a staging environment and this occurs multiple times per week, enabling continual running of their test suites. Every Monday: there is a release planning meeting to determine which stories are "known good" (based on the testers' work), and hence will be in the release. If there is a known issue with a story, the source branch is pulled out of integration. no new code (only bug fixes requested by the testers) may be pulled into integration on this Monday to ensure the testers have a stable codebase to cut a release from. Every Tuesday: The testers have tested the integration branch as much as they possibly can have given the time available and there are no known bugs so a release is cut and pushed out to the production nodes slowly. This sounds OK in practise, but we have found that it is incredibly difficult to achieve. The team sees the following symptoms "subtle" bugs are found on production that were not identified on the staging environment. last minute hot-fixes continue into the Tuesday. problems on the production environment require roll-backs which blocks continued development until a successful live deployment is achieved and the master branch can be updated (and hence branched from). I think test coverage, code quality, ability to regression test quickly, last minute changes and environmental differences are at play here. Can anyone offer any advice regarding how best to achieve "continual" delivery?

    Read the article

  • Amazon CloudFormations and Oracle Virtual Assembly Builder

    - by llaszews
    Yesterday I blogged about AWS AMIs and Oracle VM templates. These are great mechanisms to stand up an initial cloud environment. However, they don't provide the capability to manage, provision and update an environment once it is up and running. This is where AWS CloudFormation and Oracle Virtual Assembly Builder come into play. In a way, these tools/frameworks pick up where AMIs and VM templates leave off. Once again, there are similar offerings from AWS and Oracle that complement and also overlap with each other. Let's start by looking at the definitions:
    AWS CloudFormation - gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. AWS CloudFormations
    Oracle Virtual Assembly Builder - makes it possible for administrators to quickly configure and provision entire multi-tier enterprise applications onto virtualized and cloud environments. Oracle VM Builder
    As with the discussion around whether you should use AMIs or VM Templates, there are pros and cons to each: 1. CloudFormation is JSON; Assembly Builder is GUI and CLI. 2. VM Templates can be used in any private or public cloud environment, whereas CloudFormation, of course, is tied to the AWS public cloud.

    Read the article

  • My WiFi gets deauthenticated every few minutes or seconds (Reason: 7)

    - by dan
    My Wifi on my new Thinkpad W520 running Natty keeps dropping out and coming back on. Output from dmesg below. Any advice?
        [30493.687552] wlan0: authenticate with e0:91:f5:ef:7b:b2 (try 1)
        [30493.689127] wlan0: authenticated
        [30493.689144] wlan0: associate with e0:91:f5:ef:7b:b2 (try 1)
        [30493.693592] wlan0: RX AssocResp from e0:91:f5:ef:7b:b2 (capab=0x411 status=0 aid=4)
        [30493.693595] wlan0: associated
        [31631.172868] wlan0: deauthenticated from e0:91:f5:ef:7b:b2 (Reason: 7)
        [31631.211847] cfg80211: All devices are disconnected, going to restore regulatory settings
        [31631.211868] cfg80211: Restoring regulatory settings
        [31631.211873] cfg80211: Calling CRDA to update world regulatory domain
        [31631.215037] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
        [31631.215042] cfg80211: World regulatory domain updated:
        [31631.215044] cfg80211:     (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [31631.215046] cfg80211:     (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [31631.215049] cfg80211:     (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [31631.215051] cfg80211:     (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [31631.215053] cfg80211:     (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [31631.215055] cfg80211:     (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [31632.289638] wlan0: authenticate with e0:91:f5:ef:7b:b2 (try 1)
        [31632.291262] wlan0: authenticated
        [31632.291276] wlan0: associate with e0:91:f5:ef:7b:b2 (try 1)
        [31632.295119] wlan0: RX AssocResp from e0:91:f5:ef:7b:b2 (capab=0x411 status=0 aid=4)
        [31632.295123] wlan0: associated
        [31886.234836] wlan0: deauthenticated from e0:91:f5:ef:7b:b2 (Reason: 7)
        [31886.306735] cfg80211: All devices are disconnected, going to restore regulatory settings
        [31886.306740] cfg80211: Restoring regulatory settings
        [31886.306744] cfg80211: Calling CRDA to update world regulatory domain

    Read the article

  • Benefits of Server-side Coding

    There are numerous advantages to server-side scripting languages over client-side languages when it comes to creating web sites that are more compelling than a standard static site. Server-side scripts are executed on a web server while the data to return to a client is being assembled. These scripts allow developers to modify the content being sent to the user before it is returned, as well as to store information about the user. In addition, server-side scripts run in an environment the developer can control, which cannot be said for client-side languages, because the developer cannot control the user's environment the way they can control a web server: some users may turn off client scripts, some may only be allowed limited access on their system, and others may be able to gain full control of the environment. I have been developing web applications for over 9 years, and I have used server-side languages for most of the applications I have built. Here is a list of common generic functionality I have developed with server-side scripts: send email, FTP files, security/access control, encryption, URL rewriting, data access, data creation, I/O access. The one important feature server-side languages will help me with on my website is data access, because my component will be backed by a SQL Server database. I believe form validation is one instance where server-side scripts and JavaScript might be used interchangeably, because it does not matter how or where the data is validated as long as the data that gets inserted is valid. However, my personal experience would sway me in deciding which type of language to use for form validation, because each has advantages and disadvantages depending on the situation.
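    To make the form-validation comparison concrete, here is a small, self-contained C# sketch of the kind of server-side check described above (the quantity rule and sample inputs are made up for illustration; a real site would call something like TryParseQuantity from its form handler before inserting the data):
        using System;

        public static class OrderFormValidator
        {
            // Server-side check: the client may have validated already, but the
            // server cannot trust that, so it re-validates before storing anything.
            public static bool TryParseQuantity(string rawInput, out int quantity)
            {
                quantity = 0;
                if (string.IsNullOrWhiteSpace(rawInput))
                    return false;
                // Reject anything that is not a positive whole number ("x", "1.5", "-2", ...).
                if (!int.TryParse(rawInput.Trim(), out quantity))
                    return false;
                return quantity > 0;
            }

            public static void Main()
            {
                foreach (var input in new[] { "3", "x", "", "-1" })
                {
                    bool ok = TryParseQuantity(input, out int qty);
                    string verdict = ok ? "valid (" + qty + ")" : "rejected";
                    Console.WriteLine("'" + input + "' -> " + verdict);
                }
            }
        }
    The same rule written in JavaScript only improves the user experience; the server-side version is the one that actually guarantees valid data reaches the database.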

    Read the article

  • Automatically revert to laptop screen when external monitor unplugged

    - by Ryan
    I regularly use an external monitor with my laptop, so when I use it, I usually have the laptop screen disabled when the monitor is connected, and this seems to cause problems when the monitor is disconnected. If the monitor is connected while the laptop screen is disabled, I can't get the X session to show up at all: I can Ctrl+Alt+F1 to open a terminal, and that works fine.. ..but Ctrl+Alt+F7 does nothing. The display is blank, and stays blank. The same thing happens whether I put the computer to sleep with the monitor connected, or if I disconnect while the computer is still awake. Rebooting the computer fixes the issue, as does killing Xorg and starting it again, but both of those are sub-optimal since I lose my current session. I'm currently using the open source graphics driver (xserver-xorg-video-ati). This question looks like it might answer my question, but unfortunately hwinfo is no longer available in the apt repository. Is there a way with current tools to automatically detect when the external monitor is disconnected and switch to the laptop display?

    Read the article

  • When writing tests for a WordPress plugin, should I run them inside WordPress or in a normal browser?

    - by Nicola Peluchetti
    I have started using BDD for a WordPress plugin I'm working on, and I'm rewriting the JS codebase to add tests. I've encountered a few problems but I'm going steady now. I was wondering if I have the right approach, because I'm writing tests that should pass in a normal browser environment and not inside WordPress. I chose to do this because I want my plugin to be totally independent from the WordPress environment: I'm using RequireJS in a way that doesn't expose any globals, and I'm loading my own version of jQuery that doesn't override the one that ships with WordPress. This way my plugin would work the same on every WordPress version, and my code would not break if they change the jQuery version or someone uses my plugin on an old WordPress version. I wonder if this is the right approach or if I should always test inside the environment I'm working in. Since WordPress provides some globals, I had to write some functions purely for testing purposes, like:
        "get_ajax_url": function() {
            if (typeof window.ajaxurl === "undefined") {
                return "http://localhost/wordpress/wp-admin/admin-ajax.php";
            } else {
                return window.ajaxurl;
            }
        },
    but apart from that I got everything working right. What do you think?

    Read the article

  • How to force Multiple Monitors correct resolutions for LightDM?

    - by Hanynowsky
    I am affected by this bug: https://bugs.launchpad.net/ubuntu/+source/unity-greeter/+bug/874241. If, like me, you have a laptop connected to a second monitor of higher resolution, LightDM at the login stage mirrors the displays on both screens and assigns them a common resolution (1024x768 in my case), instead of extending the desktop (primary screen with the greeter and secondary with just a logo, as mentioned in the Multiple Monitors UX specifications book for 12.04). Here is my xrandr -q:
        @L502X:~$ xrandr -q
        Screen 0: minimum 320 x 200, current 1920 x 1848, maximum 8192 x 8192
        LVDS1 connected 1366x768+309+1080 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+
           1360x768       59.8     60.0
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 287mm
           1920x1080      60.0*+
           1600x1200      60.0
           1680x1050      60.0
           1280x1024      60.0
           1440x900       59.9
           1280x960       60.0
           1280x800       59.8
           1024x768       60.0
           800x600        60.3     56.2
           640x480        60.0
        DP1 disconnected (normal left inverted right x axis y axis)
    I tried to force LightDM to execute some xrandr commands in order to set the right resolution for each monitor and extend the desktop, but I get a LOW GRAPHICS MODE error ("You're running in low graphics mode, your screen, input devices... did not get detected.."). I created a simple script named lightdmxrandr.sh:
        #!/bin/sh
        xrandr --output HDMI1 --primary --mode 1920x1080 --output LVDS1 --mode 1366x768 --below HDMI1
    and told LightDM to run it in /etc/lightdm/lightdm.conf:
        [SeatDefaults]
        greeter-session=unity-greeter
        user-session=ubuntu
        greeter-setup-script=/usr/bin/numlockx on
        display-setup-script=/home/hanynowsky/lightdmxrandr.sh
    Does anyone know what is wrong? Thanks in advance.

    Read the article

  • How to add a display resolution for an LCD in Ubuntu 12.04? xrandr problem

    - by SeregaI
    I am new to Ubuntu and Linux in general. I have installed Ubuntu 12.04 and am stuck trying to set up the correct resolution for my LCD display. The native resolution of the LCD is 1920x1080. Here is the output from xrandr:
        $ xrandr
        Screen 0: minimum 320 x 200, current 1280 x 720, maximum 4096 x 4096
        LVDS1 connected 1280x720+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1280x720       60.0*+
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
    Then I create a new modeline:
        $ cvt 1920 1080 60
        1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
    So far so good. Then I create the new mode using xrandr:
        $ xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
    But for some reason the new mode was created under the VGA output (VGA1) instead of the LCD output (LVDS1):
        $ xrandr
        Screen 0: minimum 320 x 200, current 1280 x 720, maximum 4096 x 4096
        LVDS1 connected 1280x720+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1280x720       60.0*+
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
           1920x1080_60.00 (0xbc)  173.0MHz   <---------- ????!!!!!!
             h: width  1920 start 2048 end 2248 total 2576 skew 0 clock 67.2KHz
             v: height 1080 start 1083 end 1088 total 1120          clock 60.0Hz
    So, if I try to add the mode to LVDS1, I get an error:
        $ xrandr --addmode LVDS1 "1920x1080_60.00"
        X Error of failed request:  BadMatch (invalid parameter attributes)
          Major opcode of failed request:  149 (RANDR)
          Minor opcode of failed request:  18 (RRAddOutputMode)
          Serial number of failed request:  25
          Current serial number in output stream:  26
    Adding the new mode to VGA1 works fine, but I don't use that VGA1 output.

    Read the article

  • Is there a way to make catalyst driver work in Trusty for the radeon hd4330?

    - by Laurent BERNABE
    Though the official Catalyst 13.1 software is suitable for the ATI Radeon HD4330, it can't be installed on Ubuntu 14.04 as it doesn't support Xorg > 7.6. As I need proprietary drivers for Trusty, I would like to know if there is a way to bypass this limitation (for example by fetching the driver sources)? Here are some results from the terminal:
        $ Xorg -version
        X.Org X Server 1.15.1
        Release Date: 2014-04-13
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu
        Current Operating System: Linux bordeaux80 3.13.0-27-generic #50-Ubuntu SMP Thu May 15 18:06:16 UTC 2014 x86_64
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-27-generic root=UUID=4015e6f7-d11a-45fd-ac9b-5b6c7ab9eaa0 ro quiet splash vt.handoff=7
        Build Date: 16 April 2014  01:36:29PM
        xorg-server 2:1.15.1-0ubuntu2 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.30.2
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.

        $ xrandr
        Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192
        LVDS connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 353mm x 198mm
           1366x768       60.0*+
           1280x720       59.9
           1152x768       59.8
           1024x768       59.9
           800x600        59.9
           848x480        59.7
           720x480        59.7
           640x480        59.4
        VGA-0 disconnected (normal left inverted right x axis y axis)
        HDMI-0 disconnected (normal left inverted right x axis y axis)

        $ uname -rp
        3.13.0-27-generic x86_64

        $ glxinfo | grep OpenGL
        OpenGL vendor string: X.Org
        OpenGL renderer string: Gallium 0.4 on AMD RV710
        OpenGL core profile version string: 3.1 (Core Profile) Mesa 10.1.0
        OpenGL core profile shading language version string: 1.40
        OpenGL core profile context flags: (none)
        OpenGL core profile extensions:
        OpenGL version string: 3.0 Mesa 10.1.0
        OpenGL shading language version string: 1.30
        OpenGL context flags: (none)
        OpenGL extensions:
    Regards

    Read the article

  • How do I left-click a Java Application on a WeTab running Ubuntu 12.10? (workaround defect in Onboard)

    - by Kat Amsterdam
    I installed Ubuntu 12.10 on my WeTab. Everything works perfectly (albeit slowly) and I can touch and use every application except ones written in Java. When I start any Java application, the touchscreen does not register the left click. I believe it's a problem in Onboard (the onscreen keyboard), because when I touch the mouse icon on Onboard and then the Java application, the left click works. Having to first hit the Onboard mouse icon and then the button I want to click in the Java app is very cumbersome for every click; it defeats the purpose of a touchscreen. The Java application is definitely touchable, as it runs on 10 other machines with Elo touchscreens. How do I get Ubuntu to recognize the left click in a Java application automatically when I touch the screen? Or is there a way to diagnose this so I can make a clear bug report? This happens in all the desktop environments I tried (Gnome/Unity, XFCE4 and LXDE), with openjdk-6-* and openjdk-7-*.
    Stats:
        WeTab 32GB 3G, 2GB RAM
        Intel(R) Atom(TM) CPU N450 @ 1.66GHz - 64-bit
        Ubuntu 12.10 - 64-bit
        Unity, Xubuntu and Lubuntu desktop environments
        The real touchscreen driver from EETI (eGalaxy) (it also didn't work with the Ubuntu standard touchscreen driver)

    Read the article

  • How do you QA and release software quickly with a large team?

    - by sadadasd
    My workplace used to have a smaller team; we had fewer than 13 devs for a while. We are now growing rapidly, and are over 20 with plans to be over 30 in a few months. Our process for QA'ing and releasing each build is no longer working. Currently, everyone develops new code and sticks it onto a staging environment. A few days before our weekly release, we freeze the staging environment and QA everything. By our normal release time, everything is usually deemed acceptable and pushed out the door to the main site. We reached a point where our code got too big, so we could no longer regress the entire site each week in QA. We were OK with that; we just made a list of everything important and only covered that and the new stuff. Now we are reaching a point where all the new stuff each week is becoming too big and too unstable. Our staging environment is really buggy week after week, and we are usually 1-2 hours behind the normal release time. As the team grows further, we are going to drown with this same process. We are re-evaluating everything, and I personally am looking for suggestions / success stories. Many companies have been here before and progressed beyond it; we need to do the same.

    Read the article

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain-driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively tell the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at the time of retrieval. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots and are disconnected from the data source. Does anyone have an idea on how to solve this following the DDD approach?
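    As a rough illustration of the sensor/threshold scenario described above, here is a minimal C# sketch of an aggregate that records pending domain events when a new measurement crosses a threshold (the type and member names are hypothetical; how fresh measurements reach the aggregate, whether by polling, messaging or change data capture, is exactly the open question of the post):
        using System;
        using System.Collections.Generic;

        // Hypothetical domain event raised when a reading crosses the configured threshold.
        public class ThresholdExceeded
        {
            public Guid SensorId { get; }
            public double Value { get; }
            public ThresholdExceeded(Guid sensorId, double value)
            {
                SensorId = sensorId;
                Value = value;
            }
        }

        // Hypothetical aggregate root; it stays a plain in-memory object,
        // and something outside it must feed it new measurements.
        public class Sensor
        {
            private readonly double _threshold;
            private readonly List<object> _pendingEvents = new List<object>();

            public Guid Id { get; } = Guid.NewGuid();
            public double LastMeasurement { get; private set; }
            public IReadOnlyList<object> PendingEvents => _pendingEvents;

            public Sensor(double threshold) => _threshold = threshold;

            // Called whenever a fresh measurement reaches the domain model.
            public void RecordMeasurement(double value)
            {
                LastMeasurement = value;
                if (value > _threshold)
                    _pendingEvents.Add(new ThresholdExceeded(Id, value));
            }
        }
    The sketch only shows the domain side; the repository (or an integration layer around it) would still be responsible for pushing updated readings into RecordMeasurement and dispatching the pending events.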

    Read the article

  • A key principle of Scrum...

    - by AndyScott
    "A key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements." I have been working in a Scrum environment, with 4-6 week cycles, for about 6 months now and have been very pleased with the impact it has had on my life (regular work hours, seeing my family, etc.). But I was looking up the criteria for a Certified Scrum Master and came across the Scrum definition on Wikipedia, and started reading the actual definition. My first thought was "hey, this development methodology actually allows you to deal with what happens in the real world (i.e. customers changing requirements)"; but is this "selling out" on solid requirements? I understand that this works in the environment I am currently working in, where there are deep pockets paying the bills and also making the decisions on which requirements to change / implement; but is this a recipe for success in smaller or simply more budget-conscious environments, having the ability to be completely flexible when the client wants the product changed? The more I think about it, the more I feel that Scrum development may be better suited to an environment where a team is taking over a project from another team (bringing some outside development in-house or something of that ilk), as opposed to ground-up development. What do you think?

    Read the article

  • Web Applications Desktop Integrator (WebADI) Feature for Install Base Mass Update in 12.1.3

    - by LuciaC
    Purpose The integration of WebADI technology with the Install Base Mass Update function is designed to make creation and update of bulk item instances much easier than in the past. What is it? WebADI is an Excel-based desktop application where users can download an Excel template with item instances pre-populated based on search criteria.  Users can create and update item instances in the Excel sheet and finally upload the Excel data using an "upload" option available in the Excel menu. On upload, the modified data will bulk upload to interface tables which are processed by an asynchronous concurrent program that users can monitor for the uploaded results. Advantage: This allows users to work in a disconnected Environment: session time outs can be avoided, as once a template is downloaded the user can work in a disconnected environment and once all updates are done the new input can be uploaded. Also the data can be saved for later update and upload. For more details review the following: R12.1.3 Install Base WebADI Mass Update Feature (Doc ID 1535936.1) How To Use Install Base WebADI Mass Update Feature In Release 12.1.3 (Doc ID 1536498.1).

    Read the article

  • <msbuild/> task fails while <devenv/> succeeds for MFC application in CruiseControl.NET?

    - by ee
    The Overview: I am working on a Continuous Integration build of an MFC application via CruiseControl.NET and VS2010. When building my .sln, a "Visual Studio" CCNet task (<devenv/>) works, but a simple MSBuild wrapper script (see below) run via the CCNet <msbuild/> task fails with errors like:
        error RC1015: cannot open include file 'winres.h'..
        error C1083: Cannot open include file: 'afxwin.h': No such file or directory
        error C1083: Cannot open include file: 'afx.h': No such file or directory
    The Question: How can I adjust the build environment of my MSBuild wrapper so that the application builds correctly? (Pretty clearly the MFC paths aren't right for the MSBuild environment, but how do I fix it for MSBuild+VS2010+MFC+CCNet?)
    Background Details: We have successfully upgraded an MFC application (an .exe with some MFC extension .dlls) to Visual Studio 2010 and can compile the application without issue on developer machines. Now I am working on compiling the application in the CI server environment. I did a full installation of VS2010 (Professional) on the build server; this way I knew everything I needed would be on the machine (one way or another) and that it would be consistent with developer machines. VS2010 is correctly installed on the CI server, and the devenv task works as expected. I now have a wrapper MSBuild script that does some extended version processing and then builds the .sln for the application via an MSBuild task. This wrapper script is run via CCNet's MSBuild task and fails with the above-mentioned errors.
    The Simple MSBuild Wrapper:
        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Target Name="Build">
            <!-- Doing some versioning stuff here -->
            <MSBuild Projects="target.sln" Properties="Configuration=ReleaseUnicode;Platform=Any CPU;..." />
          </Target>
        </Project>
    My Assumptions: This seems to be a missing/wrong configuration of include paths to standard header resources of the MFC persuasion. I should be able to coerce the MSBuild environment to consider the relevant resource files from my VS2010 install and have this approach work. Given the VS2010 MSBuild support for Visual C++ projects (.vcxproj), shouldn't MSBuilding a solution be pretty close to compiling via Visual Studio? But how do I do that? Am I setting environment variables? Registry settings? I can see how one can inject additional directories in some cases, but this seems to need a more systemic configuration at the compiler-defaults level.
    Update 1: This appears to only ever happen in two cases: resource compilation (rc.exe) and precompiled header (stdafx.h) compilation, and only for certain projects. I was thinking it was across the board, but indeed it appears only to be in these cases. I guess I will keep digging and hope someone has some insight they would be willing to share...

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way. 
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Buy or Build for web deployment?

    - by Cannonade
    I have been evaluating the wide range of installation and web deployment solutions available for Windows applications. I will just clarify my understanding of the options here (without too much detail, since these tools have been covered in other questions):

    NSIS - Free tool that generates setup executables. Small binary. Specialized, sometimes obtuse, scripting language.
    Inno Setup - Free tool for setup executables. Various binary compression schemes. Pascal scripting engine.
    WIX - Free toolset to generate MSI binaries. XML definitions language.
    WIX ClickThrough - Additional tools for packaging, web download and auto-update detection (now part of the WIX core).
    InstallShield - Commercial development environment for installation packaging. Generates MSI binaries. C-like InstallScript language.
    Wise - Commercial development environment for installation packaging. Generates MSI binaries.
    ClickOnce - Visual Studio-supported framework for publishing applications to a web server, with automatic detection of updates. No support for custom installation requirements (INI files, registry, etc.). Packages the setup as an MSI binary.
    InstallAware - Commercial development environment for installation. Generates MSI binaries. Automatic update framework (Web Update).

    If I have missed any, please let me know. I also found some useful discussions of these technologies on Stack Overflow: Best Simple Install System, Best choice for Windows installers, and Alternatives to ClickOnce.

    I have worked with a few of these solutions, as well as a handful of proprietary internal installation solutions. They are mostly concerned with packaging installations and providing a framework for developers to access the run-time environment. With the growing requirement for web deployment and automatic software updates, I expected to find more of a consensus among developers on a framework for web delivery of software and subsequent updates, but I haven't really found that consensus. There are certainly solutions available (ClickOnce, ClickThrough, InstallShield Update Service), but each has considerable limitations (please correct me if I misrepresent any of these). I would be interested in a framework that provided some of the following: third-party hosting/management of updates; access to the client environment (INI files, registry, etc.); user registration/activation; feedback/error reporting.

    This leaves me with the strong impression that the best way to approach the web deployment problem is with a custom-built proprietary solution (possibly leveraging existing installer packaging). I have seen this sort of solution work well for a number of successful applications, for example FileZilla: an HTTP request to update.filezilla-project.org checks for updates, an NSIS binary (I think) is downloaded, and the application then shuts down to run the install. (A hedged sketch of this check-and-run flow appears after this question.)
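    To illustrate the FileZilla-style flow mentioned above, here is a hedged sketch of a check-and-run updater. The endpoint URLs, version file format and installer name are assumptions for the example, not part of any of the products listed; a real updater would also verify a signature or checksum on the downloaded binary before launching it.

        // Sketch only (hypothetical URLs and version format): ask a web server for the
        // latest version, download the installer if it is newer, launch it and exit so
        // that the installer can replace files that would otherwise be locked.
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        static class Updater
        {
            private const string VersionUrl = "https://updates.example.com/myapp/latest-version.txt";
            private const string InstallerUrl = "https://updates.example.com/myapp/myapp-setup.exe";

            public static async Task CheckForUpdateAsync(Version currentVersion)
            {
                using (var client = new HttpClient())
                {
                    // Assumed to return a plain version string such as "3.2.1".
                    var latestText = (await client.GetStringAsync(VersionUrl)).Trim();
                    var latest = Version.Parse(latestText);
                    if (latest <= currentVersion)
                    {
                        return; // already up to date
                    }

                    // Download the installer to a temporary location.
                    var installerPath = Path.Combine(Path.GetTempPath(), "myapp-setup.exe");
                    File.WriteAllBytes(installerPath, await client.GetByteArrayAsync(InstallerUrl));

                    // Hand over to the installer and shut down.
                    Process.Start(installerPath);
                    Environment.Exit(0);
                }
            }
        }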

    Read the article

  • Doubt with c# handlers?

    - by aF
    I have this code in C#:

        public void startRecognition(string pName)
        {
            presentationName = pName;
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                string grammar = System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Presentations\\" + presentationName + "\\SpeechRecognition\\soundlog.cfg";
                /* if (File.Exists(grammar)) { File.Delete(grammar); } executeCommand(); */
                recContext = new SpSharedRecoContextClass();
                recContext.CreateGrammar(0, out recGrammar);
                if (File.Exists(grammar))
                {
                    recGrammar.LoadCmdFromFile(grammar, SPLOADOPTIONS.SPLO_STATIC);
                    recGrammar.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);
                    recGrammar.SetRuleIdState(0, SPRULESTATE.SPRS_ACTIVE);
                }
                recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);
                //recContext.RecognitionForOtherContext += new _ISpeechRecoContextEvents_RecognitionForOtherContextEventHandler(handleRecognition);
                //System.Windows.Forms.MessageBox.Show("olari");
            }
        }

        private void handleRecognition(int StreamNumber, object StreamPosition,
            SpeechLib.SpeechRecognitionType RecognitionType, SpeechLib.ISpeechRecoResult Result)
        {
            System.Windows.Forms.MessageBox.Show("entrei");
            string temp = Result.PhraseInfo.GetText(0, -1, true);
            _recognizedText = "";
            foreach (string word in recognizedWords)
            {
                if (temp.Contains(word))
                {
                    _recognizedText = word;
                }
            }
        }

        public void run()
        {
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_identifiedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\identifiedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _identifiedVoices = (double[])o;
            }
            if (File.Exists(System.Environment.GetEnvironmentVariable("PUBLIC") +
                "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt"))
            {
                deserializer = new XmlSerializer(_deletedVoices.GetType());
                FileStream fs = new FileStream(System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Serialization\\Voices\\deletedVoicesDLL.txt", FileMode.Open);
                Object o = deserializer.Deserialize(fs);
                fs.Close();
                _deletedVoices = (ArrayList)o;
            }
            myTimer.Interval = 5000;
            myTimer.Tick += new EventHandler(clearData);
            myTimer.Start();
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                _waveFormat = new WaveFormat(_samples, 16, 2);
                _recorder = new WaveInRecorder(-1, _waveFormat, 8192 * 2, 3, new BufferDoneEventHandler(DataArrived));
                _scaleHz = (double)_samples / _fftLength;
                _limit = (int)((double)_limitVoice / _scaleHz);
                SoundLogDLL.MelFrequencyCepstrumCoefficients.calculateFrequencies(_samples, _fftLength);
            }
        }

    startRecognition is a speech-recognition method that loads a grammar and registers the recognition handler here:

        recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);

    Now I have a problem. When I call startRecognition before run, both handlers (the recognition handler and the Tick handler) work well: if a word is recognized, handleRecognition is called. But when I call run before startRecognition, both methods seem to run fine, yet the recognition handler is never executed, even though I can see that words are recognized (they appear in the Windows Speech Recognition app). What can I do so that the recognition handler is always called?

    Read the article

< Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >