Search Results

Search found 24715 results on 989 pages for 'output parameters'.


  • How to set up Mod_WSGI for Python on Ubuntu

    - by AutomatedTester
    Hi, I am trying to set up mod_wsgi on my Ubuntu box. I found the steps I needed to follow at http://ubuntuforums.org/showthread.php?t=833766:

        sudo apt-get install libapache2-mod-wsgi
        sudo a2enmod mod-wsgi
        sudo /etc/init.d/apache2 restart
        sudo gedit /etc/apache2/sites-available/default

    and update the Directory block:

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews ExecCGI
            AddHandler cgi-script .cgi
            AddHandler wsgi-script .wsgi
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

        sudo /etc/init.d/apache2 restart

    I created test.wsgi with:

        def application(environ, start_response):
            status = '200 OK'
            output = 'Hello World!'
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    Step 2 fails because it says it can't find mod-wsgi, even though apt-get found it. If I carry on with the steps, the Python app just shows as plain text in a browser. Any ideas what I have done wrong?

    EDIT: Results for questions asked

        automatedtester@ubuntu:~$ dpkg -l libapache2-mod-wsgi
        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
        ||/ Name                 Version  Description
        +++-====================-========-======================================
        ii  libapache2-mod-wsgi  2.5-1    Python WSGI adapter module for Apache

        automatedtester@ubuntu:~$ dpkg -s libapache2-mod-wsgi
        Package: libapache2-mod-wsgi
        Status: install ok installed
        Priority: optional
        Section: python
        Installed-Size: 376
        Maintainer: Ubuntu MOTU Developers <[email protected]>
        Architecture: i386
        Source: mod-wsgi
        Version: 2.5-1
        Depends: apache2, apache2.2-common, libc6 (>= 2.4), libpython2.6 (>= 2.6), python (>= 2.5), python (<< 2.7)
        Suggests: apache2-mpm-worker | apache2-mpm-event
        Conffiles:
         /etc/apache2/mods-available/wsgi.load 06d2b4d2c95b28720f324bd650b7cbd6
         /etc/apache2/mods-available/wsgi.conf 408487581dfe024e8475d2fbf993a15c
        Description: Python WSGI adapter module for Apache
         The mod_wsgi adapter is an Apache module that provides a WSGI (Web Server
         Gateway Interface, a standard interface between web server software and web
         applications written in Python) compliant interface for hosting Python based
         web applications within Apache. The adapter provides significantly better
         performance than using existing WSGI adapters for mod_python or CGI.
        Original-Maintainer: Debian Python Modules Team <[email protected]>
        Homepage: http://www.modwsgi.org/

        automatedtester@ubuntu:~$ sudo a2enmod libapache2-mod-wsgi
        ERROR: Module libapache2-mod-wsgi does not exist!
        automatedtester@ubuntu:~$ sudo a2enmod mod-wsgi
        ERROR: Module mod-wsgi does not exist!
    FURTHER EDIT FOR RMYates:

        automatedtester@ubuntu:~$ apache2ctl -t -D DUMP_MODULES
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_worker_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgid_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         python_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK
        automatedtester@ubuntu:~$
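
    A likely fix, going by the Conffiles lines in the dpkg -s output above: the package installs /etc/apache2/mods-available/wsgi.load and wsgi.conf, so the module name a2enmod expects is plain wsgi, not mod-wsgi. A minimal sketch:

        # the mods-available entries are wsgi.load/wsgi.conf, so enable "wsgi"
        sudo a2enmod wsgi
        sudo /etc/init.d/apache2 restart
        # confirm wsgi_module now appears alongside the other loaded modules
        apache2ctl -t -D DUMP_MODULES | grep wsgi

    With the module actually loaded, the AddHandler wsgi-script directive stops being ignored, which is why the script was previously served as plain text.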

    Read the article

  • XAML2CPP 1.0.2.0

    - by Valter Minute
    A new, updated release of everybody's favourite XAML to CPP conversion tool (at least because it's the only one available!).

    New features:
    - support for resource dictionaries (app.xaml, if you use Blend to generate your XAML)

    Bugfixes:
    - the parameters for the MouseLeftButtonDown and MouseLeftButtonUp events were incorrect

    As usual you can download the new release here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/XAML2CPP.zip

    Technorati Tags: XAML, Silverlight for Windows Embedded

    Read the article

  • Flash in browsers does not play sound accurately using Pulse network audio

    - by Dave M G
    I use PulseAudio to send sound over the LAN to an audio server. When playing any Flash media in Firefox or Chrome, the sound flutters, as if the volume were going up and down every second. The problem does not occur with any other software, and I think it's specific to how Flash interacts with my sound setup. How do I get Flash to play nicely with the PulseAudio network sound server?

    Update: I have discovered that I can stop the sound fluttering if I follow these steps:

    1. Start a Flash video
    2. Run pulseaudio --kill on the server
    3. Wait about 7 seconds

    After this, the PulseAudio server automatically respawns, and the sound in the Flash video is perfect. The problem now, though, is that I have to do this every time I start a Flash video, which is obviously not desirable. So, the question is: how do I make whatever it is that makes the sound work when I go through these steps stick, so that I don't have to repeat them?

    Also, I've uploaded some PulseAudio log output to Pastebin, taken while attempting to play a Flash video, in case that helps. I've tried to get logging details from Flash, but despite installing and enabling Flash for debugging, it has not generated any output at all.

    Details: I have uploaded an example video of the problem onto YouTube. In the video you can see the opening of a TED Talk video, and the sound flutters as it plays. The video also stutters while playing back. Here are my sound device output settings: (screenshot omitted)
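
    Until the root cause is found, a small helper that just automates the manual workaround above might do. This is a sketch only: it assumes SSH access to the audio server, that PulseAudio there is configured to respawn after being killed (as the update describes), and the hostname is a placeholder:

        #!/bin/sh
        # kick-pulse.sh (hypothetical): replicate the manual fix from the update above
        AUDIO_SERVER=audio-server.local   # placeholder; use the real server name
        ssh "$AUDIO_SERVER" 'pulseaudio --kill'
        sleep 7   # the respawn takes roughly 7 seconds per the steps above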

    Read the article

  • LVS TCP connection timeouts - lingering connections

    - by Jon Topper
    I'm using keepalived to load-balance connections between a number of TCP servers. I don't expect it matters, but the service in this case is rabbitmq. I'm using NAT-type balancing with weighted round-robin. A client connects to the server thus:

        [client] ----a---- [lvs] ----b---- [real server]

    If a client connects to the LVS and remains idle, sending nothing on the socket, the connection eventually times out, according to the timeouts set using ipvsadm --set. At this point, connection 'a' above correctly disappears from the output of netstat -anp on the client, and from the output of ipvsadm -L -n -c on the LVS box. Connection 'b', however, remains ESTABLISHED according to netstat -anp on the real server box. Why is this? Can I force LVS to properly reset the connection to the real server?
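
    A hedged workaround rather than a confirmed LVS answer: when the director expires its connection state it does not notify the real server, so an idle ESTABLISHED socket on that end only goes away once it tries to send something. Kernel TCP keepalives make that happen, provided the application (rabbitmq here) sets SO_KEEPALIVE on its sockets; the values below are only illustrative:

        # on the real server: probe connections idle for 10 minutes,
        # then every 60 s, dropping them after 5 unanswered probes
        sysctl -w net.ipv4.tcp_keepalive_time=600
        sysctl -w net.ipv4.tcp_keepalive_intvl=60
        sysctl -w net.ipv4.tcp_keepalive_probes=5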

    Read the article

  • Google Search Parameter Question

    - by Brian
    I've been trying to determine different parameters used by Google in their search queries. In particular, the usg parameter is what is giving me troubles. Here is an example value given for it, which is from an actual Google query: usg=0_zDqudnCN52ATGjAl3tignXNtBo4%3D Does anyone know what it could be for / recognize it? I've done a bit of digging, but haven't found any confirmation as to what it could be. Here is the link that I took a look at: http://www.webmasterworld.com/google/3892573.htm

    Read the article

  • xrandr doesn’t detect display ports

    - by Psyhister
    I have a ThinkPad T510 laptop with Gentoo Linux installed on it, and I can't manage to get VGA and DisplayPort working. xrandr -q won't show them, so I'm guessing there's a problem with my kernel configuration, but I wasn't able to find the options responsible for these ports. Here's the output from xrandr -q:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 175, current 1366 x 768, maximum 1366 x 768
        default connected 1366x768+0+0 0mm x 0mm
           1366x768       50.0*    51.0     52.0
           1024x768       53.0     54.0
           832x624        55.0
           800x600        56.0     57.0     58.0     59.0     60.0
           720x400        61.0
           700x525        62.0
           640x512        63.0     64.0
           640x480        65.0     66.0     67.0     68.0     69.0
           640x400        70.0
           640x350        71.0
           576x432        72.0
           512x384        73.0     74.0     75.0     76.0     77.0
           416x312        78.0
           400x300        79.0     80.0     81.0     82.0     83.0
           360x200        84.0
           320x240        85.0     86.0     87.0     88.0
           320x200        89.0
           320x175        90.0

    Can anyone help me figure out what the problem is and how to get the video connections to work?
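
    A hedged reading of that output: a single output literally named "default", plus the gamma error, usually means X is running on an unaccelerated framebuffer driver rather than a native KMS driver, so no physical connectors (VGA, DisplayPort) get enumerated at all. On the Intel-graphics T510 models, the kernel options to check are roughly these (option names per kernels of this era; verify against your kernel version):

        # if CONFIG_IKCONFIG_PROC is enabled, inspect the running kernel:
        zgrep -E 'CONFIG_DRM=|CONFIG_DRM_I915' /proc/config.gz

        # in "make menuconfig", under Device Drivers -> Graphics support:
        #   CONFIG_DRM=y
        #   CONFIG_DRM_I915=y       # Intel integrated graphics
        #   CONFIG_DRM_I915_KMS=y   # kernel modesetting on by default (older kernels)
        # then rebuild the kernel, and make sure VIDEO_CARDS="intel" is set in
        # /etc/portage/make.conf before re-emerging the xorg drivers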

    Read the article

  • URL Routing in ASP.NET 4.0

    In the .NET Framework 3.5 SP1, Microsoft introduced ASP.NET Routing, which decouples the URL of a resource from the physical file on the web server. With ASP.NET Routing you, the developer, define routing rules that map route patterns to a class that generates the content. For example, you might indicate that the URL Categories/CategoryName maps to a class that takes the CategoryName and generates HTML that lists that category's products in a grid. With such a mapping, users could view products for the Beverages category by visiting www.yoursite.com/Categories/Beverages. In .NET 3.5 SP1, ASP.NET Routing was primarily designed for ASP.NET MVC applications, although as discussed in Using ASP.NET Routing Without ASP.NET MVC it is possible to implement ASP.NET Routing in a Web Forms application as well. However, implementing ASP.NET Routing in a Web Forms application involves a bit of seemingly excessive legwork. In a Web Forms scenario we typically want to map a routing pattern to an actual ASP.NET page. To do so we need to create a route handler class that is invoked when the routing URL is requested and, in a sense, dispatches the request to the appropriate ASP.NET page. For instance, mapping a route to a physical file, such as mapping Categories/CategoryName to ShowProductsByCategory.aspx, requires three steps: (1) define the mapping in Global.asax, which maps a route pattern to a route handler class; (2) create the route handler class, which is responsible for parsing the URL, storing any route parameters into some location that is accessible to the target page (such as HttpContext.Items), and returning an instance of the target page or HTTP Handler that handles the requested route; and (3) write code in the target page to grab the route parameters and use them in rendering its content. Given how much effort it took to just read the preceding sentence (let alone write it), you can imagine that implementing ASP.NET Routing in a Web Forms application is not necessarily the most straightforward task. The good news is that ASP.NET 4.0 has greatly simplified ASP.NET Routing for Web Forms applications by adding a number of classes and helper methods that can be used to encapsulate the aforementioned complexity. With ASP.NET 4.0 it's easier to define the routing rules and there's no need to create a custom route handling class. This article details these enhancements. Read on to learn more! Read More >

    Read the article

  • Properly escaping forward slash in bash script for usage with sed

    - by user331839
    I'm trying to determine the size of the files that would be newly copied when syncing two folders, by running rsync in dry-run mode and then summing up the sizes of the files listed in its output. Currently I'm stuck at prefixing the files with their parent folder. I found out how to prefix lines using sed and how to escape for sed, but I'm having trouble combining those two. This is how far I got:

        source="/my/source/folder/"
        target="/my/target/folder/"
        escaped=`echo "$source" | sed -e 's/[\/&]/\\//g'`
        du `rsync -ahnv $source $target | tail -n +2 | head -n -3 | sed "s/^/$escaped/"` | awk '{i+=$1} END {print i}'

    This is the output I get from bash -x myscript.sh:

        + source=/my/source/folder/
        + target=/my/target/folder
        ++ echo /my/source/folder/
        ++ sed -e 's/[\/&]/\//g'
        + escaped=/my/source/folder/
        + awk '{i+=$1} END {print i}'
        ++ rsync -ahnv /my/source/folder/ /my/target/folder/
        ++ sed 's/^//my/source/folder//'
        ++ head -n -3
        ++ tail -n +2
        sed: -e expression #1, char 8: unknown option to `s'
        + du
        80268

    Any ideas on how to properly escape would be highly appreciated.
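
    A hedged sketch of the usual fix: inside backquotes the \\ is collapsed to \ before sed ever runs, so the replacement becomes a plain / and nothing actually gets escaped (visible in the trace: escaped=/my/source/folder/). Rather than fighting the escaping, give the s command a delimiter that cannot occur in the path, and then no escaping is needed at all:

        source="/my/source/folder/"
        target="/my/target/folder/"
        # '|' as the delimiter means '/' in $source is just an ordinary character;
        # this assumes the paths never contain '|'
        du $(rsync -ahnv "$source" "$target" | tail -n +2 | head -n -3 \
              | sed "s|^|$source|") | awk '{i+=$1} END {print i}'

    (Note this still word-splits the file list, so paths containing spaces would need further care.)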

    Read the article

  • code metrics for .net code

    - by user20358
    While the code metrics tool gives a pretty good analysis of the code being analyzed, I was wondering if there is any benchmark on acceptable standards for the following as well:

    - Maximum number of types per assembly
    - Maximum number of such types that can be accessible
    - Maximum number of parameters per method
    - Acceptable RFC count
    - Acceptable afferent coupling count
    - Acceptable efferent coupling count

    Any other metrics to judge the quality of .NET code by? Thanks for your time.

    Read the article

  • Recreating OMS instances in a HA environment when instances on all nodes are lost

    - by rnigam
    Oracle highly recommends deploying EM in a HA environment. The best practices for HA deployments, backup and housekeeping of your Enterprise Manager environment are documented in the Oracle Enterprise Manager Advanced Configuration Guide. It is imperative that there is a good disaster recovery plan in place for your EM deployment. In this post I want to talk about a customer who failed to do the correct planning and housekeeping for EM and landed in a situation where all the OMSes would have been blown away had we not jumped in to help.

    We recently hit an issue at a customer site where we had a two-node OMS setup of the Enterprise Manager and a RAC database being used as the EM repository. An accidental delete of the OMS Oracle home left us with a single-node deployment. While we were trying to figure out a possible path to recover the first node, the second node was rebooted under a maintenance window. What followed was a complete site outage, as the Admin and managed servers would not start on either of the nodes.

    In my situation there were:

    - No backups of the Oracle homes from any node
    - No OMS configuration snapshots (created using the "emctl exportconfig oms" command), and the instance home was completely lost on node 1, which also had the Admin Server

    We did however have:

    - A copy of the emkey.ora that I found under the OMS_ORACLE_HOME/ of the second node. (NOTE: it is a bad practice to have your emkey present under the OMS Oracle home directory on the same server as the OMS. The backup of the emkey should be maintained on some other server. In this case, however, it was a savior, since there were no backups.)
    - The OMS Oracle home on the second node, but missing a number of files and with a number of changes made to the files in the home

    There were a number of attempts to start the server by modifying various files based on the WebLogic server logs, to have at least one node up and running, but all of them failed. Here is how you can recover from this scenario. Follow these steps:

    STEP 1: Check the status of emkey.ora

    Check whether the emkey is present in the EM repository or not. Run the following command:

        $OMS_ORACLE_HOME/bin/emctl status emkey

    If the output is something like the below, then you are good to go and the key is present in the repository:

        ./emctl status emkey
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        Enter Enterprise Manager Root (SYSMAN) Password :
        The EMKey is configured properly.

    Here are the messages that you might see as the emctl status emkey output, depending upon whether the EM Admin Server is up and whether the key is configured properly:

    Case 1 (AdminServer is up, emkey is proper in CredStore & not in repos), the same as the output of the command shown above:
        The EMKey is configured properly.

    Case 2 (AdminServer is up, emkey is proper in CredStore & exists in repos):
        The EMKey is configured properly, but is not secure. Secure the EMKey by running "emctl config emkey -remove_from_repos".

    Case 3 (AdminServer is down or emkey is corrupted in CredStore, and emkey exists in repos):
        The EMKey exists in the Management Repository, but is not configured properly or is corrupted in the credential store. Configure the EMKey by running "emctl config emkey -copy_to_credstore".

    Case 4 (AdminServer is down or emkey is corrupted in CredStore, and emkey does not exist in repos):
        The EMKey is not configured properly or is corrupted in the credential store and does not exist in the Management Repository. To correct the problem: 1) Get the backed-up emkey.ora file. 2) Configure the emkey by running "emctl config emkey -copy_to_credstore_from_file".

    If the key was not secured properly, it will have to be put in the repository before proceeding; see STEP 2 for doing this. There may be cases (like mine) where running emctl gives errors like the following:

        $OMS_ORACLE_HOME/bin/emctl status emkey
        Exception in thread "Main Thread" java.lang.NoClassDefFoundError: oracle/security/pki/OracleWallet
        at oracle.sysman.emctl.config.oms.EMKeyCmds.main (EMKeyCmds.java:658)

    Just move to the next step to put the key back in the repository.

    STEP 2: Put emkey.ora back in the repository

    Skip this step if your emkey.ora is present in the repository. If not, you need to put the key back in the repository. See if you can run the following command (with sample output):

        $OMS_ORACLE_HOME/bin/emctl config emkey -copy_to_repos
        Oracle Enterprise Manager 11g Release 1 Grid Control
        Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
        The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
        After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".

    Typically the key is present under the $OMS_ORACLE_HOME/sysman/config directory before being removed after the install, as a best practice. If you hit any errors while running emctl commands, like the one mentioned in STEP 1, jump to STEP 3; we will take care of the emkey.ora in STEP 7.

    STEP 3: Get the port information

    Check for the existing port information in the emd.properties file under the EM instance directory (typically the gc_inst directory right above the Middleware home where you have deployed EM; e.g. /u01/app/oracle/product/gc_inst in case your OMS home is /u01/app/oracle/product/Middleware/oms11g). In my case I got the information from the emgc.properties present in the gc_inst on the second node. If you can run emctl, you may want to try the following command as well:

        $OMS_ORACLE_HOME/bin/emctl status oms -details

    Note this information, as it will be used in the next step.

    STEP 4: Perform cleanup on node 1

    Note the Oracle homes of the WebLogic Server and OMS, and get the list of applied patches in the homes (using the opatch lsinventory command). Take a backup copy of the homes just in case we need them, then de-install/remove the Oracle homes, update the inventory, and clean up processes on the first node.

    STEP 5: Perform a software-only installation of the OMS on node 1

    Perform the WebLogic 10.3.2 installation in exactly the same location as in the earlier installation. Then perform a software-only installation of the OMS using the following command, which will not run any configuration assistants and bypasses all user-interface validations:

        runInstaller -noconfig -validationaswarnings

    Select the "Additional OMS" option while performing the installation. Provide the same paths for the OMS and instance directories as in the previous installation, and use the port information collected in STEP 3. Once the installation is complete, run the allroot.sh script to complete the binary deployment.

    STEP 6: Apply one-off patches

    At this point you can apply any patches to the OMS Oracle home that were applied previously. You only need to run opatch to install the patch in the home; it is not required to run the SQLs.

    STEP 7: Copy the EM key

    This step is only required if you were not able to use the emctl command to put the emkey back into the EM repository in STEP 2. Copy the emkey.ora file of the old installation into the $OMS_ORACLE_HOME/sysman/config directory of the newly installed OMS.

    STEP 8: Configure the Grid Control domain

    Run the following command to configure the EM domain and OMS. Note that you need to use a different GC domain name than what you used earlier. For example, I used GCDOMAIN11 as the new domain name when my previous domain name was GCDOMAIN:

        $OMS_ORACLE_HOME/bin/omsca new -AS_USERNAME weblogic -EM_DOMAIN_NAME GCDOMAIN11 -NM_USER nodemanager -nostart

    This command will prompt for a number of inputs, like the Admin Server hostname, port, password, etc. Verify that the defaults shown are correct by pressing Enter, or provide a new value.

    STEP 9: Run the add-on configuration assistant

    After this step, run the following add-on configuration assistant. This was used in my case to configure the virtualization add-on:

        $OMS_ORACLE_HOME/addonca -oui -omsonly -name vt -install gc

    STEP 10: Start the OMS

    Now start the OMS using:

        $OMS_ORACLE_HOME/bin/emctl start oms

    In a multi-node setup like mine, you would have either a software load balancer or DNS round robin (using a virtual host name that resolves to one of multiple OMS hostnames) being used for load balancing. Secure the OMS against the SLB or DNS virtual hostname using the following:

        $OMS_HOME/bin/emctl secure oms -host slb.example.com -secure_port 1159 -slb_port 1159 -slb_console_port 443

    STEP 11: Configure the agent

    From $AGENT_ORACLE_HOME/bin, run:

        ./agentca -f

    At this point you should have your OMS on node 1 fully recovered. Clean up node 2 and use the normal Additional OMS installation process documented in the official installation guide to add the additional OMS on node 2.

    Summary

    It took us a little over two days to completely recover the environment, with some other non-EM-related issues hitting us along the way as well. In the end, a situation like this could have been completely avoided had proper housekeeping and backups of the Enterprise Manager deployment been done in the first place. That is going to be a topic that we cover in the next post. In the meantime, please do refer to the Oracle Enterprise Manager Advanced Configuration Guide for planning your EM installation, backup and housekeeping procedures. It can be found here: http://download.oracle.com/docs/cd/E11857_01/index.htm

    Thanks

    This post would not have been possible without Raj Aggarwal, Prasad Chebrolu and Ravikumar Basa, who helped to recover the environment and provided all the support we needed.
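
    As a closing illustration (a sketch, not from the post itself; the exportconfig flag reflects common usage of the "emctl exportconfig oms" command mentioned above, and all paths are placeholders), the whole ordeal is avoidable with a periodic configuration snapshot plus an off-host emkey backup:

        # run on each OMS host; keep the output somewhere off the OMS itself
        $OMS_ORACLE_HOME/bin/emctl exportconfig oms -dir /backup/em/config_snapshots
        # keep the emkey backup on a different server, never under the OMS home
        scp /secure/location/emkey.ora backuphost:/backup/em/emkey/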

    Read the article

  • Problem starting Glassfish on a VPS

    - by Raydon
    I am attempting to install GlassFish v3 on my Ubuntu (8.04) VPS using Java 1.6. I initially tried starting the server using:

        asadmin start-domain

    and received the following error message:

        JVM failed to start: com.sun.enterprise.admin.launcher.GFLauncherException:
        The server exited prematurely with exit code 1.
        Before it died, it produced the following output:
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Command start-domain failed.

    I attempted to run it again and received a different message:

        Waiting for DAS to start
        Error starting domain: domain1.
        The server exited prematurely with exit code 1.
        Before it died, it produced the following output:
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.
        Command start-domain failed.

    If I run cat /proc/meminfo I get the following (all other values are 0 kB):

        MemTotal:  1310720 kB
        MemFree:   1150668 kB
        LowTotal:  1310720 kB
        LowFree:   1150668 kB

    I have checked the contents of glassfish/glassfish/domains/domain1/config/domain.xml and the JVM setting is -Xmx512m. Any help on resolving this problem would be appreciated.
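
    A hedged first step: "Could not reserve enough space for object heap" means the JVM could not reserve its configured heap at startup. On container-style VPSes the free-memory figure can be misleading (address-space limits are often stricter than resident memory), so the usual remedy is to shrink the heap-related jvm-options in domain.xml and retry:

        # reduce the max heap from 512m to something the VPS will grant (example value)
        sed -i 's/-Xmx512m/-Xmx256m/' glassfish/glassfish/domains/domain1/config/domain.xml
        asadmin start-domain
        # if it still fails, the -XX:MaxPermSize and -Xss entries in the same
        # file (as they appear in a stock domain.xml) are worth lowering too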

    Read the article

  • Bulk convert PNG-24 to PNG-8 files with best quality

    - by Gavin
    Hi, can anybody recommend a good method of bulk converting a large number of PNG-24 files to PNG-8 with as little loss of quality as possible, while maintaining transparency? I've tried ImageMagick, but the resulting images weren't quite as crisp as I'd like. Using Paint.NET I was able to achieve far better results, but I can't bulk-process with this tool as far as I know. The settings I used with ImageMagick, in case there are better options to use:

        convert file.png -depth 4 file-output.png

    I've also been playing with OptiPNG, but I haven't discovered a way of making sure the output images are PNG-8. Cheers, Gavin
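
    A hedged suggestion (pngquant is a separate open-source tool, not part of ImageMagick; the flags reflect its usual command line): pngquant does alpha-aware palette quantization, which is generally what keeps PNG-8 output crisp, and it batch-processes globs directly:

        # rewrite every PNG in place as an 8-bit palette image with alpha
        pngquant --force --ext .png 256 *.png

    Within ImageMagick itself, forcing the PNG8 coder and a palette is closer to what Paint.NET does than -depth 4, which only reduces channel depth (note that older ImageMagick PNG8 output supports binary, not partial, transparency):

        convert file.png -colors 256 PNG8:file-output.png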

    Read the article

  • ImageMagick failing to convert to JPG

    - by johnui
    Hi: We recently installed the latest version of ImageMagick onto our Linux server. I seem to be having issues performing the most basic of tasks. I am running this command line:

        /usr/bin/convert /location/to/source/design.ai /location/to/save/output.jpg

    Unfortunately it saves output.jpg as an Illustrator file (if I rename the file to output.ai it opens). Even if I do this:

        /usr/bin/convert /location/to/source/design.ai -rotate 90 /location/to/save/design.jpg

    it rotates the file and saves it, again as an Illustrator document. This happens with all file types (e.g. png, bmp, etc.). It appears ImageMagick cannot figure out what I want it converted to and just saves it as the same file type. Any ideas on fixing this? Regards: John
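
    Two hedged things to check. Forcing the output coder explicitly rules out any extension-detection oddity, and since .ai files are PostScript/PDF underneath, ImageMagick needs a working Ghostscript delegate to rasterize them at all:

        # force the JPEG coder regardless of the output filename
        /usr/bin/convert /location/to/source/design.ai jpg:/location/to/save/output.jpg

        # verify Ghostscript and the PS/PDF delegates are present
        gs --version
        convert -list delegate | grep -Ei 'ps:|pdf'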

    Read the article

  • Bash on Snow Leopard doesn't obey terminal colours

    - by karbassi
    With the new version of Snow Leopard, OS X upgraded bash to GNU bash, version 3.2.48(1)-release (x86_64-apple-darwin10.0). Now, my .bashrc sets the following settings:

        # Colors
        export TERM=xterm-color
        export GREP_OPTIONS='--color=auto' GREP_COLOR='1;32'
        export CLICOLOR=1
        export LSCOLORS=ExGxFxDxCxHxHxCbCeEbEb

        # Setup some colors to use later in interactive shell or scripts
        export COLOR_NC='\e[0m' # No Color
        export COLOR_WHITE='\e[1;37m'
        export COLOR_BLACK='\e[0;30m'
        export COLOR_BLUE='\e[0;34m'
        export COLOR_LIGHT_BLUE='\e[1;34m'
        export COLOR_GREEN='\e[0;32m'
        export COLOR_LIGHT_GREEN='\e[1;32m'
        export COLOR_CYAN='\e[0;36m'
        export COLOR_LIGHT_CYAN='\e[1;36m'
        export COLOR_RED='\e[0;31m'
        export COLOR_LIGHT_RED='\e[1;31m'
        export COLOR_PURPLE='\e[0;35m'
        export COLOR_LIGHT_PURPLE='\e[1;35m'
        export COLOR_BROWN='\e[0;33m'
        export COLOR_YELLOW='\e[1;33m'
        export COLOR_GRAY='\e[1;30m'
        export COLOR_LIGHT_GRAY='\e[0;37m'

    The colours are used later on for output. This used to work in previous versions of OS X, but now my output is as such: (screenshot omitted)

    Some ideas that have not worked: switching Terminal.app from 64-bit to 32-bit.
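
    A hedged guess at the cause: with plain single quotes those variables hold the two literal characters \ and e, which only become a real escape when something later interprets them (echo -e, the PS1 parser, and so on); plain echo in bash 3.2 does not. Bash's ANSI-C quoting stores the actual ESC byte in the variable at assignment time, so the colour renders no matter how the variable is printed:

        # $'...' expands \e when the assignment runs, so plain echo works too
        export COLOR_NC=$'\e[0m'        # No Color
        export COLOR_GREEN=$'\e[0;32m'
        export COLOR_RED=$'\e[0;31m'
        # \033 is the more portable spelling of the same byte:
        export COLOR_BLUE=$'\033[0;34m'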

    Read the article

  • The SQL Server Reporting Services SDK for PHP Debuts

    - by The Official Microsoft IIS Site
    Microsoft has just released the SQL Server Reporting Services SDK for PHP, which enables PHP developers to easily create reports and integrate them in their web applications. The SDK offers a simple Application Programming Interface to interoperate with SQL Server Reporting Services, Microsoft's Reporting and Business Intelligence solution. Developers will be able to use the SDK to perform common operations like listing reports in PHP applications, providing custom report parameters from a PHP...(read more)

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed "Denali" we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I'm obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I'll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions), and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL.

    Can you relate?

    In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It's often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they're comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms... SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it, and those Columns contain the data that is interesting about the Event Class, such as the duration or database name.

    In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I'll stick with Event) is equivalent to the Event Class in SQL Trace, since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I'll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event, and it will cause some additional "action" to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice.

    Let me draw you a map.

    Event Mapping

    The table dbo.trace_xe_event_map exists in the master database with the following structure:

        Column_name       Type
        ---------------   --------
        trace_event_id    smallint
        package_name      nvarchar
        xe_event_name     nvarchar

    By joining this table to sys.trace_events using trace_event_id, and to sys.dm_xe_objects using xe_event_name, you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
        SELECT
            t.trace_event_id,
            t.name [event_class],
            e.package_name,
            e.xe_event_name
        FROM sys.trace_events t
        INNER JOIN dbo.trace_xe_event_map e
            ON t.trace_event_id = e.trace_event_id

    There are a couple of things you'll notice as you peruse the output of this query:

    - For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on.
    - We've mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined them when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a "smarter" implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures.
    - There are some Event Classes that did not make the cut and were not migrated. These fall into two categories. There were a few Event Classes that had been deprecated, or that just did not make sense, so we didn't migrate them. (You won't find an Event related to mounting a tape, sorry.) The second category is bigger: with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008, and that will be the compliance and auditing feature going forward. This is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing, so you can assign these tasks to different people if you choose. (If you're wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)

    Action Mapping

    The table dbo.trace_xe_action_map exists in the master database with the following structure:

        Column_name       Type
        ---------------   --------
        trace_column_id   smallint
        package_name      nvarchar
        xe_action_name    nvarchar

    You can find more details by joining this to sys.trace_columns on the trace_column_id field:

        SELECT
            c.trace_column_id,
            c.name [column_name],
            a.package_name,
            a.xe_action_name
        FROM sys.trace_columns c
        INNER JOIN dbo.trace_xe_action_map a
            ON c.trace_column_id = a.trace_column_id

    If you examine this list, you'll notice that there are relatively few Actions that map to SQL Trace Columns, given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions.

    Putting it all together

    If you've spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn't, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We've used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions, using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.
        USE MASTER;
        GO
        SELECT DISTINCT
            tb.trace_event_id,
            te.name AS 'Event Class',
            em.package_name AS 'Package',
            em.xe_event_name AS 'XEvent Name',
            tb.trace_column_id,
            tc.name AS 'SQL Trace Column',
            am.xe_action_name AS 'Extended Events action'
        FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em
            ON te.trace_event_id = em.trace_event_id)
        LEFT OUTER JOIN sys.trace_event_bindings tb
            ON em.trace_event_id = tb.trace_event_id
        LEFT OUTER JOIN sys.trace_columns tc
            ON tb.trace_column_id = tc.trace_column_id
        LEFT OUTER JOIN dbo.trace_xe_action_map am
            ON tc.trace_column_id = am.trace_column_id
        ORDER BY te.name, tc.name

    As you might imagine, it's also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

        USE MASTER;
        GO
        DECLARE @trace_id int
        SET @trace_id = 1
        SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'
            , el.columnid, ec.xe_action_name AS 'action'
        FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
            LEFT OUTER JOIN dbo.trace_xe_event_map AS em
                ON el.eventid = em.trace_event_id)
        LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
            ON el.columnid = ec.trace_column_id
        WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

    You'll notice in the output that the list doesn't include any of the security audit Event Classes; as I wrote earlier, those were not migrated.

    But wait... there's more!

    If this were an infomercial there'd be some obnoxious guy next to me blogging "Well Mike... that's pretty neat, but I'm sure you can do more. Can't you make it even easier to migrate from SQL Trace?" Needless to say, I'd blog back, in an overly excited way, "You bet I can, obnoxious blogger side-kick!" What I've got for you here is an Extended Events Team Blog only special: this tool will not be sold in any store; it's a special offer for those of you reading the blog. I've wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure, you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session; you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn't have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I'm not going to spend a bunch of time explaining the code here; the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
    Sample code: TraceToExtendedEventDDL

    Installing the procedure

    Just in case you're not familiar with installing CLR procedures... once you've compiled the assembly you can load it using a script like this:

        -- Context to master
        USE master
        GO
        -- Create the assembly from a shared location.
        CREATE ASSEMBLY TraceToXESessionConverter
        FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
        WITH PERMISSION_SET = SAFE
        GO
        -- Create a stored procedure from the assembly.
        CREATE PROCEDURE CreateEventSessionFromTrace
            @trace_id int,
            @session_name nvarchar(max)
        AS EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
        GO

    Enjoy!

    -Mike

    Read the article

  • Getting macro keys from a razer blackwidow to work on linux

    - by Journeyman Geek
    I picked up a Razer BlackWidow Ultimate that has additional keys meant for macros, which are set using a tool that's installed on Windows. I'm assuming that these aren't some fancypants joojoo keys and should emit scancodes like any other keys. Firstly, is there a standard way to check these scancodes in Linux? Secondly, how do I set these keys to do things in command-line and X-based Linux setups? My current Linux install is Xubuntu 10.10, but I'll be switching to Kubuntu once I have a few things fixed up. Ideally the answer should be generic and system-wide.

    Things I have tried so far:

    - showkey from the built-in kbd package (in a separate VT): macro keys not detected
    - xev: macro keys not detected
    - lsusb and evdev output
    - this AHK script's output suggests the M keys are not outputting standard scancodes

    Things I need to try:

    - SnoopyPro + reverse engineering (oh dear)
    - Wireshark: preliminary futzing around seems to indicate no scancodes are emitted when what I think is the keyboard is monitored and keys are pressed. Might indicate the additional keys are a separate device or need to be initialised somehow. Need to cross-reference that with lsusb output from Linux, in three scenarios: standalone, passed through to a Windows VM without the drivers installed, and the same with them. lsusb only detects one device on a standalone Linux install.
    - It might be useful to check if the mice use the same Razer Synapse driver, since that would mean some variation of razercfg might work (not detected; it only seems to work for mice).

    Things I have worked out:

    - In a Windows system with the driver, the keyboard is seen as a keyboard and a pointing device. And said pointing device uses, in addition to your bog-standard mouse drivers, a driver for something called a Razer Synapse.
    - The mouse driver is seen in Linux under evdev and lsusb as well.
    - It is a single device under OS X, apparently, though I have yet to try the lsusb equivalent on that.
    - The keyboard goes into pulsing-backlight mode in OS X upon initialisation with the driver. This probably indicates that there's some initialisation sequence sent to the keyboard on activation.
    - They are, in fact, fancypants joojoo keys.

    Extending this question a little: I have access to a Windows system, so if I need to use any tools on that to help answer the question, that's fine. I can also try it on systems with and without the config utility. The expected end result is still to make those keys usable on Linux, however. I also realise this is a very specific family of hardware. I would be willing to test anything that makes sense on a Linux system if I have detailed instructions; this should open up the question to people who have Linux skills but no access to this keyboard.

    The minimum end result I require: I need these keys detected, and usable in any fashion, on any of the current mainstream graphical Ubuntu variants.
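
    One hedged diagnostic to add to the "tried" list: gaming keyboards often register several separate event devices (the macro keys sometimes live on a second USB interface), so it's worth probing every event node the BlackWidow creates rather than just the primary keyboard one. The device name and node number below are illustrative:

        # list every input device the keyboard registers
        grep -iB1 -A4 razer /proc/bus/input/devices
        # then watch each of its eventN nodes in turn while pressing the M keys
        sudo evtest /dev/input/event5   # substitute each node found above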

    Read the article

  • WCF client hell (2 replies)

    I have a remote service available via tcp://. When I add a service reference in my client project, VS doesn't create all the proxy objects! I'm missing every xxxClient class, and I have only the types used as parameters in my methods. I tried starting a new empty project and adding the same service reference, and in that project I can see all the proxy objects! It's hell; what can I do? Thanks
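
    Two hedged things to try. A frequent culprit for missing xxxClient classes is the "Reuse types in referenced assemblies" option on the service reference failing silently, so unchecking it (Configure Service Reference in VS) is worth a shot. Failing that, generating the proxy by hand with svcutil sidesteps the IDE entirely; the address below is a placeholder for the service's MEX endpoint:

        svcutil.exe net.tcp://yourhost:8731/YourService/mex /out:ServiceProxy.cs /config:app.config

    Then add the generated ServiceProxy.cs to the project and merge the bindings from the generated config.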

    Read the article

  • Writing algorithm on 2D data set in plain english

    - by Alexandre P. Levasseur
    I have started an introductory Java class, and the material is absolutely horrendous; I have to get excellent grades to be accepted into the master's degree, hence my very beginner question. In my assignment I have to write algorithms (no pseudo-code yet) to solve a board game (Sudoku). Essentially, the notes say that an algorithm is a specification of the input(s), the output(s) and the treatments applied to the input to get the output. My question lies in the wording of algorithms, because I could probably code it but I can't seem to put it on paper in a coherent way.

    The game has a 9x9 board, and one of the algorithms to write has to find the solution by looking at 3 squares (either horizontal or vertical) and seeing if one of the three sub-squares matches the number you are looking for. If none match, then the number you are looking to place is in one of the other 2 sets of 3 sub-squares (see image to get a better idea). I really can't get my head around how to formulate the solution in the terms described above, or maybe it's just too simple. Here's what I was thinking:

    Input: A 2-dimensional set of data of size 9 by 9 to be solved, and a number to search for.

    Output: A 2-dimensional set of data of size 9 by 9, either solved or partially solved.

    Treatment: Scan each set of 3x9 and 9x3 squares. For each line or column of a 3x3 square, check if the number matches a line (or column). If it does, then move to the next line (or column). If not, then proceed to the next 3x3 square in the same line (or column). Rinse and repeat.

    Does that make sense as an algorithm written in plain English? I'm not looking for an answer to the algorithm per se, but rather for help on the formulation of algorithms in plain English.

    Read the article

  • How to Install vaio-control-center from source?

    - by KasiyA
    I have a problem with turning off the keyboard backlight, which I asked about here with no useful answers. After searching on the internet I found a package for vaio-control-center and downloaded it from here, but I don't know how to install it. This is the output of trying one solution:

        USER@XXXXpc:~/vaio-control-center-0.1$ ls
        compile  Makefile  run  vaio-control-center  vcc
        COPYING  moc_main_window.cpp  ui_main_window.h  vaio-control-center.pro
        USER@XXXXpc:~/vaio-control-center-0.1$ ./compile
        make: *** No rule to make target `/usr/share/qt/mkspecs/linux-g++-64/qmake.conf', needed by `Makefile'. Stop.
        USER@XXXXpc:~/vaio-control-center-0.1$ ./run
        ./run: 3: ./run: ./vaio-control-center: not found

    Updated: I also tried @pandya's suggestion from here, and the output is as follows:

        root@user-pc:/# cd /home/user/vaio-f11-linux.control-center
        root@user-pc:/home/user/vaio-f11-linux.control-center# ls
        compile  COPYING  resource.qrc  run  sony-acpid  vaio-control-center.pro  vcc
        root@user-pc:/home/user/vaio-f11-linux.control-center# ./compile
        -su: ./compile: Permission denied
        root@user-pc:/home/user/vaio-f11-linux.control-center# sudo ./compile
        sudo: ./compile: command not found
        root@user-pc:/home/user/vaio-f11-linux.control-center# gksudo ./compile
        root@user-pc:/home/user/vaio-f11-linux.control-center#    <-- nothing happened here
        root@user-pc:/home/user/vaio-f11-linux.control-center# ./run
        -su: ./run: Permission denied
        root@user-pc:/home/user/vaio-f11-linux.control-center# sudo ./run
        sudo: ./run: command not found
        root@user-pc:/home/user/vaio-f11-linux.control-center# gksudo ./run
        root@user-pc:/home/user/vaio-f11-linux.control-center#    <-- nothing happened here

    After running that, I didn't see any effect on the keyboard backlight.
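
    Two hedged observations from the output above. The ./compile failure is the Qt build chain: the Makefile references a qmake.conf path that doesn't exist on this machine, which usually means the Qt4 development tools are missing or the shipped Makefile was generated elsewhere, so regenerating it with qmake is the first thing to try. The "Permission denied" errors in the second attempt just mean the scripts lost their execute bit. A sketch (package names per Ubuntu of that era):

        sudo apt-get install build-essential libqt4-dev qt4-qmake
        cd ~/vaio-control-center-0.1
        chmod +x compile run
        qmake vaio-control-center.pro   # regenerate the Makefile for this machine
        make
        ./run                           # launches ./vaio-control-center per the script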

    Read the article


  • Munin-cron fails "Nothing to do", possibly a munin.conf problem?

    - by geerlingguy
    I have been working on this for a few hours now, and haven't yet been able to get Munin to output the HTML files/generated graphs of resource usage on my CentOS 5.3 server. Here are some things I run as the munin user, and the results:

        /usr/share/munin/munin-update --nofork --debug

    (the above works fine, and takes ~2.4 seconds to complete)

        munin-run cpu

    (and other options/plugins besides 'cpu' all work fine and give the desired output)

        munin-cron

    fails with:

        [FATAL] There is nothing to do here, since there are no nodes with any plugins. Please refer to http://munin-monitoring.org/wiki/FAQ_no_graphs at /usr/share/munin/munin-html line 38

    I am wondering if, perhaps, the settings in my munin.conf file might be causing the problem. Here's the contents of that file:

        bdir /var/lib/munin/
        htmldir /home/archdev/public_html/monitoring
        logdir /var/log/munin
        rundir /var/run/munin/
        tmpldir /etc/munin/templates

        [archstl.archstl.org]
            address 127.0.0.1
            use_node_name yes

    Also, when I run the telnet localhost 4949 command and list the node's plugins, it returns the default munin list... something seems to be wrong with the munin HTML creation process. :(
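
    A hedged guess based on that config: with use_node_name yes, the collected data is keyed by the hostname the local munin-node reports about itself, and if that doesn't match the [archstl.archstl.org] section name, munin-html can end up with no nodes to render. Checking what the node actually calls itself, and making the two agree, is the usual first fix:

        # the connection banner shows the node's self-reported name
        telnet localhost 4949
        # -> "# munin node at <hostname>"

        # then in munin.conf either rename the section to that hostname,
        # or stop trusting the self-reported name:
        # [archstl.archstl.org]
        #     address 127.0.0.1
        #     use_node_name no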

    Read the article

  • Virtual audio driver for Windows?

    - by Ognjen
    Is there any (possibly free or open-source) virtual WDM audio driver for Windows, with additional processing plugins, which would add one more layer between Windows applications and the actual sound card's WDM audio driver, allowing me to:

    1. Add software DSPs to the general audio output. I would like to be able to use custom effects, like a compressor, or a stereophonic-to-binaural converter for listening to online streaming media on headphones, etc.
    2. Connect its output to some custom buffer instead of the sound card. For example, to be able to record audio, or to send audio via a wireless connection to some other wireless source.

    A virtual audio driver was just my idea of how to solve these issues; if you know another way, please share your knowledge. I need this for Windows 7 and/or Windows XP.

    Read the article

  • Five Bucks says you’ll Bookmark this Site: jsFiddle.net

    - by SGWellens
    In my never-ending wandering of technical web sites, I've been encountering links to jsFiddle.net more and more. Why? Because it is an incredibly useful site: it is a great 'sandbox' to play in, where you can test, modify and retest HTML, CSS, and JavaScript code, and it is a great way to communicate technical issues and share code samples.

    There are four screen areas: three inputs* and one output. The three inputs are:

    - HTML
    - CSS
    - JavaScript

    The output is the rendered result. Here's a cropped screen shot: (screenshot omitted)

    What am I thinking? Here's the actual page: Demo

    *There are other inputs. You can select the level of HTML you want to run against (HTML5, HTML 4.01 Strict, etc.). You can add various versions of JavaScript libraries (jQuery, MooTools, YUI, etc.). Many other options are available.

    If I wanted to share this code with someone manually, they would have to copy and paste three separate code chunks into their development environment, and maybe load some external libraries. Not many people are willing to make such an effort. Instead, with jsFiddle, they can just go to the link and click Run. Awesome.

    I hope someone finds this useful (and I was kidding about the five bucks). Steve Wellens CodeProject

    Read the article

  • How to perform feature upgrade in SharePoint2010 part2

    - by ybbest
    In my last post, I showed you how to perform a feature upgrade and upgraded my feature from 0.0.0.0 to 1.0.0.1. In this post, I'd like to continue on this topic and upgrade the feature again. For the first version of my solution, I deploy a document library with a custom document set content type, and then upgrade the solution so I index the application number column. Now, I will create a new version of the solution so that it removes the threshold of the list. You can download the solution here. Once you extract the solution, the first version is in the original folder. In order to deploy the original solution, you need to run SiteCreation.ps1 in the script folder. Version 1.1 is in the Upgrade folder and version 1.2 is in the Upgrade2 folder. You need to make the following changes to the existing solution.

    1. Modify the ApplicationLibrary.Template.xml as highlighted below:

    2. Add the following code to the feature event receiver:

        public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters)
        {
            base.FeatureUpgrading(properties, upgradeActionName, parameters);
            SPWeb web = GetFeatureWeb(properties);
            SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName);
            switch (upgradeActionName)
            {
                case "IndexApplicationNumber":
                    if (applicationLibrary != null)
                    {
                        SPField queueField = applicationLibrary.Fields["ApplicationNumber"];
                        queueField.Indexed = true;
                        queueField.Update();
                    }
                    break;
                case "RemoveListThreshold":
                    applicationLibrary.EnableThrottling = false;
                    applicationLibrary.Update();
                    break;
            }
        }

    3. Package your solution and run the feature upgrade PowerShell script:

        $wspFolder ="v1.2"
        $scriptPath=Split-Path $myInvocation.MyCommand.Path
        $siteUrl = "http://ybbest"
        $featureToCheckGuid="1b9d84cd-227d-45f1-92d4-a43008aa8fe7"
        $requiredFeatureVersion="1.0.0.1"
        $siteUrlOfFeatureToBeChecked="http://ybbest"
        AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta
        & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked
        Write-Host
        AppendLog "All features updated" "Green"

    References: Feature upgrade.

    Read the article
