Search Results

Search found 7776 results on 312 pages for 'configure in'.


  • Problems deploying Sinatra app to staging environment

    - by chris
    I have a small Sinatra app with both a staging and a production environment on a single server running Nginx. To deploy I am using Capistrano and capistrano-ext to easily deploy to different locations. The problem is that the staging environment always runs with the production configuration specified within the app.rb file:

      configure :staging do
        # staging settings
        set :foo, "bar"
      end

      configure :production do
        # prod settings
        set :foo, "rab"
      end

    I have come to the conclusion that the Capistrano :environment variable within the deploy.rb file doesn't configure Sinatra in any way. I have also tried setting ENV["RACK_ENV"] to "staging" to no avail. config/deploy/staging.rb:

      server "10.10.100.16", :app, :web, :db, :primary => true
      set :deploy_to, "/var/www/staging.my_app"
      set :environment, "staging"
      set :env, "staging"
      ENV["RACK_ENV"] = "staging"

    Any ideas?
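    A minimal sketch of one likely fix, not from the original post: Sinatra selects its configure block from RACK_ENV at boot, so the variable has to be set in the environment of the process that starts the app; setting it inside deploy.rb only affects Capistrano's own process. The rackup invocation and port below are placeholder assumptions:

      # Start the app server with RACK_ENV in its environment;
      # Capistrano's :environment variable never reaches Sinatra.
      RACK_ENV=staging rackup config.ru -p 4567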

    Read the article

  • gitk without X11 [closed]

    - by svnpenn
    It has been noted here that Tcl/Tk, and in turn gitk, now require X11 under Cygwin. Having run it before and after this change, it seems like extreme overkill. I use gitk very lightly, mostly sticking to plain command-line git. How could I go about using gitk without X11, perhaps by manually installing an old version of Tcl/Tk? After some tinkering, I came up with this script that allows gitk without X11:

      #!/bin/sh
      # Requires Cygwin packages: git, make, mingw64-i686-gcc-core, wget

      # Install Tcl
      wget prdownloads.sf.net/tcl/tcl8.5.12-src.tar.gz
      tar xf tcl8.5.12-src.tar.gz
      cd tcl8.5.12/win
      ./configure --host i686-w64-mingw32
      make install
      cd -

      # Install Tk
      wget prdownloads.sf.net/tcl/tk8.5.12-src.tar.gz
      tar xf tk8.5.12-src.tar.gz
      cd tk8.5.12/win
      ./configure --host i686-w64-mingw32
      make install
      cd -

      # Install gitk
      cd /usr/local/bin
      wget raw.github.com/git/git/master/gitk-git/gitk
      chmod 700 gitk
      echo 'cygpath -m "$1" | xargs -I% wish85 % -- ${@:3}' > wish
      cd -

    Read the article

  • Generating link-time error for deprecated functions

    - by R..
    Is there a way with gcc and GNU binutils to mark some functions such that they will generate an error at link-time if used? My situation is that I have some library functions which I am not removing for the sake of compatibility with existing binaries, but I want to ensure that no newly-compiled binary tries to make use of the functions. I can't just use compile-time gcc attributes because the offending code is ignoring my headers and detecting the presence of the functions with a configure script and prototyping them itself. My goal is to generate a link-time error for the bad configure scripts so that they stop detecting the existence of the functions.

    Read the article

  • Control the MultipleOutputFormat files sub-path

    - by iCode
    I need to control the sub-path of the different files being managed by MultipleOutputFormat, based on the reducer key. I basically want to set the sub-path of each file based on the key given to the reducer. I can change the file name by overriding the generateFileNameForKeyValue method of MultipleOutputFormat, but how can I also change the sub-path of these files? I mean, with just overriding generateFileNameForKeyValue, I get:

      mySetJobConfigOutputPath/fileNameBasedKey1.dat
                              /fileNameBasedKey2.dat
                              /fileNameBasedKey3.dat
      ...

    but I want to organize the files like below:

      mySetJobConfigOutputPath/path0ConfiguredInsideReducerBasedOnKey/fileNameBasedKey1.dat
                              /path1ConfiguredInsideReducerBasedOnKey/fileNameBasedKey2.dat
                                                                     /fileNameBasedKey3.dat
                              /path2ConfiguredInsideReducerBasedOnKey/fileNameBasedKey8.dat

    As seen, the sub-path and the file name are both figured out by the key inside the reducer. I know how to configure the file name, but was wondering if I can configure the sub-path of each file under the mySetJobConfigOutputPath folder?

    Read the article

  • Jetty: Mount to a directory on a different host

    - by jettyQuestion
    I'm looking to map to a directory on a different host using Jetty/Maven when working locally. I've found you can do this with Apache using mod_jk (JkMount/JkUnMount), but I haven't figured out how to do the same on Jetty. On our dev/q/live servers, we have Apache in front of JBoss and use mod_jk to do this. Locally, we're using Jetty. To give you an idea of what I'm talking about, this is how you would configure Apache to accomplish this, in httpd.conf:

      JkMount /images/* host2
      JkMount /* host2
      JkUnMount /images/* host1

    and in workers.properties:

      worker.list=host2,host1
      worker.host2.host=host-2.theDomain.com
      worker.host2.port=46654
      worker.host1.host=host-1.theDomain.com
      worker.host1.port=46655

    Is there a way to configure Jetty to do the same thing? By the way, locally I'm using the Maven plugin for Eclipse, if that makes a difference. Thanks!

    Read the article

  • Facing problem in configuring Reporting Server

    - by idrees99
    Hi all, I am using SQL Server 2005 Express Edition and I want to install and configure Reporting Services on my local machine. I have installed Reporting Services, but the issue is that I am unable to configure it properly. Whenever I go to start the Reporting Services service, it gives me the following message:

      The SQL Server Reporting Services (SQLEXPRESS) service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service.

    I am using Windows XP Professional. Please help me out, as I have just started using SQL Server and I don't have any idea.

    Read the article

  • Timezone settings in MySQL - Using NOW()?

    - by matt74tm
    Somewhat related to "Doing calculations in MySQL vs PHP". Right now, our database assumes that the system time is in UTC and uses that to calculate NOW(). PHP explicitly sets the timezone as UTC (so it's impervious to server time zone shifts). An accidental shift of timezones on the server messed this relationship up at the database level, and I'm now trying to figure out the ideal configuration: configure MySQL to be in UTC, but with the caveat that our application may be on someone else's server where they might have a different TZ (so I can't set the timezone at the MySQL/server level). How do I configure it at the specific database level?
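    A minimal sketch of the usual workaround, not from the original post: MySQL scopes time_zone globally and per session, not per database, so when you don't control the server, the session is the level to pin. The user, database, and offset below are placeholders:

      # Pin each session to UTC right after connecting, so NOW() returns
      # UTC regardless of the host's default time zone.
      mysql -u app_user -p my_database -e "SET time_zone = '+00:00'; SELECT NOW();"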

    Read the article

  • Can't get MySQL to install

    - by James Marthenal
    I'd like to think I know what I'm doing in a Unix shell but maybe not. I made a mistake in a configuration file for MySQL, so I decided to just uninstall it and then reinstall it, so I did:

      sudo apt-get --purge remove mysql-server mysql-server-5.0 mysql-client

    The files were deleted, so I then tried to install it, but it didn't ask me for a root password or anything else, so I uninstalled it using the above command again and then did

      sudo rm -rf /etc/mysql
      sudo rm /etc/init.d/mysql
      sudo rm -rf /var/lib/mysql*

    I then restarted the computer, then installed it again:

      sudo apt-get install mysql-server mysql-client

    It asked for a root password, and everything looked like it would work, until I saw this:

      $ sudo apt-get install mysql-server mysql-client
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following extra packages will be installed:
        mysql-server-5.0
      Suggested packages:
        tinyca
      The following NEW packages will be installed:
        mysql-client mysql-server mysql-server-5.0
      0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
      Need to get 0B/27.4MB of archives.
      After this operation, 86.7MB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      WARNING: The following packages cannot be authenticated!
        mysql-server-5.0 mysql-client mysql-server
      Authentication warning overridden.
      Preconfiguring packages ...
      Can't exec "/tmp/mysql-server-5.0.config.28101": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
      open2: exec of /tmp/mysql-server-5.0.config.28101 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
      mysql-server-5.0 failed to preconfigure, with exit status 255
      Selecting previously deselected package mysql-server-5.0.
      (Reading database ... 160284 files and directories currently installed.)
      Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
      Selecting previously deselected package mysql-client.
      Unpacking mysql-client (from .../mysql-client_5.0.51a-24+lenny5_all.deb) ...
      Selecting previously deselected package mysql-server.
      Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
      Processing triggers for man-db ...
      Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
      Stopping MySQL database server: mysqld.
      /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory
      dpkg: error processing mysql-server-5.0 (--configure):
       subprocess post-installation script returned error exit status 1
      Setting up mysql-client (5.0.51a-24+lenny5) ...
      dpkg: dependency problems prevent configuration of mysql-server:
       mysql-server depends on mysql-server-5.0; however:
        Package mysql-server-5.0 is not configured yet.
      dpkg: error processing mysql-server (--configure):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       mysql-server-5.0
       mysql-server
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now I can't seem to figure out what to do. I just want to get a clean MySQL installation at this point. I'm running the latest stable release of Debian. All help is appreciated. Thanks!

    Edit: I looked at this similar question, which suggests that I uninstall mysql-common, but when I try to do so I see:

      The following packages will be REMOVED:
        apache2 apache2-mpm-prefork apache2-utils apache2.2-common git-svn
        libapache2-mod-php5 libapache2-mod-python libapache2-svn libaprutil1
        libdbd-mysql-perl libdbd-mysql-rubygem libmysql-ruby libmysql-ruby1.8
        libmysql-rubygem libmysqlclient15-dev libmysqlclient15off librdf-perl
        librdf0 libserf-0-0 libsvn-perl libsvn1 mysql-client-5.0 mysql-common
        mytop ndn-apache22-php5 ndn-apache22-svn ndn-interpreters ndn-lighttpd
        ndn-netsaint-plugins ndn-perl-modules ndn-php5-cgi ndn-php5-xcache
        ndn-php53 ndn-php53-suhosin ndn-rubygems php5 php5-mcrypt php5-mysql
        proftpd proftpd-mod-mysql python-django python-mysqldb python-subversion
        python-svn subversion subversion-tools trac zendoptimizer
      0 upgraded, 0 newly installed, 48 to remove and 1 not upgraded.

    Eeek! Any suggestions?
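    A minimal sketch of one common cause worth ruling out, not from the original post: the "Can't exec /tmp/mysql-server-5.0.config...: Permission denied" line during preconfiguration is the classic symptom of /tmp being mounted noexec, because debconf unpacks its package config scripts there and runs them:

      # If /tmp is mounted noexec, remount it before installing
      mount | grep ' /tmp '
      sudo mount -o remount,exec /tmp
      sudo apt-get install mysql-server mysql-client
      sudo mount -o remount,noexec /tmp   # restore the original flags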

    Read the article

  • Can't install MySQL

    - by James Marthenal
    I have a Debian machine that I have previously installed MySQL on. In an attempt to delete it, I stupidly deleted the directories/files /etc/mysql/, /etc/init.d/mysql, /usr/lib/mysql/, and /var/lib/mysql/. I then later did sudo apt-get purge mysql-server mysql-server-5.0. Now, when I try to install mysql-server, I get:

      $ sudo apt-get install mysql-server
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following extra packages will be installed:
        mysql-server-5.0
      The following NEW packages will be installed:
        mysql-server mysql-server-5.0
      0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
      Need to get 0B/27.4MB of archives.
      After this operation, 86.6MB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      WARNING: The following packages cannot be authenticated!
        mysql-server-5.0 mysql-server
      Authentication warning overridden.
      Preconfiguring packages ...
      Can't exec "/tmp/mysql-server-5.0.config.122781": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168.
      open2: exec of /tmp/mysql-server-5.0.config.122781 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
      mysql-server-5.0 failed to preconfigure, with exit status 255
      Selecting previously deselected package mysql-server-5.0.
      (Reading database ... 158138 files and directories currently installed.)
      Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ...
      Selecting previously deselected package mysql-server.
      Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ...
      Processing triggers for man-db ...
      Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ...
      Stopping MySQL database server: mysqld.
      110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)
      110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13)
      ERROR: 1017  Can't find file: './mysql/user.frm' (errno: 13)
      110206 19:31:13 [ERROR] Aborting
      110206 19:31:13 [Note] /usr/sbin/mysqld: Shutdown complete
      /etc/init.d/mysql: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz (warning).
      Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
      invoke-rc.d: initscript mysql, action "start" failed.
      dpkg: error processing mysql-server-5.0 (--configure):
       subprocess post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of mysql-server:
       mysql-server depends on mysql-server-5.0; however:
        Package mysql-server-5.0 is not configured yet.
      dpkg: error processing mysql-server (--configure):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       mysql-server-5.0
       mysql-server
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have tried to search for a solution via Google and have found lots of suggestions for this problem, but ultimately it seems like the problem is that by deleting the files manually, I messed up the mysql-common package. I have tried to do sudo apt-get install --reinstall mysql-common followed by installing mysql-server, but it does the exact same thing. I previously had MySQL working great; I just want to get it back to that state. Thanks so much for your help.
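    A minimal sketch of one recovery path, not from the original post (Debian-default paths; back up anything still under /var/lib/mysql first): the repeated errno 13 on ./mysql/user.frm suggests the grant tables are missing or unreadable after the manual deletion, so recreate them before letting dpkg finish:

      # Rebuild the MySQL system tables that lived in /var/lib/mysql,
      # fix ownership, then let dpkg re-run the failed postinst step.
      sudo mysql_install_db --user=mysql
      sudo chown -R mysql:mysql /var/lib/mysql
      sudo dpkg --configure -a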

    Read the article

  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running OpenSUSE 11.3 that I can't really upgrade to a newer version of OpenSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server that have a dependency on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server did not include OpenSSL but did include cURL. I downloaded the latest PHP sources, extracted them, and ran

      ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets

    This failed: the configure script complained that it couldn't find cURL:

      checking for cURL support... yes
      checking for cURL in default path... not found
      configure: error: Please reinstall the libcurl distribution - easy.h should be in <curl-dir>/include/curl/

    I tried to install libcurl by running

      zypper install libcurl-devl

    This failed too:

      doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl
      Loading repository data...
      Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
      Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
      Reading installed packages...
      'libcurl-devl' not found in package names. Trying capabilities.
      No provider of 'libcurl-devl' found.
      Resolving package dependencies...
      Nothing to do.

    However, libcurl-devl is listed when I run zypper search curl:

      doom:~/phpworksite/php-5.5.15 # zypper search curl
      Loading repository data...
      Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
      Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
      Reading installed packages...
      S | Name                        | Summary                                                  | Type
      --+-----------------------------+----------------------------------------------------------+--------
      i | curl                        | A Tool for Transferring Data from URLs                   | package
        | curlftpfs                   | Filesystem for mounting FTP hosts using FUSE and libcurl | package
        | libcurl-devel               | A Tool for Transferring Data from URLs                   | package
      i | libcurl4                    | cURL shared library version 4                            | package
      i | perl-WWW-Curl               | Perl extension interface for libcurl                     | package
      i | php5-curl                   | PHP5 Extension Module                                    | package
        | python-curl                 | Python module interface to the cURL library              | package
        | python-curl-doc             | Documentation for python-curl                            | package
        | xmms2-plugin-curl           | Curl Support for xmms2                                   | package
        | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl          | package

    Here are the current repositories:

      doom:~/phpworksite/php-5.5.15 # zypper repos
      #  | Alias                                        | Name                                         | Enabled | Refresh
      ---+----------------------------------------------+----------------------------------------------+---------+--------
      1  | PHP_extensions_(openSUSE_11.3)               | PHP_extensions_(openSUSE_11.3)               | No      | Yes
      2  | Packman_11.3                                 | Packman_11.3                                 | Yes     | Yes
      3  | Updates for openSUSE 11.3 11.3-1.82          | Updates for openSUSE 11.3 11.3-1.82          | Yes     | Yes
      4  | openSUSE_11.3_OSS                            | openSUSE_11.3_OSS                            | Yes     | Yes
      5  | openSUSE_11.3_Updates                        | openSUSE_11.3_Updates                        | Yes     | Yes
      6  | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No      | Yes
      7  | repo-debug                                   | openSUSE-11.3-Debug                          | No      | Yes
      8  | repo-non-oss                                 | openSUSE-11.3-Non-Oss                        | Yes     | Yes
      9  | repo-oss                                     | openSUSE-11.3-Oss                            | Yes     | Yes
      10 | repo-source                                  | openSUSE-11.3-Source                         | No      | Yes

    By the way, I did try building PHP without cURL; however, it broke a lot of things, so apparently I really need cURL.
    My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?
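    A minimal sketch of what the zypper search output above already shows: the package is named libcurl-devel, with the second "e", not libcurl-devl, so installing it under its real name should satisfy PHP's configure check:

      # The dev package appears in the search results as "libcurl-devel"
      zypper install libcurl-devel
      cd ~/phpworksite/php-5.5.15
      ./configure --with-openssl --with-zlib --with-bcmath --with-curl \
                  --with-readline --with-libxml --enable-sockets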

    Read the article

  • PHP install sqlite3 extension

    - by Kevin
    We are using PHP 5.3.6 here, but we used the --without-sqlite3 flag when compiling PHP (it shows up in the 'Configure Command' output below). It is very risky to recompile PHP on that server; there are many visitors. How can we install/use sqlite3? Regards, Kevin

    [EDIT] yum repolist gives:

      Loaded plugins: fastestmirror
      Loading mirror speeds from cached hostfile
       * base: mirror.nl.leaseweb.net
       * extras: mirror.nl.leaseweb.net
       * updates: mirror.nl.leaseweb.net
      repo id        repo name            status
      base           CentOS-5 - Base      3,566
      extras         CentOS-5 - Extras      237
      updates        CentOS-5 - Updates     376
      repolist: 4,179

    rpm -qa | grep php gives:

      php-pdo-5.3.6-1.w5
      php-mysql-5.3.6-1.w5
      psa-php5-configurator-1.5.3-cos5.build95101022.10
      php-mbstring-5.3.6-1.w5
      php-imap-5.3.6-1.w5
      php-cli-5.3.6-1.w5
      php-gd-5.3.6-1.w5
      php-5.3.6-1.w5
      php-common-5.3.6-1.w5
      php-xml-5.3.6-1.w5

    php -i | grep sqlite gives:

      PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/sqlite3.so' - /usr/lib64/php/modules/sqlite3.so: cannot open shared object file: No such file or directory in Unknown on line 0
      Configure Command => './configure' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/usr/com' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--cache-file=../config.cache' '--with-libdir=lib64' '--with-config-file-path=/etc' '--with-config-file-scan-dir=/etc/php.d' '--disable-debug' '--with-pic' '--disable-rpath' '--without-pear' '--with-bz2' '--with-exec-dir=/usr/bin' '--with-freetype-dir=/usr' '--with-png-dir=/usr' '--with-xpm-dir=/usr' '--enable-gd-native-ttf' '--without-gdbm' '--with-gettext' '--with-gmp' '--with-iconv' '--with-jpeg-dir=/usr' '--with-openssl' '--with-pcre-regex=/usr' '--with-zlib' '--with-layout=GNU' '--enable-exif' '--enable-ftp' '--enable-magic-quotes' '--enable-sockets' '--enable-sysvsem' '--enable-sysvshm' '--enable-sysvmsg' '--with-kerberos' '--enable-ucd-snmp-hack' '--enable-shmop' '--enable-calendar' '--without-mime-magic' '--without-sqlite' '--without-sqlite3' '--with-libxml-dir=/usr' '--enable-xml' '--with-system-tzdata' '--enable-force-cgi-redirect' '--enable-pcntl' '--with-imap=shared' '--with-imap-ssl' '--enable-mbstring=shared' '--enable-mbregex' '--with-gd=shared' '--enable-bcmath=shared' '--enable-dba=shared' '--with-db4=/usr' '--with-xmlrpc=shared' '--with-ldap=shared' '--with-ldap-sasl' '--with-mysql=shared,/usr' '--with-mysqli=shared,/usr/bin/mysql_config' '--enable-dom=shared' '--with-pgsql=shared' '--enable-wddx=shared' '--with-snmp=shared,/usr' '--enable-soap=shared' '--with-xsl=shared,/usr' '--enable-xmlreader=shared' '--enable-xmlwriter=shared' '--with-curl=shared,/usr' '--enable-fastcgi' '--enable-pdo=shared' '--with-pdo-odbc=shared,unixODBC,/usr' '--with-pdo-mysql=shared,/usr' '--with-pdo-pgsql=shared,/usr' '--with-pdo-sqlite=shared,/usr' '--with-pdo-dblib=shared,/usr' '--enable-json=shared' '--enable-zip=shared' '--with-readline' '--with-pspell=shared' '--enable-phar=shared' '--with-mcrypt=shared,/usr' '--with-tidy=shared,/usr' '--with-mssql=shared,/usr' '--enable-sysvmsg=shared' '--enable-sysvshm=shared' '--enable-sysvsem=shared' '--enable-posix=shared' '--with-unixODBC=shared,/usr' '--enable-fileinfo=shared' '--enable-intl=shared' '--with-icu-dir=/usr' '--with-recode=shared,/usr'
      /etc/php.d/pdo_sqlite.ini,
      /etc/php.d/sqlite3.ini,
      PHP Warning:  Unknown: It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Europe/Berlin' for 'CET/1.0/no DST' instead in Unknown on line 0
      PDO drivers => mysql, sqlite
      pdo_sqlite
      PWD => /root/sqlite
      _SERVER["PWD"] => /root/sqlite
      _ENV["PWD"] => /root/sqlite
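    A minimal sketch of the usual way out, not from the original post: build only the sqlite3 extension against the running PHP with phpize, so the server's PHP binary is never recompiled. This assumes the development headers matching the installed 5.3.6 build and a PHP 5.3.6 source tarball are available:

      # Build sqlite3.so from the ext/ tree of the same PHP version;
      # phpize comes from the php-devel package (an assumption here).
      cd php-5.3.6/ext/sqlite3
      phpize
      ./configure
      make
      make install    # installs sqlite3.so into /usr/lib64/php/modules
      # /etc/php.d/sqlite3.ini already references sqlite3.so (see the
      # startup warning above), so restarting the web server should be
      # all that's left.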

    Read the article

  • Cisco 678 Will Not Work using PPPoE - Possibly Because I Configured it Incorrectly...?

    - by Brian Stinar
    I am attempting to configure a Cisco 678 because I am totally sick of my Actiontec. However, I am running into some problems. It seems as though the Cisco is able to train the line, but I am unable to ping out. I am all right at programming, but still learning a lot when it comes to being a system administrator. I apologize in advance if I did something ridiculous, or am attempting to configure this device to do something it was not designed to do. It is almost like I am not correctly configuring the device to grab its IP using PPPoA (like my Actiontec). The output from "show running" (below) makes me think this too. Below are the commands I ran in order to configure this:

      # en
      # set nvram erase
      # write
      # reboot
      # en
      # set nat enable
      # set dhcp server enable
      # set PPP wan0-0 ipcp 0.0.0.0
      # set ppp wan0-0 dns 0.0.0.0
      # set PPP wan0-0 login xxxxx      // My actual login
      # set PPP wan0-0 password yyyyy   // My actual password
      # set PPP restart enabled
      # set int wan0-0 close
      # set int wan0-0 vpi 0
      # set int wan0-0 vci 32
      # set int wan0-0 open
      # write
      # reboot

    Here is the output from a few commands I thought could provide some useful information:

      cbos#ping 74.125.224.113
      Sending 1 8 byte ping(s) to 74.125.224.113 every 2 second(s)
      Request timed out

      cbos#show version
      Cisco Broadband Operating System
      CBOS (tm) 678 Software (C678-I-M), Version v2.4.9 - Release Software
      Copyright (c) 1986-2001 by cisco Systems, Inc.
      Compiled Nov 17 2004 15:26:29 DMT FULL
      firmware version G96
      NVRAM image at 0x1030f000

      cbos#show errors
      - Current Error Messages -
      ## Ticks        Module Level Message
      0  000:00:00:00 PPP    Info  IPCP Open Event on wan0-0
      1  000:00:00:14 ATM    Info  Wan0 Up
      2  000:00:00:14 PPP    Info  PPP Up Event on wan0-0
      3  000:00:01:54 PPP    Info  PPP Down Event on wan0-0
      Total Number of Error Messages: 4

      cbos#show interface wan0
      wan0 ADSL Physical Port
      Line Trained
      Actual Configuration:
        Overhead Framing:           3
        Trellis Coding:             Enabled
        Standard Compliance:        T1.413
        Downstream Data Rate:       1184 Kbps
        Upstream Data Rate:         928 Kbps
        Interleave S Downstream:    4
        Interleave D Downstream:    16
        Interleave R Downstream:    16
        Interleave S Upstream:      4
        Interleave D Upstream:      8
        Interleave R Upstream:      16
      Modem Microcode:              G96
      DSP version:                  0
      Operating State:              Showtime/Data Mode
      Configured:
        Echo Cancellation:          Disabled
        Overhead Framing:           3
        Coding Gain:                Auto
        TX Power Attenuation:       0dB
        Trellis Coding:             Enabled
        Bit Swapping:               Disabled
        Standard Compliance:        T1.413
        Remote Standard Compliance: T1.413
        Tx Start Bin:               0x6
        Tx End Bin:                 0x1f
        Data Interface:             Utopia L1
      Status:
        Local SNR Margin:           19.0dB
        Local Coding Gain:          7.5dB
        Local Transmit Power:       12.5dB
        Local Attenuation:          46.0dB
        Remote Attenuation:         31.0dB
      Local Counters:
        Interleaved RS Corrected Bytes:          0
        Interleaved Symbols with CRC Errors:     2
        No Cell Delineation Interleaved:         0
        Out of Cell Delineation Interleaved:     0
        Header Error Check Counter Interleaved:  0
        Count of Severely Errored Frames:        0
        Count of Loss of Signal Frames:          0
      Remote Counters:
        Interleaved RS Corrected Bytes:          0
        Interleaved Symbols with CRC Errors:     1
        No Cell Delineation Interleaved:         0
        Header Error Check Counter Interleaved:  0
        Count of Severely Errored Frames:        0
        Count of Loss of Signal Frames:          0

      cbos#show int wan0-0
      WAN0-0 ATM Logical Port
      PVC (VPI 0, VCI 32) is configured.
      ScalaRate set to Auto
      AAL 5 UBR Traffic
      IP Port Enabled

      cbos#show running
      Warning: traffic may pause while NVRAM is being accessed
      [[ CBOS = Section Start ]]
      NSOS MD5 Enable Password = XXXX
      NSOS MD5 Root Password = XXXX
      NSOS MD5 Commander Password = XXXX
      [[ PPP Device Driver = Section Start ]]
      PPP Port User Name = 00, "XXXX"
      PPP Port User Password = 00, XXXX
      PPP Port Option = 00, IPCP,IP Address,3,Auto,Negotiation Not Required,Negotiable,IP,0.0.0.0
      PPP Port Option = 00, IPCP,Primary DNS Server,129,Auto,Negotiation Not Required,Negotiable,IP,0.0.0.0
      PPP Port Option = 00, IPCP,Secondary DNS Server,131,Auto,Negotiation Not Required,Negotiable,IP,0.0.0.0
      [[ ATM WAN Device Driver = Section Start ]]
      ATM WAN Virtual Connection Parms = 00, 0, 32, 0
      [[ DHCP = Section Start ]]
      DHCP Server = enabled
      [[ IP Routing = Section Start ]]
      IP NAT = enabled
      [[ WEB = Section Start ]]
      WEB = enabled
      cbos#

    wtf...? Thank you all very much for taking the time to read this, and the help.

    Read the article

  • How to install 'version.h' in Ubuntu?

    - by user252098
    Just now, I tried to install the Jungo WinDriver on Ubuntu 13.10, but I am puzzled by its manual's instructions for installing version.h:

      Install version.h:
      The file version.h is created when you first compile the Linux kernel source code. Some distributions provide a compiled kernel without the file version.h. Look under /usr/src/linux/include/linux to see whether you have this file. If you do not, follow these steps:
      1. Become super user: $ su
      2. Change directory to the Linux source directory: # cd /usr/src/linux
      3. Type: # make xconfig
      4. Save the configuration by choosing Save and Exit.
      5. Type: # make dep
      6. Exit super user mode: # exit

    But the shell says:

      warning: make dep is unnecessary now.

    Then I found out there is a version.h in /usr/src/linux-headers-3.11.0.12-generic, so I typed:

      /usr/src/windriver/redist# ./configure --with-kernel-source=/usr/src/linux-headers-3.11.0.12-generic

    But the WinDriver run fails:

      USE_KBUILD = yes
      checking for cpu architecture... x86_64
      checking for WinDriver root directory... /usr/src/WinDriver
      checking for linux kernel source... found at /usr/src/linux
      checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
      checking which directories to include... -I/usr/src/linux/include
      checking linux kernel version... 3.11.10.6
      checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
      checking output directory... LINUX.3.11.0-12-generic.x86_64
      checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6_usb.ko
      checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
      0
      checking for modpost location... /usr/src/linux/scripts/mod/modpost
      configure.usb: creating ./config.status
      config.status: creating makefile.usb.kbuild
      checking for cpu architecture... x86_64
      checking for WinDriver root directory... /usr/src/WinDriver
      checking for linux kernel source... found at /usr/src/linux
      checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
      checking which directories to include... -I/usr/src/linux/include
      checking linux kernel version... 3.11.10.6
      checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
      checking output directory... LINUX.3.11.0-12-generic.x86_64
      checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6.ko
      checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
      0
      checking for right linked object... windrvr_gcc_v3.a
      checking for modpost location... /usr/src/linux/scripts/mod/modpost
      configure.wd: creating ./config.status
      config.status: creating makefile.wd.kbuild

    What is the problem?
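    A minimal sketch of one thing to try, not from the original post: the configure output above still says "checking for linux kernel source... found at /usr/src/linux", so the script is evidently reading /usr/src/linux rather than the directory passed via --with-kernel-source. Pointing that path at the installed header tree may give a consistent build (the directory name below is taken from the configure output; adjust to your kernel):

      # Make /usr/src/linux resolve to the real header tree, then rerun
      ln -sfn /usr/src/linux-headers-3.11.0-12-generic /usr/src/linux
      cd /usr/src/windriver/redist
      ./configure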

    Read the article

  • ModSecurity compile error on nginx

    - by user146481
    I'm trying to install ModSecurity on nginx with the following instructions:

      wget https://github.com/SpiderLabs/ModSecurity/archive/master.zip
      unzip master
      cd ModSecurity-master
      ./autogen.sh
      ./configure --enable-standalone-module

    And I got the following error:

      Checking plataform... Identified as Linux
      configure: looking for Apache module support via DSO through APXS
      configure: error: couldn't find APXS

    After installing httpd-devel and running

      ./configure --enable-standalone-module --with-apxs=/usr/sbin/apxs
      make

    the ModSecurity compile works, but I still have another error when compiling nginx with

      ./configure --add-module=/usr/local/src/john/ModSecurity-master/nginx/modsecurity

    I get this error:

      gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/apache2 -I /usr/include/apr-1.0 -I /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone -I /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2 -I /usr/include/libxml2 -I objs -I src/http -I src/http/modules -I src/mail \
              -o objs/addon/modsecurity/ngx_http_modsecurity.o \
              /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:20:23: error: http_core.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:21:26: error: http_request.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:37, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_logging.h:41:23: error: apr_pools.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:38, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:26:25: error: apr_general.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:27:24: error: apr_tables.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:38, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:44: error: expected specifier-qualifier-list before ‘apr_array_header_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:65: error: expected specifier-qualifier-list before ‘apr_array_header_t’
      cc1: warnings being treated as errors
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: expected ‘,’ or ‘;’ before ‘multipart_cleanup’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:137: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:39, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_pcre.h:41: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_pcre.h:45: error: expected ‘)’ before ‘*’ token
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:19:27: error: apr_file_info.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:41, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: type defaults to ‘int’ in declaration of ‘apr_table_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: expected ‘,’ or ‘;’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:24: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:20:19: error: httpd.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:21:24: error: ap_release.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:24:26: error: apr_optional.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: expected declaration specifiers or ‘...’ before ‘modsec_register_tfn’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: expected declaration specifiers or ‘...’ before ‘(’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: expected declaration specifiers or ‘...’ before ‘modsec_register_operator’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: expected declaration specifiers or ‘...’ before ‘(’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:32: error: expected declaration specifiers or ‘...’ before ‘modsec_register_variable’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:33: error: expected declaration specifiers or ‘...’ before ‘(’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:32: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:36: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: expected declaration specifiers or ‘...’ before ‘modsec_register_reqbody_processor’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: expected declaration specifiers or ‘...’ before ‘(’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:56: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:58: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: expected ‘,’ or ‘;’ before ‘input_filter’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: expected ‘,’ or ‘;’ before ‘output_filter’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: expected ‘,’ or ‘;’ before ‘read_request_body’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: expected ‘,’ or ‘;’ before ‘send_error_bucket’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:83: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:85: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:93: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:95: error: expected ‘)’ before ‘*’ token
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:43:25: error: http_config.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:59: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: expected ‘,’ or ‘;’ before ‘collection_original_setvar’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:63: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:67: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:70: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:75: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:76: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:86: error: expected specifier-qualifier-list before ‘apr_pool_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:94: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:101: error: expected specifier-qualifier-list before ‘apr_pool_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: expected ‘,’ or ‘;’ before ‘msre_ruleset_process_phase’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: expected ‘,’ or ‘;’ before ‘msre_ruleset_process_phase_internal’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:115: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:143: error: expected specifier-qualifier-list before ‘apr_ipsubnet_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:149: error: expected specifier-qualifier-list before ‘apr_array_header_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:189: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:219: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:235: error: expected specifier-qualifier-list before ‘fn_tfn_execute_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:239: error: expected declaration specifiers or ‘...’ before ‘fn_tfn_execute_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:258: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:258: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:285: error: expected specifier-qualifier-list before ‘apr_table_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: expected declaration specifiers or ‘...’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: type defaults to ‘int’ in declaration of ‘apr_status_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: ‘apr_status_t’ declared as function returning a function
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: ‘apr_status_t’ redeclared as different kind of symbol
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: note: previous declaration of ‘apr_status_t’ was here
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:342: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:342: error: ‘fn_action_execute_t’ declared as function returning a function
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:369: error: expected specifier-qualifier-list before ‘fn_action_init_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:399: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:403: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:403: error: ‘msre_parse_vars’ declared as function returning a function
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:415: error: expected specifier-qualifier-list before ‘apr_size_t’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:54: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:62: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:66: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:68: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:70: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:74: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:76: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:82: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:88: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:90: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:92: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:100: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:102: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:104: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:106: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:108: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:110: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:112: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:114: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:128: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:132: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:136: error: expected ‘)’ before ‘*’ token
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: data definition has no type or storage class
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: type defaults to ‘int’ in declaration of ‘apr_fileperms_t’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: expected ‘,’ or ‘;’ before ‘mode2fileperms’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:144: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:41, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_xml.h:43: error: ‘xml_cleanup’ declared as function returning a function
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_geo.h:38:25: error: apr_file_io.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_geo.h:58: error: expected specifier-qualifier-list before ‘apr_file_t’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:43, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_gsb.h:22:22: error: apr_hash.h: No such file or directory
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:43, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_gsb.h:25: error: expected specifier-qualifier-list before ‘apr_file_t’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:44, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_unicode.h:25: error: expected specifier-qualifier-list before ‘apr_file_t’
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:46, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_crypt.h:34: error: expected ‘)’ before ‘*’ token
      In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:48:23: error: ap_config.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:49:21: error: apr_md5.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:50:25: error: apr_strings.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:54:22: error: http_log.h: No such file or directory
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:938: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:938: error: too many arguments to function ‘ConvertNgxStringToUTF8’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:942: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:944: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’
      /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:952: error: ‘modsecurity_read_body_cb’ undeclared (first use in this function)
      make[1]: *** [objs/addon/modsecurity/ngx_http_modsecurity.o] Error 1
      make[1]: Leaving directory `/usr/local/src/john/nginx-1.2.5'
      make: *** [build] Error 2

    Note: I'm using nginx as the only web server and I do not have Apache installed. OS: CentOS 6, 64-bit. How can I solve this problem? And is there another, easier way to install ModSecurity with nginx?
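    A minimal sketch of the likely cause, not from the original post: every header the dump complains about (apr_pools.h, apr_tables.h, httpd.h, http_core.h, http_config.h, and so on) comes from the APR and Apache development packages, which the standalone ModSecurity module needs at build time even though Apache itself never runs. Package names below assume the CentOS 6 base repositories:

      # Header packages only; installing them does not start Apache
      yum install httpd-devel apr-devel apr-util-devel pcre-devel libxml2-devel
      cd /usr/local/src/john/ModSecurity-master
      ./configure --enable-standalone-module --with-apxs=/usr/sbin/apxs
      make
      # then re-run nginx's ./configure --add-module=.../nginx/modsecurity
      # and make, so the module compiles with the APR include paths present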

    Read the article

  • Using JCA Adapter with OSB 11.1.1.3

    - by James Taylor
    In OSB 10g, to use the JCA adapters you were required to use JDeveloper to create the necessary WSDLs, XSDs, etc. using the associated adapter wizard. These files were imported into Oracle Workshop (Eclipse) and used to create the business service as you would any other web service. In 11g, unfortunately, JDeveloper is still required; the process has changed slightly, as described below. As an example I have used the JCA DB adapter.
1. Start JDeveloper 11.1.1.3.
2. Create a new SOA Application.
3. Create a new SOA Project and call it DBAdapters. Choose the Empty Composite template.
4. Drag a Database Adapter component to the External References panel on the composite.
5. Provide a service name.
6. Create a new database connection, or use an existing one.
7. Take note of the JNDI name, e.g. eis/DB/MyConnection. This will be used to configure the DB connection in the WebLogic Console.
8. Choose whatever operation you require. In my example I use a stored procedure; please refer to the following link for other options: User's Guide for Technology Adapters.
9. Select a schema and stored procedure.
10. Once the procedure has been selected, accept the defaults and finish.
11. Start up your OEPE version of Eclipse.
12. Create a new Oracle Service Bus Configuration Project (you can use an existing project if you have one).
13. Create a new Oracle Service Bus Project in the configuration project created above.
14. Instead of importing the WSDL and XSD files, import the jca file created in JDeveloper: in Eclipse, right-click the Oracle Service Bus Project and select Import –> Import.
15. Choose File System.
16. Browse to the directory where JDeveloper stores its project.
17. Select the jca, wsdl, and xsd files based on the service you created in step 5. Also check the ‘Create selected folders only’ radio button.
18. When you import, you may see a little red x indicating the files are invalid. This is due to the location of the files. Open the invalid files and fix the paths in relation to where you store your files in the OSB project (a hedged sketch of such a file follows this list).
19. Once you have the files all valid, right-click the jca file and select Oracle Service Bus –> Generate Service. This will create a new Business Service.
20. In the WebLogic Console, configure the JNDI name defined in step 7.
21. You can now deploy your project and test it.
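To illustrate step 18 (this snippet is not from the original post, and the names in it are hypothetical): a DB adapter jca file is a small XML descriptor whose wsdlLocation attribute and connection-factory JNDI location are the parts that usually need correcting after the import. Roughly, assuming the service was named DBAdapterService and the stored procedure MY_PROC:

<adapter-config name="DBAdapterService" adapter="Database Adapter"
                wsdlLocation="DBAdapterService.wsdl"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <!-- Must match the JNDI name from step 7, configured in the WebLogic Console -->
  <connection-factory location="eis/DB/MyConnection"/>
  <endpoint-interaction portType="DBAdapterService_ptt" operation="DBAdapterService">
    <interaction-spec className="oracle.tip.adapter.db.DBStoredProcedureInteractionSpec">
      <property name="ProcedureName" value="MY_PROC"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>

Make wsdlLocation a path relative to where the wsdl actually sits in your OSB project, and check the element names against the file the JDeveloper wizard generated for you.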

    Read the article

  • Reporting Services - It's a Wrap!

    - by smisner
    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download. First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or for output to a page-based renderer like PDF or Word) will be rendered such that the first page contains the row group headers and the initial set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2. This pattern continues until all columns are rendered. The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders property of the tablix report item and changing its value to True. The second (and subsequent) pages of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables:

SELECT TOP (100) e.LastName + ', ' + e.FirstName AS EmployeeName,
    d.FullDateAlternateKey,
    f.SalesOrderNumber,
    p.EnglishProductName,
    SUM(SalesAmount) AS SalesAmount
FROM FactResellerSales AS f
INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey
INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey
INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey
GROUP BY p.EnglishProductName, d.FullDateAlternateKey,
    e.LastName + ', ' + e.FirstName, f.SalesOrderNumber
ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName

To start the report: Add a matrix to the report body and drag Employee Name to the row header, which also creates a group.
Next drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4. Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data. Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber]. Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5. Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps: Click on the Rows label, which then displays RowGroup in the Row Groups pane below the report body. Right-click on RowGroup, click Delete Group, and select the option to delete associated rows and columns. As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue: Drag EnglishProductName to the data textbox (below the line). Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box. Now you're ready for the fit and finish phase: Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber"). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber.
Change the group expression to the following expression: =Ceiling(RowNumber("EmployeeName")/3) The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voila - Figure 6 shows a matrix report with wrapping columns.
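The wrap width is controlled entirely by the divisor in that last expression; the two group expressions work as a pair. A quick recap (the comments and the alternative width are mine, not from the original post; 3 is the width used in the example):

' Column group of the embedded matrix: numbers the product columns
' sequentially within each EmployeeName/SalesOrderNumber combination.
=RowNumber("SalesOrderNumber")

' Row group of the outer matrix: starts a new row of wrapped columns
' after every 3 column sets. Change 3 to, say, 4 to fit wider pages.
=Ceiling(RowNumber("EmployeeName")/3)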

    Read the article

  • Adding DTrace Probes to PHP Extensions

    - by cj
    The powerful DTrace tracing facility has some PHP-specific probes that can be enabled with --enable-dtrace. DTrace for Linux is being created by Oracle and is currently in tech preview. It doesn't yet support userspace tracing, so in the meantime Systemtap can be used to monitor the probes implemented in PHP. This was recently outlined in David Soria Parra's post Probing PHP with Systemtap on Linux. My post shows how DTrace probes can be added to PHP extensions and traced on Linux. I was using Oracle Linux 6.3. Not all Linux kernels are built with Systemtap support, since this can impact stability. Check whether your running kernel (or another installed kernel) has Systemtap enabled, and reboot with such a kernel:

# grep CONFIG_UTRACE /boot/config-`uname -r`
# grep CONFIG_UTRACE /boot/config-*

When you install Systemtap itself, the package systemtap-sdt-devel is needed since it provides the sdt.h header file:

# yum install systemtap-sdt-devel

You can now install and build PHP as shown in David's article. Basically the build is:

$ cd ~/php-src
$ ./configure --disable-all --enable-dtrace
$ make

(For me, running 'make' a second time failed with an error. The workaround is to do 'git checkout Zend/zend_dtrace.d' and then rerun 'make'. See PHP Bug 63704.) David's article shows how to trace the probes already implemented in PHP. You can also use Systemtap to trace things like userspace PHP function calls. For example, create test.php:

<?php
$c = oci_connect('hr', 'welcome', 'localhost/orcl');
$s = oci_parse($c, "select dbms_xmlgen.getxml('select * from dual') xml from dual");
$r = oci_execute($s);
$row = oci_fetch_array($s, OCI_NUM);
$x = $row[0]->load();
$row[0]->free();
echo $x;
?>

The normal output of this file is the XML form of Oracle's DUAL table:

$ ./sapi/cli/php ~/test.php
<?xml version="1.0"?>
<ROWSET>
 <ROW>
  <DUMMY>X</DUMMY>
 </ROW>
</ROWSET>

To trace the PHP function calls, create the tracing file functrace.stp:

probe process("sapi/cli/php").function("zif_*")
{
    printf("Started function %s\n", probefunc());
}
probe process("sapi/cli/php").function("zif_*").return
{
    printf("Ended function %s\n", probefunc());
}

This makes use of the way PHP userspace functions (not builtins) like oci_connect() map to C functions with a "zif_" prefix. Log in as root, and run Systemtap on the PHP script:

# cd ~cjones/php-src
# stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/functrace.stp
Started function zif_oci_connect
Ended function zif_oci_connect
Started function zif_oci_parse
Ended function zif_oci_parse
Started function zif_oci_execute
Ended function zif_oci_execute
Started function zif_oci_fetch_array
Ended function zif_oci_fetch_array
Started function zif_oci_lob_load
<?xml version="1.0"?>
<ROWSET>
 <ROW>
  <DUMMY>X</DUMMY>
 </ROW>
</ROWSET>
Ended function zif_oci_lob_load
Started function zif_oci_free_descriptor
Ended function zif_oci_free_descriptor

Each call and return is logged. The Systemtap scripting language allows complex scripts to be built; there are many examples on the web. To augment this generic capability and the probes already in PHP, other extensions can have probes too. Below are the steps I used to add probes to OCI8. I created a provider file ext/oci8/oci8_dtrace.d, enabling three probes. The first one accepts a parameter that runtime tracing can later display:

provider php {
    probe oci8__connect(char *username);
    probe oci8__nls_start();
    probe oci8__nls_done();
};

I updated ext/oci8/config.m4 with the PHP_INIT_DTRACE macro. The patch is at the end of config.m4.
The macro takes the provider prototype file, the name of the header file that 'dtrace' will generate, and a list of source files with probes. When --enable-dtrace is used during PHP configuration, the outer $PHP_DTRACE check is true and my new probes will be enabled. I've chosen to define an OCI8-specific macro, HAVE_OCI8_DTRACE, which can be used in the OCI8 source code:

diff --git a/ext/oci8/config.m4 b/ext/oci8/config.m4
index 34ae76c..f3e583d 100644
--- a/ext/oci8/config.m4
+++ b/ext/oci8/config.m4
@@ -341,4 +341,17 @@ if test "$PHP_OCI8" != "no"; then
     PHP_SUBST_OLD(OCI8_ORACLE_VERSION)
   fi
+
+  if test "$PHP_DTRACE" = "yes"; then
+    AC_CHECK_HEADERS([sys/sdt.h], [
+      PHP_INIT_DTRACE([ext/oci8/oci8_dtrace.d],
+                      [ext/oci8/oci8_dtrace_gen.h],[ext/oci8/oci8.c])
+      AC_DEFINE(HAVE_OCI8_DTRACE,1,
+                [Whether to enable DTrace support for OCI8 ])
+    ], [
+      AC_MSG_ERROR(
+        [Cannot find sys/sdt.h which is required for DTrace support])
+    ])
+  fi
+
 fi

In ext/oci8/oci8.c, I added the probes at, for this example, semi-arbitrary places:

diff --git a/ext/oci8/oci8.c b/ext/oci8/oci8.c
index e2241cf..ffa0168 100644
--- a/ext/oci8/oci8.c
+++ b/ext/oci8/oci8.c
@@ -1811,6 +1811,12 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char
 		}
 	}
 
+#ifdef HAVE_OCI8_DTRACE
+	if (DTRACE_OCI8_CONNECT_ENABLED()) {
+		DTRACE_OCI8_CONNECT(username);
+	}
+#endif
+
 	/* Initialize global handles if they weren't initialized before */
 	if (OCI_G(env) == NULL) {
 		php_oci_init_global_handles(TSRMLS_C);
@@ -1870,11 +1876,22 @@ php_oci_connection *php_oci_do_connect_ex(char *username, int username_len, char
 			size_t rsize = 0;
 			sword result;
 
+#ifdef HAVE_OCI8_DTRACE
+			if (DTRACE_OCI8_NLS_START_ENABLED()) {
+				DTRACE_OCI8_NLS_START();
+			}
+#endif
 			PHP_OCI_CALL_RETURN(result, OCINlsEnvironmentVariableGet, (&charsetid_nls_lang, 0, OCI_NLS_CHARSET_ID, 0, &rsize));
 			if (result != OCI_SUCCESS) {
 				charsetid_nls_lang = 0;
 			}
 			smart_str_append_unsigned_ex(&hashed_details, charsetid_nls_lang, 0);
+
+#ifdef HAVE_OCI8_DTRACE
+			if (DTRACE_OCI8_NLS_DONE_ENABLED()) {
+				DTRACE_OCI8_NLS_DONE();
+			}
+#endif
 		}
 
 		timestamp = time(NULL);

The oci_connect(), oci_pconnect() and oci_new_connect() calls all use php_oci_do_connect_ex() internally. The first probe simply records that the PHP application made a connection call. I already showed a way to do this without needing a probe, but adding a specific probe lets me record the username. The other two probes can be used to time how long the globalization initialization takes. The relationships between the oci8_dtrace.d names like oci8__connect, the probe guards like DTRACE_OCI8_CONNECT_ENABLED() and the probe names like DTRACE_OCI8_CONNECT() are obvious after seeing the pattern of all three probes. I included the new header that will be automatically created by the dtrace tool when PHP is built. I did this in ext/oci8/php_oci8_int.h:

diff --git a/ext/oci8/php_oci8_int.h b/ext/oci8/php_oci8_int.h
index b0d6516..c81fc5a 100644
--- a/ext/oci8/php_oci8_int.h
+++ b/ext/oci8/php_oci8_int.h
@@ -44,6 +44,10 @@
 # endif
 # endif /* osf alpha */
 
+#ifdef HAVE_OCI8_DTRACE
+#include "oci8_dtrace_gen.h"
+#endif
+
 #if defined(min)
 #undef min
 #endif

Now PHP can be rebuilt:

$ cd ~/php-src
$ rm configure && ./buildconf --force
$ ./configure --disable-all --enable-dtrace \
    --with-oci8=instantclient,/home/cjones/instantclient
$ make

If 'make' fails, do the 'git checkout Zend/zend_dtrace.d' trick I mentioned.
The new probes can be seen by logging in as root and running:

# stap -l 'process.provider("php").mark("oci8*")' -c 'sapi/cli/php -i'
process("sapi/cli/php").provider("php").mark("oci8__connect")
process("sapi/cli/php").provider("php").mark("oci8__nls_done")
process("sapi/cli/php").provider("php").mark("oci8__nls_start")

To test them out, create a new trace file, oci.stp:

global numconnects;
global start;
global numcharlookups = 0;
global tottime = 0;

probe process.provider("php").mark("oci8-connect")
{
    printf("Connected as %s\n", user_string($arg1));
    numconnects += 1;
}
probe process.provider("php").mark("oci8-nls_start")
{
    start = gettimeofday_us();
    numcharlookups++;
}
probe process.provider("php").mark("oci8-nls_done")
{
    tottime += gettimeofday_us() - start;
}
probe end
{
    printf("Connects: %d, Charset lookups: %ld\n", numconnects, numcharlookups);
    printf("Total NLS charset initialization time: %ld usecs/connect\n",
           (numcharlookups > 0 ? tottime/numcharlookups : 0));
}

This calculates the average time that the NLS character set lookup takes. It also prints out the username of each connection, as an example of using parameters. Log in as root and run Systemtap over the PHP script:

# cd ~cjones/php-src
# stap -c 'sapi/cli/php ~cjones/test.php' ~cjones/oci.stp
Connected as cj
<?xml version="1.0"?>
<ROWSET>
 <ROW>
  <DUMMY>X</DUMMY>
 </ROW>
</ROWSET>
Connects: 1, Charset lookups: 1
Total NLS charset initialization time: 164 usecs/connect

This shows the time penalty of making OCI8 look up the default character set. This time would be zero if a character set had been passed as the fourth argument to oci_connect() in test.php.

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored fully secured and can’t be decrypted by anyone but you). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management.
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business-critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery, whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care; they need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today’s release, customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you log in to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like it did before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in to or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account: Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above.
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Stream Music and Video Over the Internet with Windows Media Player 12

    - by DigitalGeekery
    A new feature in Windows Media Player 12, which is included with Windows 7, is being able to stream media over the web to other Windows 7 computers.  Today we will take a look at how to set it up and what you need to begin. Note: You will need to perform this process on each computer that you want to use. What You’ll Need Two computers running Windows 7 Home Premium, Professional, or Ultimate. The host, or home computer that you will be streaming the media from, cannot be on a public network or part of a domain. Windows Live ID UPnP or Port Forwarding enabled on your home router Media files added to your Windows Media Player library Windows Live ID Sign up online for a Windows Live ID if you do not already have one. See the link below for Windows Live.   Configuring the Windows 7 Computers Open Windows Media Player and go to the library section. Click on Stream and then “Allow Internet access to home media.”   The Internet Home Media Access pop-up window will prompt you to link your Windows Live ID to a user account. Click “Link an online ID.” If you haven’t already installed the Windows Live ID Sign-In Assistant, you will be taken to Microsoft’s website and prompted to download it. Once you have completed the Windows Live ID Sign-In Assistant install, you will see the Windows Live ID online provider appear in the “Link Online IDs” window. Click on “Link Online ID.” Next, you’ll be prompted for a Windows Live ID and password. Enter your Windows Live ID and password and click “Sign In.” A pop-up window will notify you that you have successfully allowed Internet access to home media. Now, you will have to repeat the exact same configuration on the 2nd Windows 7 computer. Once you have completed the same configuration on your 2nd computer, you might also need to configure your home router for port forwarding. If your router supports UPnP, you may not need to manually forward any ports on your router. So, this would be a good time to test your connection. Go to a nearby hotspot, or perhaps a neighbor’s house, and test to see if you can stream your media. If not, you’ll need to manually forward the ports. You can always choose to forward the ports anyway, just in case. Note: We tested on a Linksys WRT54GL router, which supports UPnP, and found we still needed to manually forward the ports. Finding the ports to forward on the router Open Windows Media Player and make sure you are in Library view. Click on “Stream” on the top menu, and select “Allow Internet access to home media.”   On the “Internet Home Media Access” window, click on “Diagnose connections.” The “Internet Streaming Diagnostic Tool” will pop up. Click on “Port forwarding information” near the bottom.   On the “Port Forwarding Information” window you will find both the Internal and External Port numbers you will need to forward on your router. The Internal port number should always be 10245. The external number will be different depending on your computer. Microsoft also recommends forwarding port 443. Configuring the Router Next, you’ll need to configure Port Forwarding on your home router. We will show you the steps for a Linksys WRT54GL router; however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Administration tab along the top, click the “Applications & Gaming” tab, and then the “Port Range Forward” tab below it. Under “Application,” type in a name. It can be any name you choose. In both the “Start” and “End” boxes, type the port number.
Enter the IP address of your home computer in the IP address column. Click the check box under “Enable.” Do this for both the internal and external port numbers and port 443. When finished, click the “Save Settings” button. Note: It’s highly recommended that you configure your home computer with a static IP address. When you’re ready to play your media over the Internet, open up Windows Media Player and look for your host computer and username listed under “Other Libraries.” Click on it to expand the list to see your media libraries. Choose a library and a file to play. Now you can enjoy your streaming media over the Internet. Conclusion We found media streaming over the Internet to work fairly well. However, we did see a loss of quality with streaming video. Also, Recorded TV .wtv and dvr-ms files did not play at all. Check out our previous article to see how to share and stream media between Windows 7 computers on your home network.

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure Team just published their new development portal this week, along with SDK 1.3. This new release contains a lot of cool features; the one I'm looking forward to is Remote Desktop Access to your running Windows Azure Virtual Machine.   Configuring Remote Desktop Access It is very simple to enable remote desktop access for an Azure service. First of all, let's create a new Windows Azure project from Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window and select “Publish”. Then let's select the “Deploy your Windows Azure project to Windows Azure” radio button at the top, and then select the credential, deployment service/slot, storage and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice that there's a link named “Configure Remote Desktop connections”. This is where you enable the remote desktop feature for this service. After clicking this link we set up the remote desktop access authorization information. There are 4 things to configure for our access: Certificates: We need to either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API. Username: The remote desktop user name to access the virtual machine. Password: The password for the access. Expiration: The access credentials expire after 1 month by default, but we can amend that here. After that we click the OK button to get back to the publish dialog.   The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file onto this service. The user name and password for access to the Azure machine must be encrypted on the local machine, then sent to the Windows Azure platform, and decrypted on the Azure side using the same certificate; this is why we need to upload the certificate file onto Azure. We navigate to “Hosted Services, Storage Accounts & CDN” in the left panel, create a new hosted service named “SDK13” and select its “Certificates” node. Then we click the “Add Certificates” button, select the local certificate file and provide its password to install it into this Azure service.   The final step is back in Visual Studio: in the publish dialog, just click the OK button. Visual Studio will upload our package and the configuration to our service with the remote desktop settings.   Remote Desktop Access to the Azure Virtual Machine With all that done, let's have a look back at the Windows Azure Development Portal. If I select the web role that I had just published, we can see a section named “Remote Access” on the toolbar. In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials, we can simply click the Configure button and then update the user name, password, certificates and the expiration date.   Let's select the instance node under the web role. In this case I just created one instance for the demo.
We can see that when we select the instance node, the Connect button becomes enabled. After clicking this button, an RDP file is downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let's download it to our local machine and execute it. We input the user name and password we specified when we published our application to Azure and then click OK. A certificate warning dialog may appear; this is because the certificate we used for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe. Finally, the Windows Azure virtual machine appears.   A Quick Look into the Azure Virtual Machine Let's have a very quick look into our virtual machine. There are 3 disks available to us: C, D and E. Disk C: Stores the local resources, diagnostics information, etc. Disk D: The system disk, which contains the OS, IIS, the .NET Frameworks, etc. Disk E: Stores our application code. We can also see the IIS site hosting our website on Azure, and the IP configuration of the Azure virtual machine.   Summary In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can set up the access per service, and all of the instances of that service can then be accessed through the remote desktop tool. With this feature we can dig into the virtual machines of our instances to see inner information such as the system events, IIS logs, system information, etc. But we should be careful about modifying the system settings, for 2 reasons that I know of for now: 1. If we have more than one instance in our service, we should ensure that any system settings we modify are applied to all instances/virtual machines. Otherwise, as the machines sit behind the Azure load balancer, our application may not work properly due to the different settings between the instances. 2. When the virtual machine encounters a problem and needs to be moved to another physical machine, all settings we made will disappear.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
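For reference, and not part of Shaun's original walkthrough: the values entered in the “Configure Remote Desktop connections” dialog are persisted into the service configuration. A hedged sketch of what the relevant ServiceConfiguration.cscfg settings typically look like with SDK 1.3; the role name, username and values below are placeholders:

<Role name="WebRole1">
  <ConfigurationSettings>
    <!-- Written by the Remote Desktop wizard; the password is encrypted with
         the certificate uploaded to the hosted service. -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="shaun" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="...base64..." />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2011-01-06T23:59:59.0000000+08:00" />
    <!-- Exactly one role in the deployment also enables the forwarder: -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
  </ConfigurationSettings>
</Role>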

    Read the article

  • E: mkinitramfs failure cpio 141 gzip 1

    - by Nagaraj Shindagi
    I'm using Ubuntu 12.04 LTS on a Dell PowerEdge R720 server, and I'm facing this problem when I run apt-get -f install:

Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up linux-image-3.2.0-37-generic-pae (3.2.0-37.58) ...
Running depmod.
update-initramfs: deferring update (hook will be called later)
The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-37-generic-pae
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic-pae /boot/vmlinuz-3.2.0-37-generic-pae
update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic-pae
gzip: stdout: No space left on device
E: mkinitramfs failure cpio 141 gzip 1
update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic-pae with 1.
run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic-pae.postinst line 1010.
dpkg: error processing linux-image-3.2.0-37-generic-pae (--configure):
 subprocess installed post-installation script returned error exit status 2
dpkg: dependency problems prevent configuration of linux-image-generic-pae:
 linux-image-generic-pae depends on linux-image-3.2.0-37-generic-pae; however:
  Package linux-image-3.2.0-37-generic-pae is not configured yet.
dpkg: error processing linux-image-generic-pae (--configure):
 dependency problems - leaving unconfigured
No apport report written because the error message indicates it's a followup error from a previous failure.
Errors were encountered while processing:
 linux-image-3.2.0-37-generic-pae
 linux-image-generic-pae
E: Sub-process /usr/bin/dpkg returned an error code (1)

I tried apt-get clean, apt-get remove, apt-get autoremove and apt-get purge, but there is no difference; it shows the same error message as above. I also checked the disk space:

Filesystem      1K-blocks      Used  Available Use% Mounted on
/dev/sda6        24030076    612456   22196964   3% /
udev             16536644         4   16536640   1% /dev
tmpfs             6618884      1164    6617720   1% /run
none                 5120         0       5120   0% /run/lock
none             16547208        72   16547136   1% /run/shm
cgroup           16547208         0   16547208   0% /sys/fs/cgroup
/dev/sda1           93207     75034      13361  85% /boot
/dev/sda10        9611492   1096076    8027176  13% /tmp
/dev/sda12        9611492    226340    8896912   3% /opt
/dev/sda13        9611492    152516    8970736   2% /srv
/dev/sda7         9611492    592208    8531044   7% /home
/dev/sda8         9611492   2656736    6466516  30% /usr
/dev/sda9         9611492    696468    8426784   8% /var
/dev/sda14      961237336 134563516  777845764  15% /usr/data
/dev/sda15      618991384  84498388  503050052  15% /usr/data1
/dev/sda11        9611492    152616    8970636   2% /usr/local

Is there a problem with how the space was allotted to the partitions? Please let me know the solution; it's urgent. Please help me with this issue. Regards
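The df output points at the likely cause: /boot is a separate 93 MB partition that is already 85% full, so gzip runs out of space while mkinitramfs writes the new initramfs there. This is a hedged sketch of the usual recovery, assuming older kernel packages are still installed; the version 3.2.0-32 below is a placeholder, so list first and purge whichever old versions you actually have:

# How full is /boot, and which kernel images are installed?
df -h /boot
dpkg -l 'linux-image-*' | grep ^ii

# Purge an older kernel you no longer boot from (placeholder version!)
sudo apt-get purge linux-image-3.2.0-32-generic-pae

# With space freed on /boot, finish the interrupted installation
sudo apt-get -f install
sudo update-initramfs -u -k 3.2.0-37-generic-pae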

    Read the article

  • What's up with LDoms: Part 4 - Virtual Networking Explained

    - by Stefan Hinker
    I'm back from my summer break (and some pressing business that kept me away from this), ready to continue with Oracle VM Server for SPARC ;-) In this article, we'll have a closer look at virtual networking.  Basic connectivity, as we've seen it in the first, simple example, is easy enough.  But there are numerous options for the virtual switches and virtual network ports, which we will discuss in more detail now.   In this section, we will concentrate on virtual networking - the capabilities of virtual switches and virtual network ports - only.  Other options involving hardware assignment or redundancy will be covered in separate sections later on. There are two basic components involved in virtual networking for LDoms: virtual switches and virtual network devices.  The virtual switch should be seen just like a real ethernet switch.  It "runs" in the service domain and moves ethernet packets back and forth.  A virtual network device is plumbed in the guest domain.  It corresponds to a physical network device in the real world.  There, you'd be plugging a cable into the network port, and plugging the other end of that cable into a switch.  In the virtual world, you do the same:  you create a virtual network device for your guest and connect it to a virtual switch in a service domain.  The result works just like in the physical world: the network device sends and receives ethernet packets, and the switch does all those things ethernet switches tend to do. If you look at the reference manual of Oracle VM Server for SPARC, there are numerous options for virtual switches and network devices.  Don't be confused, it's rather straightforward, really.  Let's start with the simple case, and work our way to some more sophisticated options later on.  In many cases, you'll want to have several guests that communicate with the outside world on the same ethernet segment.  In the real world, you'd connect each of these systems to the same ethernet switch.  So, let's do the same thing in the virtual world:

root@sun # ldm add-vsw net-dev=nxge2 admin-vsw primary
root@sun # ldm add-vnet admin-net admin-vsw mars
root@sun # ldm add-vnet admin-net admin-vsw venus

We've just created a virtual switch called "admin-vsw" and connected it to the physical device nxge2.  In the physical world, we'd have powered up our ethernet switch and installed a cable between it and our big enterprise datacenter switch.  We then created a virtual network interface for each one of the two guest systems "mars" and "venus" and connected both to that virtual switch.  They can now communicate with each other and with any system reachable via nxge2.  If primary were running Solaris 10, communication with the guests would not be possible.  This is different with Solaris 11; please see the Admin Guide for details.  Note that I've given both the vswitch and the vnet devices some sensible names, something I always recommend. Unless told otherwise, the LDoms Manager software will automatically assign MAC addresses to all network elements that need one.  It will also make sure that these MAC addresses are unique and reuse MAC addresses to play nice with all those friendly DHCP servers out there.  However, if we want to do this manually, we can also do that.  (One reason might be firewall rules that work on MAC addresses.)  So let's give mars a manually assigned MAC address:

root@sun # ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars

Within the guest, these virtual network devices have their own device driver.
Within the guest, these virtual network devices have their own device driver. In Solaris 10, they'd appear as "vnet0"; Solaris 11 would apply its usual vanity naming scheme. We can configure these interfaces just like any normal interface: give them an IP address and configure sophisticated routing rules, just like on bare metal.

In many cases, using Jumbo Frames helps increase throughput performance. By default, these interfaces will run with the standard ethernet MTU of 1500 bytes. To change this, it is usually sufficient to set the desired MTU for the virtual switch. This will automatically set the same MTU for all vnet devices attached to that switch. Let's change the MTU size of our admin-vsw from the example above:

root@sun # ldm set-vsw mtu=9000 admin-vsw primary

Note that you can set the MTU to any value between 1500 and 16000. Of course, whatever you set needs to be supported by the physical network, too.

Another very common area of network configuration is VLAN tagging. This can be a little confusing - my advice here is to be very clear on what you want, and perhaps draw a little diagram the first few times. As always, keeping a configuration simple will help avoid errors of all kinds. Nevertheless, VLAN tagging is very useful to consolidate different networks onto one physical cable, and as such, this concept needs to be carried over into the virtual world. Enough of the introduction; here's an example of how VLANs work in LDoms.

Remember that any VLANs not explicitly tagged have the default VLAN ID of 1. In this example, we have a vswitch connected to a physical network that carries untagged traffic (VLAN ID 1) as well as VLANs 11, 22, 33 and 44. There might also be other VLANs on the wire, but the vswitch will ignore all those packets. We also have two vnet devices, one for mars and one for venus. Venus will see traffic from VLANs 33 and 44 only. For VLAN 44, venus will need to configure a tagged interface "vnet44000". For VLAN 33, the vswitch will untag all incoming traffic for venus, so that venus will see this as "normal" or untagged ethernet traffic. This is very useful to simplify guest configuration, and it also allows venus to perform Jumpstart or AI installations over this network even if the Jumpstart or AI server is connected via VLAN 33. Mars, on the other hand, has full access to untagged traffic from the outside world, and also to VLANs 11, 22 and 33, but not 44. On the command line, we'd do this like this:

root@sun # ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary
root@sun # ldm add-vnet admin-net pvid=1 vid=11,22,33 admin-vsw mars
root@sun # ldm add-vnet admin-net pvid=33 vid=44 admin-vsw venus
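Inside the guests, the untagged VLAN 33 traffic needs no special treatment, but the tagged VLAN 44 needs an interface of its own on venus. Here's a rough sketch of both the Solaris 10 and the Solaris 11 variant; the Solaris 11 link name net0 and the IP addresses are placeholder assumptions, not part of the example above:

# Solaris 10 guest: VLAN 44 on vnet0 is plumbed as instance 44*1000+0
root@venus # ifconfig vnet44000 plumb
root@venus # ifconfig vnet44000 192.168.44.10 netmask 255.255.255.0 up

# Solaris 11 guest: create an explicit VLAN link on top of the vnet's datalink
root@venus # dladm create-vlan -l net0 -v 44 vlan44
root@venus # ipadm create-ip vlan44
root@venus # ipadm create-addr -T static -a 192.168.44.10/24 vlan44/v4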
Finally, I'd like to point to a neat little option that will make your life easier in all those cases where configurations tend to change over the life of a guest system: the "id=<somenumber>" option, available for both vswitches and vnet devices. Normally, Solaris in the guest would enumerate network devices sequentially. However, it has ways of remembering this initial numbering. This is good in the physical world. In the virtual world, whenever you unbind (aka power off and disassemble) a guest system, remove and/or add network devices and bind the system again, chances are this numbering will change, and configuration confusion will follow suit. To avoid this, nail down the initial numbering by assigning each vnet device its device-id explicitly:

root@sun # ldm add-vnet admin-net id=1 admin-vsw venus

Please consult the Admin Guide for details on this, and on how to decipher these network ids from Solaris running in the guest.
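The assigned ids are visible from the control domain, which makes it easy to check that a rebuilt guest came back with the numbering you intended. A short sketch, again assuming the names from the example above (admin-net2 is a hypothetical second vnet):

root@sun # ldm add-vnet admin-net2 id=2 admin-vsw venus   # hypothetical second vnet, pinned to id 2
root@sun # ldm list -o network venus                      # each vnet is listed together with its id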
Thanks for reading this far. Links for further reading are essentially only the Admin Guide and Reference Manual and can be found above. I hope this is useful and, as always, I welcome any comments.

    Read the article
