Search Results

Search found 26115 results on 1045 pages for 'table alias'.

  • Can emacs generate a table of contents and number sections of a document?

    - by mp3foley
    I'm writing a plain text document with numbered sections or chapters and am wondering if emacs can help with numbering and re-numbering sections. And of course it would be great if it could then generate a table of contents as well. I have searched Google and looked through the emacs wiki, but did not come up with anything other than LaTeX-related options and possibly muse mode; I would like to keep this as a plain text, README-style document. Thanks for any help or suggestions.

    Read the article

  • How should I structure my settings table with MySQL?

    - by incrediman
    I want to use MySQL to store a bunch of admin settings - what's the best way to structure the table? Like this?

        setting  | value
        ---------+------
        setting1 | a
        setting2 | b
        setting3 | c
        setting4 | d
        setting5 | e

    Or like this?

        setting1 | setting2 | setting3 | setting4 | setting5
        ---------+----------+----------+----------+---------
        a        | b        | c        | d        | e

    Or maybe some other way?
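
    For reference, a minimal SQL sketch of the first (one-row-per-setting) layout; the table name and column types are illustrative assumptions, not from the question:

        -- Key-value settings table; names and types are hypothetical.
        CREATE TABLE admin_settings (
            setting VARCHAR(64)  NOT NULL PRIMARY KEY,
            value   VARCHAR(255) NOT NULL  -- stored as text; cast in application code
        );

        -- Read one setting back.
        SELECT value FROM admin_settings WHERE setting = 'setting1';

    The key-value shape also avoids an ALTER TABLE every time a new setting is added, which is the usual argument against the one-column-per-setting layout.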

    Read the article

  • Using Unity – Part 1

    - by nmarun
    I have been going through implementing some IoC pattern using Unity and so I decided to share my learnings (I know that’s not an English word, but you get the point). Ok, so I have an ASP.net project named ProductWeb and a class library called ProductModel. In the model library, I have a class called Product: 1: public class Product 2: { 3: public string Name { get; set; } 4: public string Description { get; set; } 5:  6: public Product() 7: { 8: Name = "iPad"; 9: Description = "Not just a reader!"; 10: } 11:  12: public string WriteProductDetails() 13: { 14: return string.Format("Name: {0} Description: {1}", Name, Description); 15: } 16: } In the Page_Load event of the default.aspx, I’ll need something like: 1: Product product = new Product(); 2: productDetailsLabel.Text = product.WriteProductDetails(); Now, let’s go ‘Unity’fy this application. I assume you have all the bits for the pattern. If not, get it from here. I found this schematic representation of Unity pattern from the above link. This image might not make much sense to you now, but as we proceed, things will get better. The first step to implement the Inversion of Control pattern is to create interfaces that your types will implement. An IProduct interface is added to the ProductModel project. 1: public interface IProduct 2: { 3: string WriteProductDetails(); 4: } Let’s make our Product class to implement the IProduct interface. The application will compile and run as before despite the changes made. Add the following references to your web project: Microsoft.Practices.Unity Microsoft.Practices.Unity.Configuration Microsoft.Practices.Unity.StaticFactory Microsoft.Practices.ObjectBuilder2 We need to add a few lines to the web.config file. The line below tells what version of Unity pattern we’ll be using. 1: <configSections> 2: <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/> 3: </configSections> Add another block with the same name as the section name declared above – ‘unity’. 1: <unity> 2: <typeAliases> 3: <!--Custom object types--> 4: <typeAlias alias="IProduct" type="ProductModel.IProduct, ProductModel"/> 5: <typeAlias alias="Product" type="ProductModel.Product, ProductModel"/> 6: </typeAliases> 7: <containers> 8: <container name="unityContainer"> 9: <types> 10: <type type="IProduct" mapTo="Product"/> 11: </types> 12: </container> 13: </containers> 14: </unity> From the Unity Configuration schematic shown above, you see that the ‘unity’ block has a ‘typeAliases’ and a ‘containers’ segment. The typeAlias element gives a ‘short-name’ for a type. This ‘short-name’ can be used to point to this type any where in the configuration file (web.config in our case, but all this information could be coming from an external xml file as well). The container element holds all the mapping information. This container is referenced through its name attribute in the code and you can have multiple of these container elements in the containers segment. The ‘type’ element in line 10 basically says: ‘When Unity requests to resolve the alias IProduct, return an instance of whatever the short-name of Product points to’. This is the most basic piece of Unity pattern and all of this is accomplished purely through configuration. 
    So, if in future you have a change in your model, all you need to do is:
    - implement IProduct on the new model class, and
    - either add a typeAlias for the new type and point the mapTo attribute to the new alias declared,
    - or modify the mapTo attribute of the type element to point to the new alias (as the case may be).
    Now for the calling code. It’s a good idea to store your unity container details in the Application cache, as this is rarely going to change, and it also makes for better performance. The Global.asax.cs file comes to our rescue: 1: protected void Application_Start(object sender, EventArgs e) 2: { 3: // create and populate a new Unity container from configuration 4: IUnityContainer unityContainer = new UnityContainer(); 5: UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity"); 6: section.Containers["unityContainer"].Configure(unityContainer); 7: Application["UnityContainer"] = unityContainer; 8: } 9:  10: protected void Application_End(object sender, EventArgs e) 11: { 12: Application["UnityContainer"] = null; 13: } All this says is: create an instance of UnityContainer() and read the ‘unity’ section from the configSections segment of the web.config file. Then get the container named ‘unityContainer’ and store it in the Application cache. In my code-behind file, I’ll make use of this UnityContainer to create an instance of the Product type. 1: public partial class _Default : Page 2: { 3: private IUnityContainer unityContainer; 4: protected void Page_Load(object sender, EventArgs e) 5: { 6: unityContainer = Application["UnityContainer"] as IUnityContainer; 7: if (unityContainer == null) 8: { 9: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 10: } 11: else 12: { 13: IProduct productInstance = unityContainer.Resolve<IProduct>(); 14: productDetailsLabel.Text = productInstance.WriteProductDetails(); 15: } 16: } 17: } Looking at the ‘else’ block, I’m asking the unityContainer object to resolve the IProduct type. All this does is look at the matching type in the container, read its mapTo attribute value, get the full name from the alias, and create an instance of the Product class. Fabulous!! I’ll go into more detail in the next blog. The code for this blog can be found here.

    Read the article

  • SQL SERVER – Enable Identity Insert – Import Export Wizard

    - by pinaldave
    I recently got an email from an old friend who told me that when he tries to execute an SSIS package, it fails with an identity error. After some debugging and opening his package, we figured out the issue. Here is the setup he had in his package:
    - Source table with an identity column
    - Destination table with an identity column
    - The following checkbox was disabled in the Import Export Wizard (as per the image below)
    What we did was enable the checkbox described above, and that fixed the problem he was having with insertion into the identity column. The reason he was facing this error is that his destination table had the IDENTITY property, which will not allow any insert from the user. This value is automatically generated by the system when new values are inserted in the table. However, when a user manually tries to insert a value in the table, it stops them and throws an error. As we enabled the checkbox “Enable Identity Insert”, this feature allowed values to be inserted into the identity field, and this way the exact identity values from the source database were moved to the destination table. Let me know if this blog post was easy to understand. Reference: Pinal Dave (http://blog.SQLAuthority.com), Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
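
    For anyone doing the same thing in plain T-SQL rather than the wizard, a minimal sketch of what “Enable Identity Insert” corresponds to; the table and values here are hypothetical:

        -- Hypothetical destination table with an IDENTITY column.
        CREATE TABLE dbo.Destination (ID INT IDENTITY(1,1) PRIMARY KEY, Name NVARCHAR(50));

        -- Without this setting, an explicit insert into ID fails with an identity error.
        SET IDENTITY_INSERT dbo.Destination ON;
        INSERT INTO dbo.Destination (ID, Name) VALUES (42, N'copied from source');
        SET IDENTITY_INSERT dbo.Destination OFF;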

    Read the article

  • SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video

    - by pinaldave
    Importing data into a database is one of the most important tasks. I often receive questions regarding the quickest way to insert CSV data, or how to import CSV data into a SQL Server table. Honestly, the process is very simple and the script is even simpler. In today’s SQL in Sixty Seconds video we will learn how quickly we can insert CSV data into SQL Server. The steps to import CSV are very simple:
    - Create the table
    - Use BULK INSERT to import the data
    - Verify the data
    Done! Absolutely it is that simple. More on importing CSV data:
    - SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server
    - SQL SERVER – Import CSV File into Database Table Using SSIS
    - SQL SERVER – Create a Comma Delimited List Using SELECT Clause From Table Column
    - SQL SERVER – Comma Separated Values (CSV) from Table Column
    - SQL SERVER – Comma Separated Values (CSV) from Table Column – Part 2
    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
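
    A minimal sketch of those three steps; the file path and column layout are hypothetical:

        -- 1) Create a table matching the CSV's columns (layout is illustrative).
        CREATE TABLE dbo.CSVTest (ID INT, FirstName VARCHAR(40), LastName VARCHAR(40));

        -- 2) Bulk-load the comma-delimited file (path is illustrative).
        BULK INSERT dbo.CSVTest
        FROM 'C:\csvtest.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

        -- 3) Verify the data.
        SELECT * FROM dbo.CSVTest;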

    Read the article

  • Reading and conditionally updating N rows, where N > 100,000 for DNA Sequence processing

    - by makerofthings7
    I have a proof of concept application that uses Azure tables to associate DNA sequences to "something". Table 1 is the master table. It uniquely lists every DNA sequence. The PK is a load balanced hash of the RK. The RK is the unique encoded value of the DNA sequence. Additional tables are created per subject. Each subject has a list of N DNA sequences that have one reference in the Master table, where N is 100,000. It is possible for many tables to reference the same DNA sequence, but in this case only one entry will be present in the Master table. My Azure dilemma: I need to lock the reference in the Master table as I work with the data. I need to handle timeouts, and prevent other threads from overwriting my data as one C# thread is working with the information. Other threads need to realise that this is locked, and move onto other unlocked records and do the work. Ideally I'd like to get some progress report of how my computation is going, and have the option to cancel the process (and unwind the locks). Question What is the best approach for this? I'm looking at these code snippets for inspiration: http://blogs.msdn.com/b/jimoneil/archive/2010/10/05/azure-home-part-7-asynchronous-table-storage-pagination.aspx http://stackoverflow.com/q/4535740/328397

    Read the article

  • Disable .htaccess in Apache with AllowOverride None, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/

    Read the article

  • Apache2 return 404 for proxy requests before reaching WSGI

    - by Alejandro Mezcua
    I have a Django app running under Apache2 and mod_wsgi and, unfortunately, lots of requests trying to use the server as a proxy. The server is responding OK with 404 errors, but the errors are generated by the Django (WSGI) app, which causes high CPU usage. If I turn off the app and let Apache handle the response directly (send a 404), the CPU usage drops to almost 0 (mod_proxy is not enabled). Is there a way to configure Apache to respond directly to this kind of request with an error before the request hits the WSGI app? I have seen that maybe mod_security would be an option, but I'd like to know if I can do it without it. EDIT. I'll explain it a bit more. In the logs I have lots of connections trying to use the server as a web proxy (e.g. connections like GET http://zzz.zzz/ HTTP/1.1 where zzz.zzz is an external domain, not mine). These requests are passed on to mod_wsgi, which then returns a 404 (as per my Django app). If I disable the app, as mod_proxy is disabled, Apache returns the error directly. What I'd finally like to do is prevent Apache from passing the request to the WSGI app for invalid domains; that is, if the request is a proxy request, directly return the error and not execute the WSGI app. EDIT2. Here is the apache2 config, using VirtualHost files in sites-enabled (I have removed email addresses, changed IPs to xxx, and changed the server alias to sample.sample.xxx). What I'd like is for Apache to reject with an error any request that doesn't go to sample.sample.xxx; that is, accept only relative requests to the server, or fully qualified requests only to the actual ServerAlias. default:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName X.X.X.X
            ServerAlias X.X.X.X
            DocumentRoot /var/www/default
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ErrorDocument 404 "404"
            ErrorDocument 403 "403"
            ErrorDocument 500 "500"
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    actual host:

        <VirtualHost *:80>
            ErrorDocument 404 "404"
            ErrorDocument 403 "403"
            ErrorDocument 500 "500"
            WSGIScriptAlias / /var/www/sample.sample.xxx/django.wsgi
            ServerAdmin [email protected]
            ServerAlias sample.sample.xxx
            ServerName sample.sample.xxx
            CustomLog /var/www/sample.sample.xxx/log/sample.sample.xxx-access.log combined
            Alias /robots.txt /var/www/sample.sample.xxx/static/robots.txt
            Alias /favicon.ico /var/www/sample.sample.xxx/static/favicon.ico
            AliasMatch ^/([^/]*\.css) /var/www/sample.sample.xxx/static/$1
            Alias /static/ /var/www/sample.sample.xxx/static/
            Alias /media/ /var/www/sample.sample.xxx/media/
            <Directory /var/www/sample.sample.xxx/static/>
                Order deny,allow
                Allow from all
            </Directory>
            <Directory /var/www/sample.sample.xxx/media/>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    Read the article

  • Create a Self-Signed Certificate on WLS 10.3.5 Supporting the SHA-256 Algorithm

    - by adejuanc
    1) Set the domain environment so you can call keytool:
       $ . ./setDomainEnv.sh

    2) Generate the key:
       $ keytool -genkey -alias selfsignedcert -keyalg RSA -sigalg SHA256withRSA -keypass privatepassword -keystore identity.jks -storepass password -validity 365
       What is your first and last name? [Unknown]: adejuan-desktop.cl.oracle.com
       What is the name of your organizational unit? [Unknown]: a
       What is the name of your organization? [Unknown]: e
       What is the name of your City or Locality? [Unknown]: i
       What is the name of your State or Province? [Unknown]: o
       What is the two-letter country code for this unit? [Unknown]: U
       Is CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U correct? [no]: yes

    3) Export the root certificate:
       $ keytool -export -alias selfsignedcert -sigalg SHA256withRSA -file root.cer -keystore identity.jks
       Enter keystore password:
       Certificate stored in file <root.cer>

    4) Import the root certificate to the trust store:
       $ keytool -import -alias selfsignedcert -sigalg SHA256withRSA -trustcacerts -file root.cer -keystore trust.jks
       Enter keystore password: Re-enter new password:
       Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Serial number: 4f17459a
       Valid from: Wed Jan 16 15:33:22 CLST 2012 until: Thu Jan 15 15:33:22 CLST 2013
       Certificate fingerprints:
       MD5: 7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
       SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
       Signature algorithm name: SHA256withRSA
       Version: 3
       Trust this certificate? [no]: yes
       Certificate was added to keystore

    5) To check the contents of the keystore:
       $ keytool -v -list -keystore identity.jks
       Enter keystore password:
       ***************** WARNING WARNING WARNING *****************
       * The integrity of the information stored in your keystore *
       * has NOT been verified! In order to verify its integrity, *
       * you must provide your keystore password.                 *
       ***************** WARNING WARNING WARNING *****************
       Keystore type: JKS
       Keystore provider: SUN
       Your keystore contains 1 entry
       Alias name: selfsignedcert
       Creation date: Jan 18, 2012
       Entry type: PrivateKeyEntry
       Certificate chain length: 1
       Certificate[1]:
       Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
       Serial number: 4f17459a
       Valid from: Wed Jan 16 15:42:16 CLST 2012 until: Thu Jan 15 15:42:16 CLST 2013
       Certificate fingerprints:
       MD5: 7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
       SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
       Signature algorithm name: SHA256withRSA
       Version: 3

    6) In some cases, this parameter is needed in the server startup parameters:
       -Dweblogic.ssl.JSSEEnabled=true
       Otherwise, enable it from the Server configuration -> SSL -> Use JSSE checkbox.

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it. Prior to SSDT you would need to define a DEFAULT constraint, but it feels rather cumbersome to create an object purely for the purpose of pushing through a deployment; that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file: The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1] ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column. I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
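
    A minimal sketch of that Post-Deployment follow-up; the table and column are from the example above, and 1 stands in as a hypothetical real value:

        -- Post-Deployment script: replace the arbitrary 0 that Smart Defaults
        -- wrote into the new column (the value 1 is illustrative only).
        UPDATE [dbo].[T1] SET [C1] = 1 WHERE [C1] = 0;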

    Read the article

  • HyperV integration services v3.4 for 12.10?

    - by nlee
    Networking is sloooow with v3.1. How to upgrade to Integration Services v3.4 in 12.10? modinfo output in 12.10:

        filename:       /lib/modules/3.5.0-17-generic/kernel/drivers/hv/hv_vmbus.ko
        version:        3.1
        license:        GPL
        srcversion:     B1AA963EEFBAE322D970F14
        alias:          acpi*:VMBus:*
        alias:          acpi*:VMBUS:*
        depends:
        intree:         Y
        vermagic:       3.5.0-17-generic SMP mod_unload modversions

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions Manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority.

    The role of self-signed certificates within a known community

    You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. An example of a known community would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help this because you can automate the rollout, but they are not required; the major point is simply that people will trust and import your certificate.

    How to distribute self-signed certificates for a known community

    There are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are:
    - Creating a public/private key pair for signing
    - Exporting your public certificate for others
    - Importing your certificate onto machines that should trust you
    - Verifying work on a different machine

    Creating a public/private key pair for signing

    Having a public/private key pair will give you the ability both to sign items yourself and issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs. Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here. Generate the key pair:

        keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks

    Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember. Substitute your name or something like "mykey." The keyalg of EC (Elliptic Curve) and keysize of 571 will give your key good strength. All keys are set to expire; two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks; this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores. Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.

        What is your first and last name?  [Unknown]:  First Last
        What is the name of your organizational unit?  [Unknown]:  Line of Business
        What is the name of your organization?  [Unknown]:  MyCompany
        What is the name of your City or Locality?  [Unknown]:  City Name
        What is the name of your State or Province?  [Unknown]:  CA
        What is the two-letter country code for this unit?  [Unknown]:  US
        Is CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?  [no]:  yes
        Enter key password for <erikcostlow>  (RETURN if same as keystore password):

    Verify your work:

        keytool -list -keystore javakeystore_keepsecret.jks

    You should see your new key pair.

    Exporting your public certificate for others

    Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you.

        keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer

    To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file on an area that people within the known community should trust, such as an intranet page.

    Import the certificate onto machines that should trust you

    In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done out of band: email, phone, in person, etc.; known networks can usually do this. Then determine the right keystore:
    - For an individual user looking to trust another, the correct file is within that user’s directory, e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs
    - For system-wide installations, Java’s Certificate Authorities are in JAVA_HOME, e.g. C:\Program Files\Java\jre8\lib\security\cacerts
    File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore:

        keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer

    In this case, I am still using my name for the alias because it’s easy for me to remember. You may also use an alias of your company name.

    Scaling distribution of the import

    The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people would have made to this file on their machines.
    - Trusted.certs: when publishing into user directories, your file will overwrite any keys that the user has added since the last update.
    - CACerts: it is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes.

    Verify work on a different machine

    Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set as follows:
    1. Create and sign the deployment rule set on the computer that holds the private key.
    2. Copy the deployment rule set onto the different machine where you have imported the signing certificate.
    3. Verify that the Java Control Panel’s security tab shows your deployment rule set.

    Verifying an individual JAR file or multiple JAR files

    You can test a certificate chain by using the jarsigner command:

        jarsigner -verify filename.jar

    If the output does not say "jar verified" then run the following command to see why:

        jarsigner -verify -verbose -certs filename.jar

    Check the output for the term “CertPath not validated.”

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real-world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger- too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast- much faster than a single processor The actual data needs to be kept secured- another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRR’s can be calculated as part of the data warehouse refresh processes In this post, we will show how we moved from sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise- we had sample data but we did not have the full customer data yet.  So our database was empty.  But, this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it was a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we now proceeded to test our R function working with database data. As our first test, we retrieved the data from a single account from the IRR_DATA table, pull it into local R memory, then call our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”). 
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server- this is both a performance benefit because network transmission of large amounts of data take time and a security benefit because it is harder to protect private data once you start shipping around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
    The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions. For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let’s look at the SQL used in the customer proof: notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (numbers 1 thru 5 below correspond with highlighted numbers on the images; you may need to right-click on the images and view them full-screen to see the entire image). From the above, you can notice a few things:
    1. The SQL completed in 110 seconds (1.8 minutes)
    2. We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    3. We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    4. We ran with 72 degrees of parallelism spread across 4 database servers
    5. Most of our 110 seconds was spent in the “External Procedure call” event
    On average, we performed 8,200 executions of our R function per second (911k accounts/110s); each execution was passed 110 rows of data (103m detail rows/911k accounts); we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods); and we processed over 900,000 rows of database data in R per second (103m detail rows/110s).

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and the other Oracle Big Data Connectors to move data easily between Hadoop, R, and the Oracle Database.

    Read the article

  • Bash script for mysql backup - error handling

    - by Jure1873
    I'm trying to backup a bunch of MyISAM tables in a way that would allow me to rsync/rdiff the backup directory to a remote location. I've came up with a script that dumps only the recently changed tables and sets the date of the file so that rsync can pick up only the changed ones, but now I don't know how to do the error handling - I would like the script to exit with a non 0 value if there are errors. How could I do that? #/bin/bash BKPDIR="/var/backups/db-mysql" mkdir -p $BKPDIR ERRORS=0 FIELDS="TABLE_SCHEMA, TABLE_NAME, UPDATE_TIME" W_COND="UPDATE_TIME >= DATE_ADD(CURDATE(), INTERVAL -2 DAY) AND TABLE_SCHEMA<>'information_schema'" mysql --skip-column-names -e "SELECT $FIELDS FROM information_schema.tables WHERE $W_COND;" | while read db table tstamp; do echo "DB: $db: TABLE: $table: ($tstamp)" mysqldump $db $table | gzip > $BKPDIR/$db-$table.sql.gz touch -d "$tstamp" $BKPDIR/$db-$table.sql.gz done exit $ERRORS

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle Engineer Chris Kasso and Technology Evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!

    Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby; Adam is someone I have been fortunate to know for many years.

    GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information and offered tips intended for both seasoned and new users, meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish, and attendees were encouraged to pursue the tips that contained new information for them. All 50 tips can be accessed here. Below are several examples of the more elaborate tips and a final practical tip on how to get in touch with these folks.

    Tip #1: Using the login Command
    * To execute a remote command with asadmin you must provide the admin's user name and password.
    * The login command allows you to store the login credentials to be reused in subsequent commands.
    * You can be logged into multiple servers (distinguished by host and port).
    Example:
      % asadmin --host ouch login
      Enter admin user name [default: admin]>
      Enter admin password>
      Login information relevant to admin user name [admin]
      for host [ouch] and admin port [4848] stored at
      [/Users/ckasso/.asadminpass] successfully.
      Make sure that this file remains protected.
      Information stored in this file will be used by
      asadmin commands to manage the associated domain.
      Command login executed successfully.
      % asadmin --host ouch list-clusters
      c1 not running
      Command list-clusters executed successfully.

    Tip #4: Using the AS_DEBUG Env Variable
    * An environment variable to control client-side debug output.
    * Exposes: command processing info, the URL used to access the command (http://localhost:4848/__asadmin/uptime), and the raw response from the server.
    Example:
      % export AS_DEBUG=true
      % asadmin uptime
      CLASSPATH= ./../glassfish/modules/admin-cli.jar
      Commands: [uptime]
      asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
      ------- RAW RESPONSE ---------
      Signature-Version: 1.0
      message: Up 7 mins 10 secs
      milliseconds_value: 430194
      keys: milliseconds
      milliseconds_name: milliseconds
      use-main-children-attribute: false
      exit-code: SUCCESS
      ------- RAW RESPONSE ---------

    Tip #11: Using Password Aliases
    * Some resources require a password for access (e.g. DB, JMS, etc.); the resource connector is defined in the domain.xml.
    Example: Suppose the DB resource you wish to access requires an entry like this in the domain.xml:
      <property name="password" value="secretp@ssword"/>
    But company policies do not allow you to store the password in the clear.
    * Use password aliases to avoid storing the password in the domain.xml.
    * Create a password alias:
      % asadmin create-password-alias DB_pw_alias
      Enter the alias password>
      Enter the alias password again>
      Command create-password-alias executed successfully.
    * The password is stored in the domain's encrypted keystore.
    * Now update the password value in the domain.xml:
      <property name="password" value="${ALIAS=DB_pw_alias}"/>

    Tip #21: How to Start GlassFish as a Service
    * Configuring a server to start automatically at boot can be tedious, and each platform does it differently. The create-service command makes this easy:
      Windows: creates a Windows service
      Linux: an /etc/init.d script
      Solaris: a Service Management Facility (SMF) service
    * You must execute create-service with admin privileges.
    * It can be used for the DAS or for instances.
    * Try it first with the --dry-run option.
    * There is an (unsupported) _delete-server.
    Example:
      # asadmin create-service domain1
      The Service was created successfully. Here are the details:
      Name of the service: application/GlassFish/domain1
      Type of the service: Domain
      Configuration location of the service: /work/gf-3.1.2.2/glassfish3/glassfish/domains
      Manifest file location on the system: /var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml
      You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:
      * /usr/bin/svcs -a | grep domain1    // status
      * /usr/sbin/svcadm enable domain1    // start
      * /usr/sbin/svcadm disable domain1   // stop
      * /usr/sbin/svccfg delete domain1    // uninstall

    Tip #34: Posting a Command via REST
    * Use wget/curl to execute commands on the DAS.
    Example: Deploying an application
      % curl -s -S \
          -H 'Accept: application/json' -X POST \
          -H 'X-Requested-By: anyvalue' \
          -F id=@/path/to/application.war \
          -F force=true http://localhost:4848/management/domain/applications/application
    * Use @ before a file name to tell curl to send the file's contents.
    * The force option tells GlassFish to force the deployment in case the application is already deployed.

    Tip #46: Upgrading to a Newer Version
    * Upgrade applications and configuration from an earlier version.
    * Upgrade Tool (side-by-side upgrade):
      GUI: asupgrade
      CLI: asupgrade --c
      What happens? The older source domain is copied to the target domain directory, then asadmin start-domain --upgrade is run.
    * Update Tool and pkg (in-place upgrade):
      GUI: updatetool, then install all Available Updates
      CLI: pkg image-update
      Then upgrade the domain: asadmin start-domain --upgrade

    Tip #50: How to reach us?
    * GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
    * [email protected]
    * @glassfish
    * facebook.com/glassfish
    * youtube.com/GlassFishVideos
    * blogs.oracle.com/theaquarium

    Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta's new book, Java EE 6 Pocket Guide.

    Read the article

  • Application specific environment variable settings

    - by SuperElectric
    I'm trying to work around a known bug in Ubuntu 9.10, where using the scrollbar in emacs causes text to be highlighted and the cursor to move. This page here shows that you can fix this by setting an environment variable before launching emacs:

    $ GDK_NATIVE_WINDOWS=1 emacs

    So a lazy fix would be to alias "emacs" in my .bashrc:

    alias emacs="GDK_NATIVE_WINDOWS=1 emacs"

    This, however, has the drawback of setting this environment variable for all subsequent commands run from that shell. Is there any way to set GDK_NATIVE_WINDOWS=1 for just emacs, whenever I run emacs?
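    One approach, as a sketch, is a shell function instead of an alias in ~/.bashrc. Note that in bash a VAR=value prefix applies only to the environment of the single command being launched, so nothing leaks into later commands either way:

    # Wrap emacs in a function; GDK_NATIVE_WINDOWS is set only for
    # the emacs process being started, not for the rest of the shell.
    emacs() {
        GDK_NATIVE_WINDOWS=1 command emacs "$@"
    }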

    Read the article

  • Help with dual booting Windows 8.1 Professional and Ubuntu 13.10

    - by user1292548
    I recently installed a clean version of Windows 8.1 Professional on my Lenovo Y500 (with a Samsung 256GB 840 Pro SSD). I have Windows all set up and running normally. I am trying to dual boot Windows 8.1 and Ubuntu 13.10, but the installation procedure doesn't allow me to "Install alongside...", nor does it show my SSD partitions correctly when I choose the "Something Else" option. I have created 25GB of free space in the Windows disk manager, but the Ubuntu installation screen shows the whole drive as free space. I have tried installing from both a burned .ISO disk and a bootable USB; the results are the same. Windows Disk Management screen: http://imageshack.us/a/img855/9504/59zu.jpg The Ubuntu installation screen: http://imageshack.us/a/img62/2712/9g6i.jpg I ran into this problem before when trying to dual boot Ubuntu and Windows 7 Professional a month ago, but I gave up and never resolved the issue. --EDIT-- I tried what Eero Aaltonen suggested, and this is my result:

    ubuntu@ubuntu:~$ sudo parted /dev/sda print
    Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table.
    However, it does not have a valid fake msdos partition table, as it should.
    Perhaps it was corrupted -- possibly by a program that doesn't understand GPT
    partition tables. Or perhaps you deleted the GPT table, and are now using an
    msdos partition table. Is this a GPT partition table?
    Yes/No? yes
    Model: ATA Samsung SSD 840 (scsi)
    Disk /dev/sda: 256GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt

    Number  Start  End  Size  File system  Name  Flags

    ubuntu@ubuntu:~$
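    For what it's worth, that parted warning usually indicates leftover GPT signatures coexisting with an MBR-style table, which can confuse the Ubuntu installer. A hedged sketch of one common cleanup, assuming the disk is meant to stay MBR (UEFI installs of Windows use GPT instead, so check how Windows was installed, and back up first):

    # From the Ubuntu live session:
    sudo apt-get install gdisk   # provides the fixparts utility
    sudo fixparts /dev/sda       # offers to remove stray GPT data from an MBR disk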

    Read the article

  • SQL language drawbacks, The Third Manifesto

    - by David Portabella
    Some time ago I read about drawbacks of the SQL language (the basic language specification, not vendor-specific features), and one of them was that the language does not allow you to create a set of tuples that doesn't come from a table. For instance, SELECT firstName, lastName FROM people; creates a set of tuples coming from the table people. Now, if I don't have this table people and I want to return a constant, I'd need something like this to return a set of two tuples (without requiring a table):

    SELECT VALUES('james', 'dean'), ('tom', 'cruisse');

    Why would I need that? For the same reasons that we can define constants (not only basic types, but objects and arrays too) in any advanced programming language. Workarounds: yes, I could create a temporary table, fill in the data, and SELECT from that table. But this is a hack to work around the limitations of the poor SQL language. I think I read about this somewhere in "The Third Manifesto", but I can no longer find the paragraph/example discussing this concrete drawback. Do you know a reference for it?
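    For context, the SQL standard does define table value constructors for exactly this, although vendor support varies. A sketch of two forms that work in, for example, PostgreSQL:

    -- A standalone table value constructor:
    VALUES ('james', 'dean'), ('tom', 'cruisse');

    -- The same rows as a derived table with named columns:
    SELECT t.firstName, t.lastName
      FROM (VALUES ('james', 'dean'),
                   ('tom', 'cruisse')) AS t(firstName, lastName);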

    Read the article

< Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >