Search Results

Search found 6776 results on 272 pages for 'ssis 2012'.


  • "The server closed the connection without sending any data"

    - by Toby
    Server setup / The problem / Diagnostic information / What I've tried / Specific help needed

    1. Server setup

    I have the following server setup:

        Debian Squeeze Linux 2.6.32-5-amd64
        Apache2-mpm-prefork 2.2.16-6+squeeze10
        PHP 5.3.3-7+squeeze14

    This server is protected with the Suhosin Patch 0.9.9.1. Max Requests Per Child: 0 - Keep Alive: on - Max Per Connection: 100. Timeouts: Connection: 300 - Keep-Alive: 15.

    Loaded modules: core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_auth_digest mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_reqtimeout mod_rewrite mod_setenvif mod_ssl mod_status

    Wordpress 3.4.2 (upgrading to 3.5 soon :)

    2. The problem

    When I restart the server (sudo shutdown -r now), going to any website page results in the following error from the web browser (in this case Google Chrome, but other browsers also show the same error). This error can also occur an hour or so after all is working ok, seemingly randomly, which is my biggest concern as it means my server is not reliable:

        No data received. Unable to load the web page because the server sent no data. Here are some suggestions: Reload this web page later. Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    3. Diagnostic information

    The Apache error log contains the following entries:

        [Fri Dec 14 22:23:27 2012] [notice] child pid 1955 exit signal Floating point exception (8)
        [Fri Dec 14 22:23:27 2012] [notice] child pid 1956 exit signal Floating point exception (8)
        [Fri Dec 14 22:23:29 2012] [notice] child pid 1957 exit signal Floating point exception (8)
        [Fri Dec 14 22:23:30 2012] [notice] child pid 1958 exit signal Floating point exception (8)
        [Fri Dec 14 22:23:32 2012] [notice] child pid 1959 exit signal Floating point exception (8)
        [Fri Dec 14 22:23:32 2012] [notice] child pid 1960 exit signal Floating point exception (8)
        [Fri Dec 14 22:23:34 2012] [notice] child pid 1961 exit signal Floating point exception (8)
        [Fri Dec 14 22:23:34 2012] [notice] child pid 1962 exit signal Floating point exception (8)

    4. What I've tried

    a) I can 'fix' the website temporarily by restarting Apache twice (restarting it once does not work) using the following commands. NB: the 'reload' option does not work, I have to use restart twice. However, the error can reoccur sometime later.

        sudo /etc/init.d/apache2 restart
        sudo /etc/init.d/apache2 restart

    b) I have tried disabling Suhosin by uninstalling php5-suhosin, but a phpinfo page still shows "This server is protected with the Suhosin Patch 0.9.9.1". I have tried putting Suhosin into simulation mode by creating a file /etc/php5/apache2/conf.d/suhosin.ini containing:

        [suhosin]
        suhosin.simulation = On

    The phpinfo page shows the suhosin.ini file in the list of "Additional .ini files parsed", but it still shows "This server is protected with the Suhosin Patch 0.9.9.1".

    c) Increasing the PHP memory limit in /etc/php5/apache2/:

        ; Maximum amount of memory a script may consume (128MB)
        ; http://php.net/memory-limit
        memory_limit = 512M

    d) Disabling all Wordpress plugins, and going back to the default theme.

    5. Specific help needed

    I would very much like help in debugging what is going on here. I am not sure how to determine which processes in the Apache error log are exiting with "[notice] child pid 1955 exit signal Floating point exception (8)", or what is causing them to exit. And whether Suhosin is part of the problem (and how to disable it if it is). Thank you in advance for any advice or tips you can offer in helping me debug this.
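
    To see what is actually crashing inside those child processes, one option is to let Apache write core dumps and read them with gdb. The sketch below assumes a Debian-style layout; the dump directory and the www-data user are examples, not taken from the question:

        # give Apache a writable place for core files and allow dumps
        mkdir -p /var/cache/apache2-cores && chown www-data /var/cache/apache2-cores
        echo 'CoreDumpDirectory /var/cache/apache2-cores' > /etc/apache2/conf.d/coredump.conf
        ulimit -c unlimited        # run in the shell that starts Apache, or set it in the init script
        /etc/init.d/apache2 restart

        # after the next "exit signal Floating point exception (8)" line appears:
        gdb /usr/sbin/apache2 /var/cache/apache2-cores/core
        (gdb) bt                   # the backtrace shows whether PHP, Suhosin or another module is crashing

    A backtrace pointing into Suhosin would answer the second question directly. Note that on Debian Squeeze the Suhosin patch is compiled into the PHP packages themselves, so removing the php5-suhosin extension does not remove the "protected with the Suhosin Patch" banner from phpinfo.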

    Read the article

  • How to properly secure Windows Server 2008 R2 that will host SQL Server 2012?

    - by Max
    I am a .NET programmer trying to create this setup: I want this server to be inaccessible through the DMZ except for IPsec connections, and to also have a private network which will be accessible through another Windows 2008 server that will host VPN. That is how our Windows 2003 infrastructure works and I am trying to do the same with 2008 servers. Are there any guides or documentation that cover this scenario?

    Read the article

  • what special issues are at play when loading a config file from the command prompt with DTExec

    - by Ralph Shillington
    If I run a package from Management Studio and specify a configuration file, everything works as expected. However, if I try to run the package from the command prompt with DTExec I get the error: Cannot load the XML configuration file. The XML configuration file may be malformed or not valid. The command I'm using to execute the package is:

        dtexec /conf ConfigurationDemo.dtsConfig /f Package.dtsx

    I am running dtexec from the folder where these two files reside. Is there an additional switch, or something else that must be used, to get dtexec to behave the same way as Management Studio when launching a package?
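
    For reference, a hedged sketch of the same invocation with the switches spelled out in full and verbose reporting switched on; these are standard dtexec options, and only the file names come from the question:

        dtexec /File Package.dtsx /ConfigFile ConfigurationDemo.dtsConfig /Reporting V

    Verbose reporting surfaces considerably more detail about the package-load and configuration phases, which usually makes a wrong working directory, a relative-path problem or malformed configuration XML easier to pin down.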

    Read the article

  • Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

    - by Matt
    I have been tasked with demoing how Integration Services handles an error during a data flow to show that no data makes it into the destination. This is an existing package and I want to limit the code changes to the package as much as possible (since this is most likely a one time deal). The scenario that is trying to be understood is a "systemic" failure - the source file disappears midstream, or the file server loses power, etc. I know I can make this happen by having the Error Output of the source set to Failure and introducing bad data but I would like to do something lighter than that. I suppose I could add a Script Transform task and look for a certain value and throw an error but I was hoping someone has come up with something easier / more elegant. Thanks, Matt
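
    If the Script Transform route is taken, the body can stay tiny. A hedged C# sketch for an SSIS script component (the ForceFailure column is purely illustrative; any sentinel value in an existing column works the same way):

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // ForceFailure is a hypothetical flag column added only for the demo.
            if (!Row.ForceFailure_IsNull && Row.ForceFailure)
            {
                // Throwing fails the component mid-stream, simulating a systemic failure.
                throw new System.Exception("Simulated failure for rollback demo.");
            }
        }

    Whether anything ends up in the destination still depends on the destination's batch size and transaction settings, which is exactly what the demo is meant to exercise.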

    Read the article

  • Derived Column Editor

    - by Rob Bowman
    Hi, I need to assign a formatted date to a column in a data flow. I have added a Derived Column editor and entered the following expression:

        "BBD" + SUBSTRING((DT_WSTR,4)DATEADD("Day",30,GETDATE()),1,4) + SUBSTRING((DT_WSTR,2)DATEADD("Day",30,GETDATE()),6,2) + SUBSTRING((DT_WSTR,2)DATEADD("Day",30,GETDATE()),9,2)

    The problem is that the "Derived Column Transformation Editor" automatically assigns a Data Type of "Unicode string [DT_WSTR]" and a length of "7". However, the length of the string is 11, therefore the following exception is thrown each time:

        [Best Before Date [112]] Error: The "component "Best Before Date" (112)" failed because truncation occurred, and the truncation row disposition on "output column "Comments" (132)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.

    Does anyone know why the editor is insisting on a length of 7? I don't seem to be able to change this. Many thanks, Rob.
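
    A hedged workaround that usually sidesteps the inferred metadata: cast the whole concatenation to the width actually needed, so the editor derives an 11-character DT_WSTR for the output column instead of taking the length from the first operand. Expression syntax only; not verified against this exact package:

        (DT_WSTR, 11)("BBD" + SUBSTRING((DT_WSTR,4)DATEADD("Day",30,GETDATE()),1,4)
                            + SUBSTRING((DT_WSTR,2)DATEADD("Day",30,GETDATE()),6,2)
                            + SUBSTRING((DT_WSTR,2)DATEADD("Day",30,GETDATE()),9,2))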

    Read the article

  • how to get the sql connection

    - by sweetsecret
    FS_Setting is a VB class which has all the details of the connections, i.e.:

        Public Class FS_Setting
          Public Function Get_RS_Connection() As SqlConnection
            Try
              Get_RS_Connection = New SqlConnection("Data Source=***********;User ID=sa;Password=*****;database=*********")
            Catch ex As System.Exception
              Throw New System.Exception("Get_RS_Connection Error:" + ex.Message)
            End Try
          End Function

    I need to call the function Get_RS_Connection() in a different class instead of getting the connection all the way again and hard coding. I want to call the above class where the SQL connection is declared:

        Namespace FS_Library
        Public Class FS_Errorlog
          Inherits FS_BaseClass
          Try
            cn = New SqlConnection("Data Source=***********;UserID=sa;Password=*****;database=*********")
            cmd = New SqlCommand("dbo.FS_ErrorLog_ADD", cn)
            cmd.CommandType = CommandType.StoredProcedure
            cmd.CommandTimeout = Convert.ToInt32(ConfigurationSettings.AppSettings("Command_Timeout"))
            Me.AddParameter(cmd, "@p_tableKey", SqlDbType.Int, tableKey)
            Me.AddParameter(cmd, "@p_FunctionCode", SqlDbType.Int, FunctionCode)
            Me.AddParameter(cmd, "@p_TableAlias", SqlDbType.VarChar, TableAlias)
            Me.AddParameter(cmd, "@p_ValidationCode", SqlDbType.Int, ValidationCode)
            If Filename = "" Then
              Filename = "N/A"
            End If
            Me.AddParameter(cmd, "@p_FileName", SqlDbType.VarChar, Filename)
            Me.AddParameter(cmd, "@p_Message", SqlDbType.VarChar, Message)
            Me.AddParameter(cmd, "@p_CreateUser", SqlDbType.VarChar, userID)
            Me.AddParameter(cmd, "@p_UserActionID", SqlDbType.UniqueIdentifier, UserActionID)
            cn.Open()
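
    A hedged sketch of the reuse being asked for, assuming FS_Setting is visible from the FS_Library project (via a project reference and, if needed, an Imports for its namespace); only the class and method names come from the question, the rest is illustrative:

        ' Inside FS_Errorlog, replace the hard-coded connection with the shared factory:
        Dim settings As New FS_Setting()
        cn = settings.Get_RS_Connection()
        cmd = New SqlCommand("dbo.FS_ErrorLog_ADD", cn)
        ' ... parameters added exactly as before ...
        cn.Open()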

    Read the article

  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it?

        ALTER PROCEDURE [dbo].[copyTable2Table]
          @sdb varchar(30),
          @stable varchar(30),
          @tdb varchar(30),
          @ttable varchar(30),
          @raiseerror bit = 1,
          @debug bit = 0
        as
        begin
          set nocount on

          declare @source varchar(65)
          declare @target varchar(65)
          declare @dropstmt varchar(100)
          declare @insstmt varchar(100)
          declare @ErrMsg nvarchar(4000)
          declare @ErrSeverity int

          set @source = '[' + @sdb + '].[dbo].[' + @stable + ']'
          set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']'
          set @dropStmt = 'drop table ' + @target
          set @insStmt = 'select * into ' + @target + ' from ' + @source
          set @errMsg = ''
          set @errSeverity = 0

          if @debug = 1 print('Drop:' + @dropStmt + ' Insert:' + @insStmt)

          -- drop the target table, copy the source table to the target
          begin try
            begin transaction
            exec(@dropStmt)
            exec(@insStmt)
            commit
          end try
          begin catch
            if @@trancount > 0 rollback
            select @errMsg = error_message(), @errSeverity = error_severity()
          end catch

          -- update the log table
          insert into HHG_system.dbo.copyaudit
            (copytime, copyuser, source, target, errmsg, errseverity)
          values(
            getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)

          if @debug = 1 print ( 'Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity) )

          -- handle errors, return value
          if @errMsg <> ''
          begin
            if @raiseError = 1 raiserror(@errMsg, @errSeverity, 1)
            return 1
          end
          return 0
        END

    Thanks!
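
    One hardening step reviewers usually suggest for dynamic SQL like this, offered as a sketch rather than a full rewrite: build the object names with QUOTENAME and size the statement variables generously, so unusual database or table names cannot break out of the brackets or be silently truncated:

        set @source = QUOTENAME(@sdb) + '.[dbo].' + QUOTENAME(@stable)
        set @target = QUOTENAME(@tdb) + '.[dbo].' + QUOTENAME(@ttable)
        -- @dropstmt / @insstmt are only varchar(100); assignments that exceed that length
        -- truncate without an error, so nvarchar(max) (or at least a few hundred characters) is safer.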

    Read the article

  • SQL Server - CAST AND DIVIDE

    - by rs
    DECLARE @table table(XYZ VARCHAR(8), id int)

    INSERT INTO @table
    SELECT '4000', 1 UNION ALL
    SELECT '3.123', 2 UNION ALL
    SELECT '7.0', 3 UNION ALL
    SELECT '80000', 4 UNION ALL
    SELECT NULL, 5

    SELECT CASE
             WHEN PATINDEX('^[0-9]{1,5}[\.][0-9]{1,3}$', XYZ) = 0 THEN XYZ
             WHEN PATINDEX('^[0-9]{1,8}$', XYZ) = 0 THEN CAST(XYZ AS decimal(18,3))/1000
             ELSE NULL
           END
    FROM @table

    This part - CAST(XYZ AS decimal(18,3))/1000 - doesn't divide the value; it gives me more zeros after the decimal point instead of dividing it. (I even enclosed it in brackets and tried, but got the same result.) Am I doing something wrong here?
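
    Two things are probably combining here, sketched below rather than offered as a definitive rewrite. PATINDEX understands LIKE wildcards, not regular expressions, so a pattern like '^[0-9]{1,5}...' never matches and the first branch always wins; and a CASE returns a single data type, so because one branch yields a decimal, the string branch is implicitly converted to decimal as well, which is where the extra zeros come from. Testing for "digits only" with LIKE wildcards and casting both branches to one type yourself makes the behaviour explicit:

        SELECT id, XYZ,
               CASE
                 WHEN XYZ <> '' AND XYZ NOT LIKE '%[^0-9]%'   -- whole number: divide, then fix the scale
                   THEN CAST(CAST(CAST(XYZ AS decimal(18,3)) / 1000 AS decimal(18,3)) AS varchar(20))
                 ELSE XYZ                                     -- decimals and NULLs left untouched
               END AS XYZ_scaled
        FROM @table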

    Read the article

  • SQL Server Agent 2005 job runs but no output

    - by alimack
    Essentially I have a job which runs in BIDS and as a stand-alone package, but while it runs under the SQL Server Agent it doesn't complete properly (no error messages though). The job steps are: 1) Delete all rows from the table; 2) Use a For Each loop to fill up the table from Excel spreadsheets; 3) Clean up the table. I've tried the steps on the relevant MS documentation page (steps 1 & 2) and didn't see any need to start changing from server-side security. I also tried a SQLServerCentral.com article, with no resolution. How can I get error logging or a fix? Note I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming. I have logged in as the proxy account I'm running this under, and the job runs stand-alone but complains that the Excel tables are empty.
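
    Since a proxy is already in the picture, here is a hedged sketch of how the credential/proxy wiring for the SSIS subsystem is normally set up; all names and the password are placeholders, and this is only relevant if the job step is not yet running under a proxy that can reach the Excel files:

        -- a Windows account that can open the Excel files and run the package
        CREATE CREDENTIAL SsisExcelCredential
            WITH IDENTITY = N'DOMAIN\ssis_runner', SECRET = N'placeholder-password';

        EXEC msdb.dbo.sp_add_proxy
            @proxy_name = N'SsisExcelProxy',
            @credential_name = N'SsisExcelCredential',
            @enabled = 1;

        EXEC msdb.dbo.sp_grant_proxy_to_subsystem
            @proxy_name = N'SsisExcelProxy',
            @subsystem_name = N'SSIS';   -- allow SSIS package job steps to use this proxy

    It is also worth ruling out a 32-bit vs 64-bit mismatch: the Agent runs packages 64-bit by default, and the Jet/ACE Excel provider is frequently 32-bit only, which commonly breaks Excel connections that work fine inside BIDS.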

    Read the article

  • how to get external variable value in dtsx package.

    - by Rishabh
    Hi, I am executing a .dtsx package from C# and it executes fine. If I am passing one variable value in from the C# code, how can I get it in the .dtsx package for my OLE DB source query? Here is my C# code:

        string file = @"D:\CYNCZFuzzy\CYNCZFuzzy\Contact.dtsx";
        package = app.LoadPackage(file, null);
        Variables vars = package.Variables;
        vars["User::parentContactID"].Value = 1028203;
        pkgResults = package.Execute();
        string result = pkgResults.ToString();

    I need this 1028203 value in my OLE DB source query; here is my query:

        select cr.MasterContactID as ParentContactID, c.ID, C.FirstName, C.MiddleName, c.LastName, c.ID as FieldID
        from Contact c
        inner join ContactRelation cr on cr.SlaveContactID = c.ID
        where RelationshipID = 1
          AND cr.MasterContactID = ?

    What should I write at the ? to get the 1028203 value passed in from the C# side? Thanks in advance...
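
    The ? itself stays as it is; it only needs to be bound to the variable. With the OLE DB Source in "SQL command" mode, the Parameters... button lets Parameter0 be mapped to User::parentContactID. An alternative, if parameter mapping proves awkward, is to build the whole statement in a second string variable with an expression and switch the source to "SQL command from variable"; a hedged sketch of such an expression follows (the variable holding it, e.g. User::srcQuery with EvaluateAsExpression set to True, is illustrative):

        "select cr.MasterContactID as ParentContactID, c.ID, C.FirstName, C.MiddleName, c.LastName, c.ID as FieldID "
        + "from Contact c inner join ContactRelation cr on cr.SlaveContactID = c.ID "
        + "where RelationshipID = 1 AND cr.MasterContactID = "
        + (DT_WSTR, 12)@[User::parentContactID]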

    Read the article

  • Syntax Error in showing Error Description

    - by Sreejesh Kumar
    What is the correct syntax for using "@[System::ErrorDescription]" inside a query such as an "INSERT"? I am unable to retrieve the correct error description in the table; the value stored in the table is the literal text "@[System::ErrorDescription]" rather than the description itself, so I am not getting the result I expect.
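
    That literal text appears because the variable reference is being treated as part of the SQL string rather than evaluated. Two common fixes, sketched here with a hypothetical dbo.ErrorLog table: map the variable to a ? parameter on the Execute SQL Task's Parameter Mapping page (with the statement written as INSERT INTO dbo.ErrorLog (Description) VALUES (?)), or build the whole statement as an expression on the task's SqlStatementSource property, for example:

        "INSERT INTO dbo.ErrorLog (Description) VALUES ('"
        + REPLACE(@[System::ErrorDescription], "'", "''")
        + "')"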

    Read the article

  • Return dataset in dataflow

    - by praveen
    Hi All, Could I get ideas on retrieving a dataset using the lookup method? Basically, my scenario is that my source data needs to look up against another source table, and for each matching column value I need to get all the records from that other source; it's a one-to-many relationship. I tried the Lookup, but it returns only one record per match, and the OLE DB Command doesn't retrieve any data as it only performs Insert/Update operations. Thanks prav

    Read the article

  • how to choose which row to insert with same id in sql?

    - by user1429595
    So basically I have a table called "table_1":

        ID  Index  STATUS    TIME   DESCRIPTION
        1   15     pending   1:00   Started Pending
        1   16     pending   1:05   still in request
        1   17     pending   1:10   still in request
        1   18     complete  1:20   Transaction has been completed
        2   19     pending   2:25   request has been started
        2   20     pending   2:30   in progress
        2   21     pending   2:35   in progess still
        2   22     pending   2:40   still pending
        2   23     complete  2:45   Transaction Compeleted

    I need to insert this data into my second table "table_2" where only the start and complete rows are included, so my "table_2" should look like this:

        ID  Index  STATUS    TIME   DESCRIPTION
        1   15     pending   1:00   Started Pending
        1   18     complete  1:20   Transaction has been completed
        2   19     pending   2:25   request has been started
        2   23     complete  2:45   Transaction Compeleted

    If anyone can help me write a SQL query for this I would highly appreciate it. Thanks in advance.
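
    A hedged sketch of one way to do it, using only the table and column names from the question ([Index] and [TIME] are bracketed because they can clash with reserved words and type names): keep, per ID, the first row plus any completed row.

        INSERT INTO table_2 (ID, [Index], STATUS, [TIME], DESCRIPTION)
        SELECT ID, [Index], STATUS, [TIME], DESCRIPTION
        FROM (
            SELECT ID, [Index], STATUS, [TIME], DESCRIPTION,
                   ROW_NUMBER() OVER (PARTITION BY ID ORDER BY [Index]) AS rn
            FROM table_1
        ) AS t
        WHERE t.rn = 1                -- first row per ID (the "start" row)
           OR t.STATUS = 'complete';  -- plus the completed row per ID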

    Read the article

  • Handling primary key duplicates in a data warehouse load

    - by Meff
    I'm currently building an ETL system to load a data warehouse from a transactional system. The grain of my fact table is the transaction level. In order to ensure I don't load duplicate rows I've put a primary key on the fact table, which is the transaction ID. I've encountered a problem with transactions being reversed - In the transactional database this is done via a status, which I pick up and I can work out if the transaction is being done, or rolled back so I can load a reversal row in the warehouse. However, the reversal row will have the same transaction ID and so I get a primary key violation. I've solved this for now by negating the primary key, so transaction ID 1 would be a payment, and transaction ID -1 (In the warehouse only) would be the reversal. I have considered an alternative of generating a BIT column, where 0 is normal and 1 is reversal, then making the PK the transaction ID and the BIT column. My question is, is this a good practice, and has anyone else encountered anything like this? For reference, this is a payment processing system, so values will not be modified, so there will only ever be transactions and reversals.
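
    For what it's worth, the BIT-flag option described above is a very small schema change; a hedged sketch with an illustrative fact table (the names are not from a real model):

        CREATE TABLE dbo.FactPayment
        (
            TransactionID INT      NOT NULL,
            IsReversal    BIT      NOT NULL DEFAULT (0),  -- 0 = original posting, 1 = reversal
            Amount        MONEY    NOT NULL,
            LoadDate      DATETIME NOT NULL,
            CONSTRAINT PK_FactPayment PRIMARY KEY CLUSTERED (TransactionID, IsReversal)
        );

    It keeps the natural transaction ID intact for joins back to the source system, which the negative-key trick makes harder.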

    Read the article

  • Package start and end times

    - by abkl
    Hi, I wanted to know if there is any system variable which I can use to access the package execution start and end times. My requirement is that I need to store these in 2 relevant fields in a table in the database. Thank you. Abhi
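
    For the start time there is a built-in system variable, System::StartTime, which is set when the package begins executing; there is no matching end-time variable, so the usual pattern is to capture the end inside the logging statement itself. A hedged sketch using an Execute SQL Task placed at the end of the control flow (the log table is hypothetical; the two ? parameters are mapped on the Parameter Mapping page to System::PackageName and System::StartTime):

        INSERT INTO dbo.PackageRunLog (PackageName, StartTime, EndTime)
        VALUES (?, ?, GETDATE());   -- GETDATE() at this point is effectively the end time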

    Read the article

  • Package Configurations

    - by Sreejesh Kumar
    I had created a package with the necessary connections and enabled Package Configurations, but I am unable to execute the package. An error is shown, and I think it is related to a connection failure.

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure endpoint enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure endpoint enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish and *BOOM* in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents.

    Introduction

    The slide show below should help you understand the high-level details of what we are trying to achieve: Leveraging Azure for Performance Testing (view more PowerPoint from Avanade).

    Scenario 1 – Running a Test Rig in Windows Azure

    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment & configuration of Windows Azure Worker Roles for Test Controller and Test Agent
    - Automate deployment & configuration of the SQL database on the Test Controller Worker Role
    - Scale Test Agents on demand
    - Create a Web Performance Test and a simple Load Test
    - Manage Test Controllers right from Visual Studio on the on-premise developer workstation
    - View results of the Load Test
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 test rigs

    Scenario 2 – The scaled out Test Rig and sharing data using SQL Azure

    A scaled out version of this implementation would involve running multiple test rigs in the cloud; in this scenario I will show you how to sync the load test database from these distributed test rigs into one SQL Azure database using Azure sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure Load Test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 test rigs

    The Ingredients

    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios:
    1. Windows Azure Subscription
    2. Windows Azure Storage – Blob Storage
    3. Windows Azure Compute – Worker Role
    4. SQL Azure Database
    5. SQL Data Sync
    6. Windows Azure Connect – End points
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 – Ultimate or Visual Studio 2010 – Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough

    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role set up starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure Blob storage. SQL Express, test controller and test agent expose various switches which we can take advantage of, including the quiet install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated to the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network amongst the machines enabling two-way communication. All of the above can be done programmatically; let's see step by step how...

    Scenario 1: Video Walkthrough – Leveraging Windows Azure for Performance Testing

    Scenario 2: Work in progress, watch this space for more...

    Solution

    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS preview project which has a re-usable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.

    Conclusion

    Other posts and resources available here. Possibilities.... Endless!

    Read the article

  • Row Count Plus Transformation

    As the name suggests, we have taken the current Row Count Transform that is provided by Microsoft in the Integration Services toolbox and we have recreated the functionality and extended upon it. There are two things about the current version that we thought could do with cleaning up:

    - Lack of a custom UI
    - You have to type the variable name yourself

    In the Row Count Plus Transformation we solve these issues for you. Another thing we thought was missing is the ability to calculate the time taken between components in the pipeline. An example usage would be that you want to know how many rows flowed between Component A and Component B and how long it took. Again we have solved this issue. Credit must go to Erik Veerman of Solid Quality Learning for the idea behind noting the duration. We were looking at one of his packages and saw that he was doing something very similar, but he was using a Script Component as a transformation. Our philosophy is that if you have to write or copy and paste the same piece of code more than once then you should be thinking about a custom component, and here it is.

    The Row Count Plus Transformation populates variables with the values returned from:

    - Counting the rows that have flowed through the path
    - Returning the time in seconds between when it first saw a row come down this path and when it saw the final row

    It is possible to leave both these boxes blank and the component will still work.

    All input columns are passed through the transformation unaltered; you are not permitted to change or add to the inputs or outputs of this component. Optionally you can set the component to fire an event, which happens during the PostExecute phase of the execution. This can be useful to improve visibility of this information, such that it is captured in package logging, or can be used to drive workflow in the case of an error event.

    Properties:

        Property                 Data Type                        Description
        OutputRowCountVariable   String                           The name of the variable into which the count of rows read will be passed (optional).
        OutputDurationVariable   String                           The name of the variable into which the duration in seconds will be passed (optional).
        EventType                RowCountPlusTransform.EventType  The type of event to fire during post execute, included in which are the row count and duration values.

    RowCountPlusTransform.EventType enumeration:

        Name         Value  Description
        None         0      Do not fire any event.
        Information  1      Fire an Information event.
        Warning      2      Fire a Warning event.
        Error        3      Fire an Error event.

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    For 2005/2008 only - finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Count Plus Transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?

    We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Count Plus Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    - Row Count Plus Transformation for SQL Server 2005
    - Row Count Plus Transformation for SQL Server 2008
    - Row Count Plus Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    - Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)

    SQL Server 2005
    - Version 1.1.0.43 - Bug fix for duration. For long running processes the duration second count may have been incorrect. (8 Sep 2006)
    - Version 1.1.0.42 - SP1 compatibility testing. Added the ability to raise an event with the count and duration data for easier logging or workflow. (18 Jun 2006)
    - Version 1.0.0.1 - SQL Server 2005 RTM. Made available as general public release. (20 Mar 2006)

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try and use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Read the article
