Search Results


  • XNA 4.0: 2D Camera Y and X are going in the wrong direction

    - by Setheron
    I asked this question on Stack Overflow, but this seems like a better area to get a more informed answer. My problem is that I am trying to create a camera class that follows a proper right-handed coordinate system (RHS), but the Y axis seems to be inverted, since on screen the 0 starts at the top. Here is my Camera2D class:

        class Camera2D
        {
            private Vector2 _position;
            private float _zoom;
            private float _rotation;
            private float _cameraSpeed;
            private Viewport _viewport;
            private Matrix _viewMatrix;
            private Matrix _viewMatrixIverse;

            public static float MinZoom = float.Epsilon;
            public static float MaxZoom = float.MaxValue;

            public Camera2D(Viewport viewport)
            {
                _viewMatrix = Matrix.Identity;
                _viewport = viewport;
                _cameraSpeed = 4.0f;
                _zoom = 1.0f;
                _rotation = 0.0f;
                _position = Vector2.Zero;
            }

            public void Move(Vector2 amount)
            {
                _position += amount;
            }

            public void Zoom(float amount)
            {
                _zoom += amount;
                // MathHelper.Clamp expects (value, min, max); the original call
                // had MaxZoom and MinZoom swapped, which pins _zoom to MinZoom.
                _zoom = MathHelper.Clamp(_zoom, MinZoom, MaxZoom);
                UpdateViewTransform();
            }

            public Vector2 Position
            {
                get { return _position; }
                set { _position = value; UpdateViewTransform(); }
            }

            public Matrix ViewMatrix
            {
                get { return _viewMatrix; }
            }

            private void UpdateViewTransform()
            {
                Matrix proj = Matrix.CreateTranslation(new Vector3(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0)) *
                              Matrix.CreateScale(new Vector3(1f, 1f, 1f));
                _viewMatrix = Matrix.CreateRotationZ(_rotation) *
                              Matrix.CreateScale(new Vector3(_zoom, _zoom, 1.0f)) *
                              Matrix.CreateTranslation(_position.X, _position.Y, 0.0f);
                _viewMatrix = proj * _viewMatrix;
            }
        }

    I test it using SpriteBatch in the following way:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            Vector2 position = new Vector2(0, 0);

            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                              null, null, null, null, camera.ViewMatrix);
            Texture2D circle = CreateCircle(100);
            spriteBatch.Draw(circle, position, Color.Red);
            spriteBatch.End();

            base.Draw(gameTime);
        }
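    A common way to reconcile a right-handed world with XNA's top-down screen space is to flip Y inside the view transform itself. A minimal sketch, assuming the same fields as the Camera2D above (note it also negates the camera position, which a view matrix normally needs, unlike the translation by +_position in the original):

        private void UpdateViewTransform()
        {
            _viewMatrix = Matrix.CreateTranslation(-_position.X, -_position.Y, 0f) *
                          Matrix.CreateRotationZ(_rotation) *
                          Matrix.CreateScale(_zoom, _zoom, 1f) *
                          // Flip Y: world +Y now points up, though screen Y grows downward.
                          Matrix.CreateScale(1f, -1f, 1f) *
                          Matrix.CreateTranslation(_viewport.Width * 0.5f, _viewport.Height * 0.5f, 0f);
        }

    One side effect to expect: under a Y flip, SpriteBatch draws each texture mirrored, so sprites may need SpriteEffects.FlipVertically when drawn.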


  • Correct syntax in stored procedure and method using MsSqlProvider.ExecProcedure? [migrated]

    - by Dudi
    I have a problem with ASP.NET and a database procedure. My procedure in the MSSQL database:

        USE [dbase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[top1000]
            @Published datetime output,
            @Title nvarchar(100) output,
            @Url nvarchar(1000) output,
            @Count INT output
        AS
            SET @Published = (SELECT TOP 1000 dbo.vst_download_files.dfl_date_public
                              FROM dbo.vst_download_files
                              ORDER BY dbo.vst_download_files.dfl_download_count DESC)
            SET @Title = (SELECT TOP 1000 dbo.vst_download_files.dfl_name
                          FROM dbo.vst_download_files
                          ORDER BY dbo.vst_download_files.dfl_download_count DESC)
            SET @Url = (SELECT TOP 1000 dbo.vst_download_files.dfl_source_url
                        FROM dbo.vst_download_files
                        ORDER BY dbo.vst_download_files.dfl_download_count DESC)
            SET @Count = (SELECT TOP 1000 dbo.vst_download_files.dfl_download_count
                          FROM dbo.vst_download_files
                          ORDER BY dbo.vst_download_files.dfl_download_count DESC)

    And my method in the website project:

        public static void Top1000()
        {
            List<DownloadFile> List = new List<DownloadFile>();
            SqlDataReader dbReader;

            SqlParameter published = new SqlParameter("@Published", SqlDbType.DateTime2);
            published.Direction = ParameterDirection.Output;
            SqlParameter title = new SqlParameter("@Title", SqlDbType.NVarChar);
            title.Direction = ParameterDirection.Output;
            SqlParameter url = new SqlParameter("@Url", SqlDbType.NVarChar);
            url.Direction = ParameterDirection.Output;
            SqlParameter count = new SqlParameter("@Count", SqlDbType.Int);
            count.Direction = ParameterDirection.Output;
            SqlParameter[] parm = { published, title, count };

            dbReader = MsSqlProvider.ExecProcedure("top1000", parm);
            try
            {
                while (dbReader.Read())
                {
                    DownloadFile df = new DownloadFile();
                    //df.AddDate = dbReader["dfl_date_public"];
                    df.Name = dbReader["dlf_name"].ToString();
                    df.SourceUrl = dbReader["dlf_source_url"].ToString();
                    df.DownloadCount = Convert.ToInt32(dbReader["dlf_download_count"]);
                    List.Add(df);
                }

                XmlDocument top1000Xml = new XmlDocument();
                XmlNode XMLNode = top1000Xml.CreateElement("products");
                foreach (DownloadFile df in List)
                {
                    XmlNode productNode = top1000Xml.CreateElement("product");
                    XmlNode publishedNode = top1000Xml.CreateElement("published");
                    publishedNode.InnerText = "data dodania"; // Polish: "date added"
                    XMLNode.AppendChild(publishedNode);
                    XmlNode titleNode = top1000Xml.CreateElement("title");
                    titleNode.InnerText = df.Name;
                    XMLNode.AppendChild(titleNode);
                }
                top1000Xml.AppendChild(XMLNode);
                top1000Xml.Save("\\pages\\test.xml");
            }
            catch { }
            finally
            {
                dbReader.Close();
            }
        }

    When the call reaches MsSqlProvider.ExecProcedure("top1000", parm); I get: String[1]: property Size has invalid size of 0. Where should I look for the solution -- the procedure or the method?
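    "String[1]: property Size has invalid size of 0" is what ADO.NET raises when an output parameter of a variable-length type (here the nvarchar ones) is created without an explicit Size, so the fix belongs in the method rather than the procedure. A hedged sketch of the parameter setup -- note it also adds @Url, which the procedure declares but the parm array above omits:

        SqlParameter published = new SqlParameter("@Published", SqlDbType.DateTime2);
        published.Direction = ParameterDirection.Output;

        // Output strings must say how much space to allocate; match the
        // lengths declared in the stored procedure.
        SqlParameter title = new SqlParameter("@Title", SqlDbType.NVarChar, 100);
        title.Direction = ParameterDirection.Output;
        SqlParameter url = new SqlParameter("@Url", SqlDbType.NVarChar, 1000);
        url.Direction = ParameterDirection.Output;

        SqlParameter count = new SqlParameter("@Count", SqlDbType.Int);
        count.Direction = ParameterDirection.Output;

        SqlParameter[] parm = { published, title, url, count };

    Separately, SET @Title = (SELECT TOP 1000 ...) can only assign a single scalar (and errors if the subquery returns more than one row), while the reader loop expects rows. Returning one plain SELECT TOP 1000 with all four columns as a result set, and dropping the output parameters, would match how the method consumes the data.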


  • PowerShell Import DnsShell Module

    - by krousemw
    So here's the list of available modules in this directory. As you can see, DnsShell is there.

        PS C:\windows\system32> Get-Module -ListAvailable

            Directory: C:\windows\system32\WindowsPowerShell\v1.0\Modules

        ModuleType Name                               ExportedCommands
        ---------- ----                               ----------------
        Manifest   ActiveDirectory                    {Get-ADRootDSE, New-ADObject, Rename-ADObject, Move-ADObject...}
        Manifest   AppLocker                          {Set-AppLockerPolicy, Get-AppLockerPolicy, Test-AppLockerPolicy, Get-AppLo...
        Manifest   BitsTransfer                       {Add-BitsFile, Remove-BitsTransfer, Complete-BitsTransfer, Get-BitsTransfe...
        Manifest   CimCmdlets                         {Get-CimAssociatedInstance, Get-CimClass, Get-CimInstance, Get-CimSession...}
        Binary     DnsShell
        Script     ISE                                {New-IseSnippet, Import-IseSnippet, Get-IseSnippet}
        Manifest   Microsoft.PowerShell.Diagnostics   {Get-WinEvent, Get-Counter, Import-Counter, Export-Counter...}
        Manifest   Microsoft.PowerShell.Host          {Start-Transcript, Stop-Transcript}
        Manifest   Microsoft.PowerShell.Management    {Add-Content, Clear-Content, Clear-ItemProperty, Join-Path...}
        Manifest   Microsoft.PowerShell.Security      {Get-Acl, Set-Acl, Get-PfxCertificate, Get-Credential...}
        Manifest   Microsoft.PowerShell.Utility       {Format-List, Format-Custom, Format-Table, Format-Wide...}
        Manifest   Microsoft.WSMan.Management         {Disable-WSManCredSSP, Enable-WSManCredSSP, Get-WSManCredSSP, Set-WSManQui...
        Script     PSDiagnostics                      {Disable-PSTrace, Disable-PSWSManCombinedTrace, Disable-WSManTrace, Enable...
        Binary     PSScheduledJob                     {New-JobTrigger, Add-JobTrigger, Remove-JobTrigger, Get-JobTrigger...}
        Manifest   PSWorkflow                         {New-PSWorkflowExecutionOption, New-PSWorkflowSession, nwsn}
        Manifest   PSWorkflowUtility                  Invoke-AsWorkflow
        Manifest   TroubleshootingPack                {Get-TroubleshootingPack, Invoke-TroubleshootingPack}

    When I run the command to Import-Module DnsShell, I get this error and I don't know why:

        PS C:\windows\system32> Import-Module DnsShell
        Import-Module : Could not load file or assembly
        'file:///C:\windows\system32\WindowsPowerShell\v1.0\Modules\DnsShell\DnsShell.dll'
        or one of its dependencies. Operation is not supported.
        (Exception from HRESULT: 0x80131515)
        At line:1 char:1
        + Import-Module DnsShell
        + ~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : NotSpecified: (:) [Import-Module], FileLoadException
            + FullyQualifiedErrorId : System.IO.FileLoadException,Microsoft.PowerShell.Commands.ImportModuleCommand

    Note: I would have posted pictures, but I need a rep of at least 10 on Server Fault.
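    HRESULT 0x80131515 with "Operation is not supported" is the CLR refusing to load an assembly it considers to have come from the web -- typically because the downloaded DnsShell.dll still carries the NTFS Zone.Identifier ("blocked file") stream. A hedged fix; the module list shows PowerShell 3.0 features, and Unblock-File ships with 3.0 (on older hosts, use the Unblock button in the file's Properties dialog instead):

        # Clear the "downloaded from the Internet" flag on the whole module folder:
        Get-ChildItem 'C:\windows\system32\WindowsPowerShell\v1.0\Modules\DnsShell' -Recurse |
            Unblock-File

        Import-Module DnsShell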


  • Watchguard SSL Certificate problems

    - by Bill Best
    We recently purchased a Watchguard XTM 510, hoping to replace our ISA 2006 proxy with this UTM product. We are having some issues with secured sites in our test setup. Currently we still run traffic through the ISA server, and the Watchguard is also connected to the network. Where we run into problems is when I set ISA to forward an HTTPS site's location through the XTM: I get a "certificate could not be validated" error. I've narrowed it down to two possibilities. One: the certificate needs to be installed on the XTM. I'm not 100% sure about this, as I believe it should be acting strictly as a proxy and forwarding all the traffic through, no questions asked. Either way, if I try to import a certificate into the XTM I always get a "certificate validation failed" error message; these are generally PFX files converted to PEM. Two: the XTM's CA certificate needs to be installed on the ISA server so that the two can communicate. I have done this, but it didn't seem to do anything. I believe this should be working, and I'm hoping someone has struggled through this before.


  • Rebuilding array on 3ware 9690SA-8I

    - by Tim Jones
    I have a RAID 10 (8x1TB) array on a 3ware 9690 card running in an Ubuntu 11.10 server. There was a kernel update, so I scheduled a reboot, after which the array was inaccessible. I checked the status: a drive had died in the array, but the controller had thrown the entire array into an 'inoperable' state instead of simply degraded (what's the point of the RAID then? ;-). After taking out the 'dead' drive, I ran a quick test and found it completely functional, without a bad sector to be found. I tried to put the drive back in, but the array still marks the disk as degraded (remembering the serial number or something?) and the entire array as inoperable... So I swapped it out for a known working drive (not the same capacity but higher -- it should still work) and initiated a rebuild with the new drive as a replacement. This fails instantly with the error "(0x0B:0x0033): Unit busy : Failed to start Rebuild on Unit 0". The unit shouldn't be busy, as it is not mounted (the card itself is listed with lshw, but the array it provides is not). I'm pretty much at an impasse now. I don't understand how a single drive failure on a RAID 10 can make the entire array inaccessible -- degraded I could understand, but inaccessible? I don't think it's the controller, as it was completely functional prior to the reboot.


  • Unpacking a ZTE ZXV10 H108L router firmware

    - by v3ng3ful
    I ran binwalk over a firmware image for a ZTE ZXV10 H108L and got some encouraging results: a uImage/U-Boot header, LZMA compression, and a SquashFS 3.0 LZMA-compressed filesystem.

        DECIMAL   HEX       DESCRIPTION
        256       0x100     uImage header, header size: 64 bytes, header CRC: 0xE70BCBB9,
                            created: Thu Nov 10 04:54:54 2011, image size: 804172 bytes,
                            Data Address: 0x80002000, Entry Point: 0x80266000,
                            data CRC: 0x6EFE90F1, OS: Linux, CPU: MIPS,
                            image type: OS Kernel Image, compression type: lzma,
                            image name: MIPS Linux-2.6.20
        320       0x140     LZMA compressed data, properties: 0x5D,
                            dictionary size: 8388608 bytes, uncompressed size: 2637958 bytes
        851968    0xD0000   Squashfs filesystem, big endian, lzma signature, version 3.0,
                            size: 2543403 bytes, 632 inodes, blocksize: 65536 bytes,
                            created: Thu Nov 10 04:56:12 2011

    What I did next was carve several portions of the file (320 bytes to end, 851968 bytes to end, and many more) using dd, and try certain tools to uncompress/unpack the firmware's filesystem. After some digging I found that the best tool for this is the firmware-mod-kit, which understands SquashFS-LZMA v3 filesystems. But I ended up really frustrated, as unsquashfs-lzma v3 reported a cold "zlib::uncompress failed, unknown error -3". Do you have any ideas? Could it be that the firmware is corrupted on purpose to discourage attempts like this? Router file. Thanks
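    For reference, carving the pieces that scan reports usually looks like the following -- a sketch assuming the image is named firmware.bin; the offsets come from the binwalk output above, and 851968 is exactly 13 x 65536, which allows a fast block-sized dd:

        # Carve the SquashFS portion, from 0xD0000 to end of file:
        dd if=firmware.bin of=rootfs.squashfs bs=65536 skip=13

        # The kernel's LZMA stream starting at 0x140, if wanted separately:
        dd if=firmware.bin of=kernel.lzma bs=1 skip=320

        # Then try the firmware-mod-kit unpacker for SquashFS 3.x + LZMA:
        ./unsquashfs-lzma rootfs.squashfs

    Vendors frequently patch the LZMA parameters or squashfs layout, which produces exactly this kind of uncompress error without the image being deliberately corrupted. The firmware-mod-kit ships several unsquashfs builds; trying each of them (and, if available, the sasquatch tool, which brute-forces vendor squashfs variants) is the usual next step.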


  • Active directory over SSL Error 81 = ldap_connect(hLdap, NULL);

    - by Kossel
    I have spent several days trying to get AD over SSL (LDAPS) working; I followed this guide exactly. I have Active Directory Certificate Services installed (standalone root CA); I can request certs and install certs. But whenever I test the connection using LDP.exe I get this famous error:

        ld = ldap_sslinit("localhost", 636, 1);
        Error 0 = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, 3);
        Error 81 = ldap_connect(hLdap, NULL);
        Server error: <empty>
        Error <0x51>: Fail to connect to localhost.

    I have been searching; I know there are many things that can cause this error. I tried everything I could think of, then decided to post it here. I looked for errors in the system log but found nothing (though I could be wrong). Can anyone tell me what else to look at?

    UPDATE: I restarted the AD service, and the following error showed in the event viewer:

        LDAP over Secure Sockets Layer (SSL) will be unavailable at this time because
        the server was unable to obtain a certificate.
        Additional Data
        Error value: 8009030e  No credentials are available in the security package
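    Error 8009030e in that event means the DC sees a candidate certificate but cannot use its private key. The usual checklist: the certificate must sit in the computer account's Personal store (not the user's), its private key must be readable by the machine, it needs the Server Authentication EKU, and its Subject or SAN must match the DC's FQDN. A hedged external probe (host name is a placeholder):

        # From any box with OpenSSL -- a working DC returns its certificate chain here:
        openssl s_client -connect dc01.example.local:636 -showcerts

    On the DC itself, certutil -store My lists the machine certificates and shows which of them actually have an associated private key. If the cert was requested while logged in as a user, the key often lands in the user's profile store; re-requesting it through the Computer account's certificates MMC snap-in is the common fix.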


  • Teamcity build agent gives 504 gateway timeout

    - by Anthony
    I have a new TeamCity build agent machine which, when started up, tries to connect to the build server and fails. It never shows up in the connected, disconnected or unauthorized agents tabs of the build server web interface. The logs on the build agent show that it fails to connect with a 504 gateway timeout. This is from teamcity-agent.log (I have edited some identifying data out of this excerpt):

        [2012-09-04 15:34:59,776] INFO - buildServer.AGENT.registration -
          Registering on server http://10.0.10.16, AgentDetails{Name='my-local',
          AgentId=null, BuildId=null, AgentOwnAddress='10.0.1.14',
          AlternativeAddresses=[10.0.10.32], Port=8080, Version='21424',
          PluginsVersion='21424-md5-somechecksum', AvailableRunners=[ABunchOfPlugins],
          AvailableVcs=[SomeRunners], AuthorizationToken='sometoken'}
        [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration -
          Call http://10.0.10.16/RPC2 buildServer.registerAgent3:
          org.apache.xmlrpc.XmlRpcClientException: Server returned incorrect
          status code: 504 Gateway Time-out
        [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration -
          Connection to TeamCity server is probably lost. Will be trying to restore it.
          Take a look at logs/teamcity-agent.log for details (unless you're using
          custom logging).

    But I can reach the build server. In fact, tracert shows that it is very nearby:

        Tracing route to TEAMCITYSERVER [10.0.10.16]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  10.0.2.1
          2    <1 ms    <1 ms    <1 ms  TEAMCITYSERVER [10.0.10.16]

        Trace complete.

    I can see a TeamCity login page if I hit http://10.0.10.16 in the browser. The TeamCity service is logging in as the same (local administrator) account I used to log in and test the network. The build agent is a Windows 2008 server VM hosted on Ubuntu 12.04 under Oracle VirtualBox. I have disabled the firewalls on both the Windows and Ubuntu machines. Other VMs with similar configuration connect fine and do not report this error. What can possibly be preventing this connection?
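    A 504 is generated by an intermediary, not by the destination, so something between the agent's JVM and the server is proxying the XML-RPC call -- a machine-level Windows proxy, or proxy settings handed to the agent's JVM, are the usual suspects. Two hedged checks on the agent VM:

        REM Is a machine-wide WinHTTP proxy configured?
        netsh winhttp show proxy

        REM Browser-level (WinINET) proxy for the account the service runs as:
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer

    If a proxy turns up, either remove it or exempt the TeamCity server; at the JVM level that is typically a -Dhttp.nonProxyHosts=10.0.10.16 flag in the agent's launcher/environment configuration.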


  • Can't connect to MySQL using a self-signed SSL certificate

    - by carpii
    After creating a self-signed SSL certificate, I configured my remote mysqld to use it (and SSL is enabled). I ssh into the remote server and try connecting to its own mysqld using SSL (the MySQL server is 5.5.25):

        ~> mysql -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
        Enter password:
        ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1)

    OK, I remember reading there's some problem with connecting to the same server via SSL. So I download the client keys to my local box and test from there:

        ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
        Enter password:
        ERROR 2026 (HY000): SSL connection error

    It's unclear what this "SSL connection error" refers to, but if I omit --ssl-ca, then I am able to connect using SSL:

        ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 37
        Server version: 5.5.25 MySQL Community Server (GPL)

    However, I believe this only encrypts the connection and does not actually verify the validity of the cert (meaning I would be potentially vulnerable to a man-in-the-middle attack). The SSL certs are valid (albeit self-signed) and do not have a passphrase on them. So my question is: what am I doing wrong? How can I connect via SSL using a self-signed certificate? The MySQL server version is 5.5.25 and the server and clients are CentOS 5. Thanks for any advice. Edit: note that in all cases, the command is issued from the same directory where the SSL keys reside (hence no absolute path).
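    Two hedged things worth verifying, since MySQL 5.5 Community builds link against yaSSL, which is pickier than OpenSSL (the file names below assume the server-side pair is called server.cert/server.key):

        # Does each certificate actually chain to the CA file being passed?
        openssl verify -CAfile ca.cert client.cert
        openssl verify -CAfile ca.cert server.cert

        # yaSSL rejects the chain when the CA, server and client certificates
        # share the same Common Name -- compare the subjects:
        openssl x509 -in ca.cert     -noout -subject
        openssl x509 -in server.cert -noout -subject
        openssl x509 -in client.cert -noout -subject

    If the subjects match, regenerating the three certificates with distinct CNs (e.g. "CA", "server", "client") is the commonly reported fix for exactly this ERROR 2026 pattern with self-signed certs.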


  • Squid on an Azure VM

    - by LantisGaius
    I can't get it to work. Here's exactly what I did:

     1. Create a new Azure VM, Windows Server 2012.
     2. RDP to the new VM.
     3. Download and extract Squid for Windows (2.7.STABLE8).
     4. Rename the conf files (squid, mime & cachemgr).
     5. Add the following lines at the end of squid.conf:

            auth_param basic program c:/squid/libexec/ncsa_auth.exe c:/squid/etc/passwd.txt
            auth_param basic children 5
            auth_param basic realm Welcome to http://abcde.fg Squid Proxy!
            auth_param basic credentialsttl 12 hours
            auth_param basic casesensitive off
            acl ncsa_users proxy_auth REQUIRED
            http_access allow ncsa_users

     6. Use http://www.htaccesstools.com/htpasswd-generator-windows/ to create passwd.txt.
     7. Test passwd.txt via c:/squid/libexec/ncsa_auth.exe c:/squid/etc/passwd.txt (success).
     8. squid -z
     9. squid -i
    10. net start squid (no errors so far).
    11. Go to https://manage.windowsazure.com, Virtual Machines - myVM - Endpoints.
    12. Add endpoint: Name: Squid, Protocol: TCP, Public Port: 80, Private Port: 3128.

    That's it. Unfortunately, it doesn't work. I think I screwed something up at the endpoint? I'm not sure... help?

    EDIT: I'm testing it via Firefox - Options - Advanced - Network, and the exact error is "The Proxy Server is refusing connections." I'm using my DNS name "abcdef.cloudapp.net" as the proxy server, with port 80 (since that's my public endpoint).
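    "The proxy server is refusing connections" is a TCP-level refusal, which points past the Squid ACLs to the listener or the guest's own firewall. Two hedged checks on the VM:

        # 1. Is Squid really listening on 3128, on all interfaces rather than
        #    only on 127.0.0.1?
        netstat -ano | findstr 3128

        # 2. The Azure endpoint only forwards traffic -- it does not open the
        #    guest firewall, so an inbound Windows Firewall rule is still needed:
        netsh advfirewall firewall add rule name="Squid proxy" dir=in action=allow protocol=TCP localport=3128

    If the listener turns out to be bound to localhost only, an explicit http_port 3128 line (or http_port 0.0.0.0:3128) in squid.conf fixes that side.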


  • How to solve "Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed - sender domain not local in D:\\..." error?

    - by Kiran Rs
    I have a contact page where users can contact me via a form, but I'm getting this error:

        Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed -
        sender domain not local in D:\INETPUB\VHOSTS\nextoption.in\httpdocs\auto-replay\contact.php
        on line 33

    My PHP code is:

        if (isset($_POST['send'])) // if "email" is filled out, send email
        {
            // send email
            $email1 = $_POST['email'];

            $headers  = "From: My site\r\n";
            $headers .= "Reply-To: [email protected]\r\n";
            $headers .= "Return-Path: [email protected]\r\n";
            $headers .= "X-Mailer: Drupal\n";
            $headers .= 'MIME-Version: 1.0' . "\n";
            $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";

            $to      = "[email protected]";
            $subject = "Test mail";
            $message = "Hello! This is a simple email message";
            $from    = $email1;

            mail($to, $subject, $message, $headers);
            echo '<script>alert("Enquiry form submited successfully ! We\'ll get back you soon ");</script>';
        }

    What is my mistake? What is the fault on the SMTP server side?
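    A 530 "Relaying not allowed - sender domain not local" means the SMTP server only relays mail whose envelope sender belongs to a domain it hosts (or that comes from an authenticated session). Two usual fixes: make the From/Return-Path a real mailbox on your own domain (keeping the visitor's address only in Reply-To), and/or authenticate against the server. A hedged sketch with PHPMailer 5.x -- host, mailbox and password are placeholders:

        require 'class.phpmailer.php';

        $mail = new PHPMailer();
        $mail->IsSMTP();
        $mail->Host     = 'mail.example.com';   // placeholder SMTP host
        $mail->SMTPAuth = true;
        $mail->Username = 'forms@example.com';  // placeholder mailbox on your domain
        $mail->Password = 'secret';

        // Envelope sender must be local to the server; the visitor goes in Reply-To.
        $mail->SetFrom('forms@example.com', 'My site');
        $mail->AddReplyTo($_POST['email']);
        $mail->AddAddress('forms@example.com');

        $mail->Subject = 'Test mail';
        $mail->Body    = 'Hello! This is a simple email message';

        if (!$mail->Send()) {
            echo 'Mailer error: ' . $mail->ErrorInfo;
        }

    Note also that the original "From: My site" header carries no address at all; some MTAs treat a From without a local address as a relay attempt by itself.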


  • Strange permission errors in new PostgreSQL installation

    - by Bart van Heukelom
    A freshly installed PostgreSQL (with its configuration overwritten) won't start:

        $ sudo service postgresql start
         * Starting PostgreSQL 9.1 database server
         * Error: could not read /etc/postgresql/9.1/main/postgresql.conf: Permission denied

    It looks like it should be able to read it, though:

        $ ls -l postgresql.conf
        -rw------- 1 postgres postgres 19450 2012-06-14 10:07 postgresql.conf

    But fine, I'll chmod +r it to test if that works:

        $ sudo chmod +r postgresql.conf
        $ sudo service postgresql start
         * Starting PostgreSQL 9.1 database server
         * The PostgreSQL server failed to start. Please check the log output:
           Error: Could not open log file /var/log/postgresql/postgresql-9.1-main.log

    Huh?

        $ ls -l /var/log/postgresql/
        total 4
        -rw-r----- 1 postgres adm 827 2012-06-14 10:07 postgresql-9.1-main.log

    I don't get it. What can be wrong here? This used to work before. Can I maybe monitor which process attempts to open the file? It's Ubuntu 11.10 on EC2, using Chef. For completeness, here's the recipe:

        # Install PostgreSQL
        package "postgresql-9.1"

        # Stop server
        service "postgresql" do
          action :stop
        end

        # Overwrite configuration (setting data dir)
        template "/etc/postgresql/9.1/main/postgresql.conf" do
          source "postgresql-conf.erb"
          owner "postgres"
          group "postgres"
        end

        # Start server
        service "postgresql" do
          action :start
        end
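    When a file's own mode looks right but reads still fail, the denial is usually on a parent directory, or the reader is not the user you expect. Two hedged checks:

        # Show owner/mode of every path component; postgres needs execute (x)
        # on each directory on the way down:
        sudo -u postgres namei -l /etc/postgresql/9.1/main/postgresql.conf

        # And simply attempt the read as the service user:
        sudo -u postgres cat /etc/postgresql/9.1/main/postgresql.conf >/dev/null

    Separately, the template resource sets owner and group but no mode, so the file keeps whatever mode it was created with; adding an explicit mode "0644" (or "0640") to the template block removes that variable. To watch which process fails the open, strace -f -e trace=open on the start command will show the exact syscall and the effective uid.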


  • Intermittent Windows Server 2008 BSOD and restart

    - by Timka
    Our EC2 instance (Windows Server 2008) has crashed multiple times over the past 3 months (the last time was today at 1:05 EST). Upon reviewing the MEMORY.DMP file, we noticed that a possible cause of the crashes is rhelnet.sys (the Red Hat PV NIC driver). The server's Event Viewer has the following records right after the crash:

        Critical - Kernel Power: The system has rebooted without cleanly shutting down
        first. This error could be caused if the system stopped responding, crashed,
        or lost power unexpectedly.

        BugCheck: The computer has rebooted from a bugcheck. The bugcheck was:
        0x000000d1 (0x000000000000002d, 0x0000000000000002, 0x0000000000000000,
        0xfffff88001402d14). A dump was saved in: C:\Windows\MEMORY.DMP.
        Report Id: 100113-35849-01.

    Could this be a hardware issue? Would it help if we stopped and started the instance? Or is it more likely that this is caused by the software running on the system?

    [Update 10.01.2013] An Amazon rep suggested updating the RH drivers to the Citrix PV drivers on our instance: Upgrading PV Drivers.

    [Update 10.08.2013] We performed a driver upgrade on a cloned instance. Right after the upgrade we noticed Xennet6 errors in the Event Viewer (Event ID 5001). After digging a bit more I found an article suggesting installing the latest Citrix drivers. Unfortunately, this didn't help us at all, and our cloned instance became unresponsive.

    [Update 10.08.2013 2] I recreated the instance and updated the PV drivers again. After searching the Internet I found an article where an Amazon rep explains that the "Event ID 5001 from source Xennet6 cannot be found" message does not indicate anything wrong, just that the PV driver is looking for a feature they have not implemented in their version of Xen. I will keep my test system running for a while to see if there are any issues with it.


  • Varnish 3.0.2 and ISPConfig 3.0.4

    - by Warren Bullock III
    I followed the tutorial "The Perfect Server - Ubuntu 11.10 [ISPConfig 3]" here. I'm running an Ubuntu 11.04 (Natty Narwhal) server with 1024 MB RAM on Rackspace. I've gone through it and updated to ISPConfig 3.0.4. Everything had been working great up to now, when I decided to try installing Varnish. Initially I installed Varnish by issuing:

        apt-get update
        apt-get upgrade
        apt-get install varnish

    Apparently the version that was installed was Varnish 2.x, so I went back and added the repositories for packages provided by varnish-cache.org:

        curl http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -
        echo "deb http://repo.varnish-cache.org/ubuntu/ lucid varnish-3.0" >> /etc/apt/sources.list
        apt-get update
        apt-get install varnish

    This updated my version of Varnish to 3.0.2. I then proceeded to make the following changes:

        vim /etc/default/varnish
            (change DAEMON_OPTS to listen on port 80)

        vim /etc/apache2/ports.conf
            NameVirtualHost *:8000
            Listen 8000

        vim /etc/apache2/sites-available/default
            <VirtualHost *:8000>

        vim /etc/apache2/sites-available/ispconfig.vhost
            Listen 8080
            NameVirtualHost *:8080
            <VirtualHost _default_:8080>

    I then set my other vhosts to use 8000 (the Apache port), and with all this set I restarted both Apache and Varnish to test, using Firebug in Firefox 11.0. The output doesn't seem to indicate that Varnish is working completely correctly. First of all I see:

        X-Varnish 1644834493

    but I've heard that unless there are two IDs side by side, it's probably not working correctly, so for example I was expecting something like:

        X-Varnish 1644834493 1644837493

    I also noticed this in the output, which seems to be inconsistent (there are times when it says HIT as well):

        X-Drupal-Cache MISS

    So the question: I think Varnish is partially working, but why don't I see two IDs on X-Varnish as I was expecting, and does my output look correct? If Varnish isn't working, can someone tell me what I might be doing wrong? Thanks in advance.
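    For what it's worth on reading those headers: a single ID in X-Varnish means the response came from the backend (a miss or a pass); two IDs mean a cache hit, the second number being the XID of the request that originally stored the object. Drupal's session cookies and cache-control headers make many requests pass straight through a default VCL, which fits the intermittent X-Drupal-Cache MISS/HIT. A hedged way to watch what the cache is deciding (Varnish 3.x counter names):

        # Overall hit/miss counters:
        varnishstat -1 | egrep 'cache_(hit|miss)'

        # Per-request trace of VCL decisions:
        varnishlog | egrep 'VCL_call|TxHeader.*X-Varnish'

    A run of pass/hit_for_pass decisions in varnishlog would confirm that cookies, not the port wiring, are what keeps responses uncached.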


  • curl http_code of 000

    - by Mikkel Paulson
    I have a shell script that I use to monitor loading times and response codes on my live server cluster. It runs a total of 250 iterations every 5 minutes, distributed across 10 servers and 6 sites. It uses curl with the -w flag to return pertinent information, which is then parsed by my shell script:

        curl -svw 'monitor_load_times %{time_total} %{http_code}' -b 'server=$server' -m 15 -o /dev/null $url 2>&1

    This information is then parsed by a graphing script that can display a number of different responses. However, curl will occasionally return a response code of "000". When this happens, it seems to happen multiple times at once, despite being distributed over many iterations. What I'm trying to work out is whether this is a client-side issue that's skewing my results, or whether it's actually indicative of a server-side problem affecting my entire cluster. Does 000 mean that the connection was dropped? Database entries corresponding to curl iterations with that response code return "0.000" for the time_total value. All of the search results I've found for curl returning a code of 000 are related to HTTPS being unsupported, but all of my test URLs are HTTP. (The spike in 500 errors visible in my graphs is a completely unrelated issue that affected my servers last night.)
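    %{http_code} prints 000 whenever curl never received an HTTP status line at all -- DNS failure, refused/dropped connection, or the -m 15 limit firing -- so it marks a transport-level failure rather than a response. curl's own exit code says which one; a hedged sketch of logging both:

        #!/bin/sh
        # Capture the HTTP code and curl's exit status together.
        code=$(curl -s -w '%{http_code}' -b "server=$server" -m 15 -o /dev/null "$url")
        rc=$?

        if [ "$code" = "000" ]; then
            # rc=28: the 15s -m limit fired; rc=7: connection refused;
            # rc=6: could not resolve host. See the EXIT CODES section of man curl.
            echo "monitor_load_times transport_failure exit=$rc" >&2
        fi

    Seeing which rc accompanies the bursts of 000 will tell you whether the monitoring host (e.g. rc=6, local resolver trouble) or the servers themselves (rc=7) are at fault. Incidentally, the single quotes in -b 'server=$server' prevent $server from expanding; worth double-checking in the real script.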


  • How can I set IIS Application Pool recycle times without resorting to the ugly syntax of Add-WebConfiguration?

    - by ObligatoryMoniker
    I have been scripting the configuration of our IIS 7.5 instance, and through bits and pieces of other people's scripts I have come up with a syntax that I like:

        $WebAppPoolUserName = "domain\user"
        $WebAppPoolPassword = "password"
        $WebAppPoolNames = @("Test","Test2")

        ForEach ($WebAppPoolName in $WebAppPoolNames)
        {
            $WebAppPool = New-WebAppPool -Name $WebAppPoolName
            $WebAppPool.processModel.identityType = "SpecificUser"
            $WebAppPool.processModel.username = $WebAppPoolUserName
            $WebAppPool.processModel.password = $WebAppPoolPassword
            $WebAppPool.managedPipelineMode = "Classic"
            $WebAppPool.managedRuntimeVersion = "v4.0"
            $WebAppPool | Set-Item
        }

    I have seen this done a number of different ways that are less terse, and I like how this syntax of setting object properties looks compared to something like what I see on TechNet:

        Set-ItemProperty 'IIS:\AppPools\DemoPool' -Name recycling.periodicRestart.requests -Value 100000

    One thing I haven't been able to figure out, though, is how to set up recycle schedules using this syntax. This command sets applicationPoolDefaults but is ugly:

        Add-WebConfiguration system.applicationHost/applicationPools/applicationPoolDefaults/recycling/periodicRestart/schedule -Value (New-TimeSpan -h 1 -m 30)

    I have done this in the past through appcmd using something like the following, but I would really like to do all of this through PowerShell:

        %appcmd% set apppool "BusinessUserApps" /+recycling.periodicRestart.schedule.[value='01:00:00']

    I have tried:

        $WebAppPool.recycling.periodicRestart.schedule = (New-TimeSpan -h 1 -m 30)

    This has the odd effect of turning the .schedule property into a TimeSpan until I use $WebAppPool = Get-Item IIS:\AppPools\AppPoolName to refresh the variable. There is also $WebAppPool.recycling.periodicRestart.schedule.Collection, but there is no Add() function on the collection and I haven't found any other way to modify it. Does anyone know of a way I can set scheduled recycle times using syntax consistent with the code I have written above?
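    One pattern that stays close to that style is to hand the schedule to Set-ItemProperty as hashtable entries -- the WebAdministration provider expands @{value=...} items into members of the schedule collection, replacing what was there. A hedged sketch (app pool name is a placeholder, and this is a commonly cited recipe rather than something guaranteed on every 7.5 build):

        Import-Module WebAdministration

        # Two fixed recycle times; an array of hashtables becomes the new collection:
        Set-ItemProperty 'IIS:\AppPools\Test' -Name recycling.periodicRestart.schedule `
            -Value @{value='01:00:00'}, @{value='13:00:00'}

    The TimeSpan assignment misbehaves because the provider coerces the whole schedule node to the assigned value in the local object; hashtables keyed by the collection's attribute name ("value") are what the provider knows how to map back into configuration on Set.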


  • Hyper-V R2 cannot mount ISO from network location (UNC path)

    - by Entity_Razer
    As the name suggests, I'm trying to mount an ISO from a network share, using the UNC path, on a Hyper-V R2 cluster. This is a pure demo/test setup with:

        2x Hyper-V R2
        1x NAS/iSCSI CSV

    Cluster management is done through the MMC with the RSAT tools. What I've done so far: set up the cluster, configured quorum, added CSV shares and disks, and set up one virtual machine on the Hyper-1 node. What I'm trying to do is go to Settings -> DVD Drive -> Use network location, pick the ISO file, and press Apply. The error I was getting initially was "User account does not have rights to mount ISO". I stopped getting that message when I went to the Hyper-V node settings and ticked "Use default credentials automatically". Now I get the following instead:

        Error applying DVD Drive changes. Failed to remove device Microsoft Synthetic
        DVD Drive: "the specified network resource or device is no longer available"

    I've googled the problem but am unable to find a solution. Anyone here able to help me out? Much obliged!
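    For context on why this tends to fail: when the VMMS service mounts an ISO from a UNC path, the share access happens as the Hyper-V host's computer account (or, when managing remotely, as the admin over a second hop that plain Kerberos won't delegate). The usual recipe is to grant the host's computer account rights on the share, and to enable constrained delegation for the "cifs" service between the hosts when managing through RSAT. A hedged sketch of the share side -- the names are placeholders:

        REM On the file server: computer accounts need both share and NTFS read.
        net share ISOs=D:\ISOs /grant:DOMAIN\HYPERV1$,READ
        icacls D:\ISOs /grant DOMAIN\HYPERV1$:(OI)(CI)RX

    The delegation side is set in Active Directory Users and Computers on each host's computer object (Delegation tab -> trust this computer for delegation to the cifs service on the file server).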


  • Exchange 2010 DAG automatic failover testing/issue: not always automatically failing over to the healthy copy

    - by Richard
    OK, I've got two Exchange 2010 servers that run the client access/hub transport/mailbox roles, and one Exchange 2010 server running just the client access/hub transport roles, which acts as my bridgehead. The two mailbox servers run one database set up in a DAG. Server A shows the DB as Mounted and Server B shows Healthy. If I reboot Server A via the Windows GUI, Server B switches from Healthy to Mounted and I see hardly any interruption in service using Outlook 2007. Server A shows "Service down", then "Failed", then "Healthy", and leaves the DB mounted on Server B. This is how it should work; so far so good.

    Now, if I test Server A being shut down cold, or unplug both NICs to simulate a network failure, Server B switches from Healthy to Mounted and Server A switches to "Service down" -- but my Outlook client never connects to the DB mounted on Server B! I can connect to Server C (client access/hub transport) and get to my email, and even send new email out, but incoming email doesn't deliver until Server A is brought back online and its DB goes back to Healthy status.

    So I don't understand: rebooting the server with the mounted DB fails over automatically with barely an Outlook 2007 hiccup, but when I shut down or disconnect the mounted DB server, it DOES mount the healthy copy, yet Outlook 2007 clients can't connect. I hope the picture I'm trying to paint makes some sense; it's driving me a little batty. Any help would be appreciated!
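    The symptom pattern -- the database mounts on B, but clients and mail flow still chase A -- often means Outlook's MAPI endpoint is a server name rather than a CAS array, and a cold failure gives clients no graceful redirect the way a clean reboot does. Some hedged checks to run during the next test:

        # Where does Outlook actually connect for this database?
        Get-MailboxDatabase | Format-List Name,RpcClientAccessServer

        # Copy and replication state during/after the failure:
        Get-MailboxDatabaseCopyStatus * | Format-Table Name,Status,CopyQueueLength,ReplayQueueLength
        Test-ReplicationHealth

        # Is a lossy mount being held back by the mount dial?
        Get-MailboxServer | Format-List Name,AutoDatabaseMountDial,DatabaseCopyAutoActivationPolicy

    If RpcClientAccessServer names Server A, clients will keep trying the dead box after a cold failure; pointing it at a CAS array object is the standard fix for exactly this behavior.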


  • Windows Server 2008 R2 DFSR Backlog Troubleshooting - Where to look for the cause of the problem?

    - by caleban
    Our target server indicates it has hundreds of thousands of backlogged transactions. Our authoritative source server indicates it has no backlogged transactions. No replication is taking place. Tests with plain text files aren't replicating, and dfsdiag propagation tests fail to propagate. I've restarted the DFS services, restarted the servers, and created new DFS shares to test with. The authoritative source server indicates it has no backlog, and the target indicates it has a backlog (which is the files it's waiting to receive). Files don't replicate in either direction. Setup:

        2x Windows Server 2008 R2 Standard servers, one at each of two sites
        The DFSR shares are on each respective server:
            \\site_1_server_1\users
            \\site_2_server_1\users
        The sites are connected by a T1

    DFSR worked for a week. I added a new share -- another folder on the same servers -- and that replicated over a weekend but never finished. Then all replication stopped. Is Windows DFSR flaky? What tools should I use, and what should I look at, to identify what's causing this problem?
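    The places that usually explain "huge backlog one way, zero the other, nothing moving" are the DFS Replication event log and the backlog command run in both directions. Hedged examples -- the replication group, folder and member names are placeholders:

        REM Count and list the backlog in each direction:
        dfsrdiag backlog /rgname:"Users RG" /rfname:Users /smem:SITE1-SRV1 /rmem:SITE2-SRV1 /v
        dfsrdiag backlog /rgname:"Users RG" /rfname:Users /smem:SITE2-SRV1 /rmem:SITE1-SRV1 /v

        REM Recent DFSR events; 4012 (content frozen after MaxOfflineTimeInDays),
        REM staging-quota warnings, and 2212 (dirty database shutdown) are the
        REM classic culprits:
        wevtutil qe "DFS Replication" /c:30 /rd:true /f:text

    A staging-area quota too small for the initial sync of the new folder would match the history here (a big new share that never finished, then everything stalled): staging events in that log, and raising the staging quota on the replicated folder, would be the first thing to rule out.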


  • Slackware Linux: Glib can't find libffi.so.6. Where is it trying to look?

    - by Mathmagician
    I have libffi installed and it's in /usr/local/lib, yet the glib make process can't find it:

        /home/mathmagi/src/glib-2.32.4/gio/.libs/lt-glib-compile-resources: error while
        loading shared libraries: libffi.so.6: cannot open shared object file: No such
        file or directory
        (the same error repeats several times)

        make[4]: Entering directory `/home/mathmagi/src/glib-2.32.4/gio/tests'
          GEN    gdbus-test-codegen-generated.c
          GEN    test_resources.c
        /home/mathmagi/src/glib-2.32.4/gio/.libs/lt-glib-compile-resources: error while
        loading shared libraries: libffi.so.6: cannot open shared object file: No such
        file or directory
        make[4]: *** [test_resources.c] Error 127
        make[4]: Leaving directory `/home/mathmagi/src/glib-2.32.4/gio/tests'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/home/mathmagi/src/glib-2.32.4/gio'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/home/mathmagi/src/glib-2.32.4/gio'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/home/mathmagi/src/glib-2.32.4'
        make: *** [all] Error 2

    It's definitely in /usr/local/lib!

        bash-4.1# updatedb
        bash-4.1# locate libffi.so.6
        /usr/local/lib/libffi.so.6
        /usr/local/lib/libffi.so.6.0.0
        /home/mathmagi/src/libffi-3.0.11/x86_64-unknown-linux-gnu/.libs/libffi.so.6
        /home/mathmagi/src/libffi-3.0.11/x86_64-unknown-linux-gnu/.libs/libffi.so.6.0.0

    With glib I've tried:

        LDFLAGS=-L/usr/local/lib ./configure

    It doesn't work. How do I find out where glib is looking, and change it?
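    -L only affects link-time lookup; the freshly built helper binaries still need the runtime linker to find libffi.so.6, and on Slackware /usr/local/lib is not on ld.so's default search path. Hedged options, any one of which should unstick the build:

        # One-off, just for this build:
        LD_LIBRARY_PATH=/usr/local/lib make

        # Permanent: register the directory with the runtime linker cache:
        echo '/usr/local/lib' >> /etc/ld.so.conf
        /sbin/ldconfig

        # Or bake the path into the binaries at configure time:
        LDFLAGS='-L/usr/local/lib -Wl,-rpath,/usr/local/lib' ./configure

    Running ldd gio/.libs/lt-glib-compile-resources from the glib source tree shows exactly which libraries the loader resolves and which come up "not found".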


  • SQL Server: How to shrink FileStream files?

    - by J4N
    For a project I'm using SQL Server 2008 R2. One table has a FILESTREAM column. I ran some load tests, and now the database uses ~20 GB. I've since emptied the tables, except for several configuration tables, but the database was still using a lot of space. So I used Tasks -> Shrink -> Database / Files. The database is still using something like 16 GB, and I found that it's the FILESTREAM file that is still occupying the space. The problem is that I need to back up this database to export it to the final production server, and even when I tell it to compress the backup, I get a file of more than 3.5 GB -- not convenient to store and upload. And I'm planning much bigger tests, so I want to know how to reclaim that empty space. When I try to shrink the FILESTREAM file, I get this exception:

        The properties SIZE, MAXSIZE, or FILEGROWTH cannot be specified for the
        FILESTREAM data file 'FileStreamFile'. (Microsoft SQL Server, Error: 5509)

    So what should I do? I found several topics with this error, but they were about removing the FILESTREAM column entirely.
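    DBCC SHRINKFILE has no effect on FILESTREAM containers: deleted blobs stay on disk until the FILESTREAM garbage collector removes them, and under the FULL recovery model the collector waits until the log records covering the deletes have been backed up. A hedged nudge sequence -- database name and paths are placeholders:

        -- Make the deletes reclaimable, then let the lazy GC run at checkpoints.
        BACKUP LOG [LoadTestDb] TO DISK = N'D:\Backups\LoadTestDb_1.trn';
        CHECKPOINT;

        -- A second cycle is often needed before the files actually disappear:
        BACKUP LOG [LoadTestDb] TO DISK = N'D:\Backups\LoadTestDb_2.trn';
        CHECKPOINT;

    Switching the database temporarily to the SIMPLE recovery model achieves the same unblocking with less ceremony. (SQL Server 2012 later added sp_filestream_force_garbage_collection; on 2008 R2 the backup/checkpoint cycle is the available lever.)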


  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation, using this installation guide. My base cluster is installed and working properly, as is the clustered DTC service. However, when it comes time to install SQL Server, my first attempt always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt I was installing SQL Server as VSQL. After approximately 15 installation attempts and tries at resolving the errors (changing domain accounts for SQL, setting SPNs, etc.), I typoed the network name as VQSL and the installation worked. Similarly, on my current cluster I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time, until I changed the name to anything else (PROD-C1-DB1, SQL, TEST, etc.), at which point the install worked. It will even install to VSQL now. While testing, my install routine was:

     1. Run setup.exe from patched media, selecting the appropriate options.
     2. After the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single, failed node.
     3. Attempt to diagnose the problem, inspect event logs, etc.
     4. Delete the computer account that was created for the SQL service from Active Directory.
     5. Delete the MSSQL10.MSSQLSERVER folder from the shared data drive.

    The error message I receive from the SQL Server installer is:

        The following error has occurred: The cluster resource 'SQL Server' could not
        be brought online. Error: The group or resource is not in the correct state to
        perform the requested operation. (Exception from HRESULT: 0x8007139F)

    Along with hundreds of the following errors in the Application event log:

        [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818;
        message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed
        for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    System configuration notes:

        Windows Server 2008 Enterprise Edition x64
        SQL Server 2008 Enterprise Edition x64, using slipstreamed SP1+CU1 media
        Dell PowerEdge servers
        Fibre-attached storage
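    The ANONYMOUS LOGON failures are the signature of Kerberos falling over for that network name: when the SPN registered for the SQL virtual name is stale or duplicated (say, left over from a failed install whose computer account was deleted afterwards), authentication arrives anonymous and the resource never comes online. That would also fit names becoming "tainted" while any fresh name installs cleanly. Hedged checks before the next attempt -- account and names are placeholders:

        REM Any duplicate SPNs in the forest?
        setspn -X

        REM Anything still claiming the virtual name?
        setspn -Q MSSQLSvc/PROD-C1-DB*

        REM What the SQL service account currently has registered:
        setspn -L DOMAIN\sqlsvc

    Deleting the computer account (step 4 of the routine above) without first clearing SPNs registered against the service account is a common way to strand one; setspn -D removes a specific stale entry.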


  • How do you install/configure JBoss on Linux/Unix?

    - by mafro
    I'm currently working out how to install and configure multiple (30+) JBoss EAP 5 configurations (both standalone and clusters) for development, test and production at a client's site (running SuSE). I'm not too fond of the JBoss way of storing application/configuration together with system files, so I have tried to split things up (i.e. moving server config out of the JBoss installation directory). I also would like to minimize the amount of configuration needed when upgrading/patching JBoss -- but I'm not done thinking about that... It would be great to hear how you've done it and what you think about my approach. This is how my installations look (for the moment):

    Standard JBoss EAP install (minus server configs):

        /opt/jboss/jboss-eap-5.0/jboss-as
        /opt/jboss/jboss-eap-5.0/jboss-as/bin/
        /opt/jboss/jboss-eap-5.0/jboss-as/lib/
        /opt/jboss/jboss-eap-5.0/jboss-as/server/
            [server configs removed to avoid starting them by mistake]
        /opt/jboss/jboss-eap-5.0/jboss-as/.../

    Application (some JBoss folders have been omitted -- you'll get the point anyway):

        /app/<project>/                        [$app.dir -- application-specific base folder]
        /app/<project>/jboss/                  [$jboss.home]
        /app/<project>/jboss/bin/              -> /opt/jboss/jboss-eap-5.0/jboss-as/bin
        /app/<project>/jboss/lib/              -> /opt/jboss/jboss-eap-5.0/jboss-as/lib
        /app/<project>/jboss/server/<cfg>/     [project-specific config based on 'production']
        /app/<project>/jboss/server/<cfg>/log/ -> /log/<project>/<cfg>
        /app/<project>/jboss/server/<cfg>/...
        /app/<project>/jboss/.../              -> /opt/jboss/jboss-eap-5.0/jboss-as/.../
        /app/<project>/bin/                    [application-specific start/stop scripts, wrapping the JBoss-supplied ones]
        /app/<project>/deploy/                 [application deploy folder]
        /app/<project>/etc/                    [application-specific config]

    Questions:

        How do you install JBoss (on Linux/Unix systems)?
        Where do you put JBoss, and what modifications do you make?
        Where do you put your applications and application-specific files?
        Do you share JBoss instances between applications, or run one instance/cluster per application?
        How do you manage configuration changes (i.e. your modifications of the JBoss standard config)?


  • configure a Cisco ASA to use MS-CHAP v2 for RADIUS authentication

    - by DrStalker
    Cisco ASA 5505, 8.2(2); Windows 2003 AD server. We want to configure our ASA (10.1.1.1) to authenticate remote VPN users through RADIUS on the Windows AD controller (10.1.1.200). We have the following entry on the ASA:

        aaa-server SYSCON-RADIUS protocol radius
        aaa-server SYSCON-RADIUS (inside) host 10.1.1.200
         key *****
         radius-common-pw *****

    When I test a login using the account COMPANY\myusername, I see that the user's credentials are correct in the security log, but I get the following in the Windows system logs:

        User COMPANY\myusername was denied access.
        Fully-Qualified-User-Name = company.com/CorpUsers/AU/My Name
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = <not present>
        Called-Station-Identifier = <not present>
        Calling-Station-Identifier = <not present>
        Client-Friendly-Name = ASA5510
        Client-IP-Address = 10.1.1.1
        NAS-Port-Type = Virtual
        NAS-Port = 7
        Proxy-Policy-Name = Use Windows authentication for all users
        Authentication-Provider = Windows
        Authentication-Server = <undetermined>
        Policy-Name = VPN Authentication
        Authentication-Type = PAP
        EAP-Type = <undetermined>
        Reason-Code = 66
        Reason = The user attempted to use an authentication method that is not
                 enabled on the matching remote access policy.

    My assumption is that the ASA is using PAP authentication instead of MS-CHAP v2; the credentials are confirmed and the proper remote access policy is being used, but this policy is set to allow only MS-CHAP v2. What do we need to do on the ASA to make it use MS-CHAP v2? In the ASDM GUI the "Microsoft CHAP v2 compatible" tickbox is enabled, but I don't know what this corresponds to in the config.
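    On the ASA, remote-access tunnel groups send plain PAP to RADIUS unless password management is enabled; turning it on is what makes the ASA attempt MS-CHAPv2 (as far as I can tell, the ASDM "Microsoft CHAP v2 compatible" checkbox only marks the AAA server as capable of it). A hedged sketch -- the tunnel-group name is a placeholder:

        tunnel-group REMOTE-VPN general-attributes
         authentication-server-group SYSCON-RADIUS
         ! Enables MS-CHAPv2 against the AAA server (and password-expiry handling):
         password-management

    After this change, the Authentication-Type field in the Windows log should flip from PAP to MS-CHAPv2 and match what the remote access policy allows.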


  • HD Tune warning for "Reallocated Event Count" with a new/unused drive. How serious is that?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" attribute. How serious is that? The thing is, the drive is practically new: I pulled it out of a new laptop over a year ago and have never used it since. Right now it has only 53 "Power On" hours, which sounds about right, since I only had it running a few evenings overnight before switching it for something more performant. Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed, since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me, since I don't really need it: it is 12.5 mm thick (with 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I hand over the drive with a clear conscience, or should I cancel the deal? In other words, can the drive be used safely for years to come, or should I throw it away? I'm running a sector test now to see if there are any real problems, and will post the results as soon as they're available.
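    For a second opinion beyond HD Tune's single warning flag, the raw SMART values are more informative: Reallocated Event Count (attribute 196) counts remap attempts, and it matters most read together with Reallocated Sector Count (5) and the pending/uncorrectable counters (197/198). A hedged look with smartmontools -- the device name is a placeholder:

        # Raw values for the reallocation-related attributes plus the health verdict:
        smartctl -A /dev/sdb | egrep '^ *(5|196|197|198) '
        smartctl -H /dev/sdb

        # Full-surface self-test, similar in spirit to the sector scan mentioned above:
        smartctl -t long /dev/sdb

    A small, stable raw count with zeros in 197/198 is generally considered benign; a count that grows between checks is the pattern that predicts failure.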

