Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.


  • Architecture or Pattern for handling properties with custom setter/getter?

    - by Shelby115
    Current situation: I'm building a simple MVC site for keeping journals as a personal project, and I'm trying to keep the interaction between the pages and the classes simple. Where I run into trouble is the password field: my setter encrypts the password, so the getter retrieves the encrypted password.

        public class JournalBook
        {
            private IEncryptor _encryptor { get; set; }
            private String _password { get; set; }

            public Int32 id { get; set; }
            public String name { get; set; }
            public String description { get; set; }
            public String password
            {
                get { return this._password; }
                set { this.setPassword(this._password, value, value); }
            }
            public List<Journal> journals { get; set; }
            public DateTime created { get; set; }
            public DateTime lastModified { get; set; }
            public Boolean passwordProtected
            {
                get { return this.password != null && this.password != String.Empty; }
            }
            ...
        }

    I'm currently using model-binding to submit changes or create new JournalBooks (as below). The problem is that in the code below book.password is always null; I'm fairly sure this is because of the custom setter.

        [HttpPost]
        public ActionResult Create(JournalBook book)
        {
            // Create the JournalBook if not null.
            if (book != null)
                this.JournalBooks.Add(book);

            return RedirectToAction("Index");
        }

    Question(s): Should I be handling this somewhere other than the property's getter/setter? Is there a pattern or architecture that allows model-binding (or another simple method) when properties need custom getters/setters to manipulate the data? To summarize, how can I handle password storage with encryption such that:

        - The architecture is robust.
        - I don't store the password as plaintext.
        - Submitting a new or modified JournalBook is as easy as default model-binding (or close to it).
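
    A minimal sketch of one common answer, assuming hypothetical names (JournalBookForm, SetPassword) that are not in the question: bind the form to a plain DTO whose properties are all auto-implemented, then encrypt once when mapping to the domain object. Default model-binding keeps working, and the domain class never exposes a bindable plaintext property.

        // Hypothetical names: JournalBookForm and SetPassword are not from the question.
        public class JournalBookForm
        {
            // Plain auto-properties: default model binding fills these reliably.
            public Int32 id { get; set; }
            public String name { get; set; }
            public String description { get; set; }
            public String password { get; set; }   // plaintext only in transit
        }

        [HttpPost]
        public ActionResult Create(JournalBookForm form)
        {
            if (form != null)
            {
                var book = new JournalBook
                {
                    id = form.id,
                    name = form.name,
                    description = form.description
                };
                // Encrypt in exactly one place; SetPassword is assumed to wrap IEncryptor.
                book.SetPassword(form.password);
                this.JournalBooks.Add(book);
            }
            return RedirectToAction("Index");
        }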

  • AWStats is processing log files but does not display them

    - by Wouter
    I've set up AWStats on my VPS to get some more insight into the traffic coming to my site. As instructed, I ran a manual build/update, which ran fine:

        sudo -u www-data ./awstats.pl -config=xxxx.com
        Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925)
        From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"...
        Phase 1 : First bypass old records, searching new record...
        Searching new records from beginning of log file...
        Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
        Warning: awstats has detected that some hosts names were already resolved in your logfile
        /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |.
        If DNS lookup was already made by the logger (web server), you should change your setup
        DNSLookup=1 into DNSLookup=0 to increase awstats speed.
        Jumped lines in file: 0
        Parsed lines in file: 814
        Found 0 dropped records,
        Found 0 corrupted records,
        Found 0 old records,
        Found 814 new qualified records.

    It also produced the file in the DatDir, /var/lib/awstats/awstats052010.xxxx.com.txt, which contains what I would expect. BUT when I visit xxxx.com/awstats/awstats.pl it tells me:

        Last Update: Never updated (See 'Build/Update' on awstats_setup.html page)

    and the rest of the page is blank. I'm pretty sure I set it up correctly, but now I cannot figure out why this is happening. Hopefully someone smarter than me can help me. Thank you in advance.
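
    A hedged place to look, not a confirmed diagnosis: "Never updated" from the CGI usually means awstats.pl running under the web server resolves a different config or DirData than the command-line update wrote to. The directive names below are real AWStats settings; the values are this question's own paths.

        # /etc/awstats/awstats.xxxx.com.conf -- the CGI must see the same DirData
        # that the manual update populated
        LogFile="/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"
        SiteDomain="xxxx.com"
        DirData="/var/lib/awstats"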

  • 502: proxy: pass request body failed

    - by Apikot
    Sometimes I get the following error (in Apache's error.log) when viewing my site over https:

        (502)Unknown error 502: proxy: pass request body failed to xxx.xxx.xxx.xxx:443

    I'm not entirely sure what this is or why it happens, and it's not consistent. The request route is: Browser -> Proxy server (Apache with mod_proxy + mod_ssl) -> Load balancer (AWS) -> Web server (Apache with mod_ssl). The configuration on the proxy server is as follows:

        <VirtualHost *:443>
            ProxyRequests Off
            ProxyVia On

            ServerName www.xxx.co.uk
            ServerAlias xxx.co.uk

            <Directory proxy:*>
                Order deny,allow
                Allow from all
            </Directory>

            <Proxy *>
                AddDefaultCharset off
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass / balancer://cluster:443/ lbmethod=byrequests
            ProxyPassReverse / balancer://cluster:443/
            ProxyPreserveHost off
            SSLProxyEngine On

            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            SSLCertificateFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.cert
            SSLCertificateKeyFile /var/www/vhosts/xxx/ssl/www.xxx.co.uk.key

            <Proxy balancer://cluster>
                BalancerMember https://xxx.eu-west-1.elb.amazonaws.com
            </Proxy>
        </VirtualHost>

    Any idea what the issue might be?
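
    A hedged experiment rather than a confirmed fix: this error often appears when the backend drops an idle keepalive connection just as the proxy starts streaming a request body. Both directives below are real mod_proxy knobs; whether they help in this setup is an assumption to test.

        # Force a Content-Length on forwarded bodies instead of chunked streaming
        SetEnv proxy-sendcl 1
        # Keep backend connections alive across the balancer
        ProxyPass / balancer://cluster:443/ lbmethod=byrequests keepalive=On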

  • What exactly is a X-YMailISG header?

    - by iainH
    Finally ... our emails are being seen by Yahoo! as not junk anymore. Hurray! However, I notice that the Yahoo! receiving MTA adds an X-YMailISG header. It's very large ... 2**10 bits? Now that I've invested too large a chunk of my waking life in crafting our email headers, I'm curious to know what an X-YMailISG header is. Can anybody tell me? Does it pose any security / authenticity issues? There's very little intelligible in the Google results. Background: After many days tweaking TXT records in our domain's DNS zone file for SPF and DKIM, I have at last succeeded in generating email from our Drupal site that Yahoo! no longer marks as X-YahooFilteredBulk, and the excellent service [email protected] returns results showing the emails pass SPF, DKIM and Sender-ID checks and appear to SpamAssassin as ham. Yahoo! even adds a Received-SPF: pass header. Useful links: http://www.goldfisch.at/knowwiki/howtos/dkim-filter and http://old.openspf.org/wizard.html Strangely enough, the SPF TXT record needed / allowed a blank key / name field in our registrar's DNS management panel, whereas the DKIM record needed {selector}._domainkey as the key / name of the DKIM strings.
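
    For concreteness, a hedged illustration of the two kinds of TXT records described above, in zone-file form; every value is a placeholder, not this domain's actual data. The "blank" name in a registrar panel corresponds to the zone apex (the bare domain), which is why SPF wanted it and DKIM did not.

        ; SPF at the zone apex (the "blank" name in a registrar panel)
        example.com.                      IN TXT "v=spf1 a mx ip4:203.0.113.10 ~all"
        ; DKIM under {selector}._domainkey
        selector._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AQAB"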

  • error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)

    - by ArunS
    Hello there. We have an online shopping site. When I go to the checkout page I get an error like this: "error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure(35)". In the Apache error log I can see some attempts to connect to api.paypal.com. Here is the relevant part of the log:

        About to connect() to api.paypal.com port 443 (#0)
        Trying 66.211.168.123... * connected
        Connected to api.paypal.com (66.211.168.123) port 443 (#0)
        successfully set certificate verify locations:
        CAfile: none
        CApath: /etc/ssl/certs
        error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        Closing connection #0

    When I try to connect to api.paypal.com using curl I get an error like this:

        curl -iv https://api.paypal.com/
        * About to connect() to api.paypal.com port 443 (#0)
        *   Trying 66.211.168.91... connected
        * Connected to api.paypal.com (66.211.168.91) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
        * Closing connection #0
        curl: (35) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure

    Can anyone help me figure this out? Thanks in advance. Arun S
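
    A hedged reading of the trace, not a confirmed diagnosis: the server sends "Request CERT (13)" and the client answers with an apparently empty "CERT (11)", which is the classic shape of an endpoint that requires a client certificate. If this PayPal API endpoint expects your API certificate, supplying it to curl should change the outcome; the paths below are placeholders.

        curl -iv --cert /path/to/paypal-api-cert.pem --key /path/to/paypal-api-key.pem https://api.paypal.com/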

  • Watchguard SSL Certificate problems

    - by Bill Best
    We recently purchased a Watchguard XTM 510. The hope is to replace our ISA 2006 proxy with this UTM product, but we are having some issues with secured sites in our test setup. Currently we are still running traffic through the ISA server, and I have the Watchguard also connected to the network. Where we run into problems: when I configure ISA to forward an HTTPS site's location through the XTM, I get a "certificate could not be validated" error. I've narrowed it down to two possibilities. One, the certificate needs to be installed on the XTM. I'm not 100% sure this is the case, as I believe it should be acting strictly as a proxy, forwarding all traffic through, no questions asked. Either way, if I try to import a certificate to the XTM, I always get a "certificate validation failed" error message. These are generally pfx files converted to pem. Second, the XTM CA certificate needs to be installed on the ISA server so that they may communicate. I have done this, but it didn't seem to do anything. I believe this should be working and was hoping someone has struggled through this before.
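
    Since the pfx-to-pem conversion itself can be what fails validation, here is a minimal sketch of the usual openssl conversion; some importers want the certificate and the private key as separate PEM files, and an incomplete chain is a common cause of "validation failed". File names are placeholders.

        # certificate (and any chain certs) only
        openssl pkcs12 -in certificate.pfx -out certificate.pem -clcerts -nokeys
        # private key only, unencrypted
        openssl pkcs12 -in certificate.pfx -out private-key.pem -nocerts -nodes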

  • Cannot create a project TFS 2012 - TF218027

    - by GrandMasterFlush
    I've just installed TFS 2012 and am trying to create a new project in the default collection via Visual Studio 2012, but I keep getting this error message:

        TF218027: The following reporting folder could not be created on the server that is
        running SQL Server Reporting Services: /TfsReports/DefaultCollection. The report
        server is located at: http://<servername>/Reports. The error is: The permissions
        granted to user '<domain>/grandmasterflush' are insufficient for performing this
        operation. Verify that the path is correct and that you have sufficient permissions
        to create the folder on that server and then try again.

    I've checked the permissions: my user is a member of the Project Collection Administrators group, and that group has the 'Create new project' permission set to Allow. The only thing I can think of is that the user I created during installation for SharePoint access and report viewing does not have permission to write to the reports folder; however, if I select "Do not configure a SharePoint site at this time" I still get the error message. I can't find the reports folder to check its permissions either. TFS is using an instance of SQL 2012 that was already on the machine when TFS was installed. Can anyone see what I'm doing wrong?

  • SD Card reader not working on Sony Vaio

    - by TessellatingHeckler
    This laptop (Sony Vaio VGN-Z31MN/B PCG-6z2m) has been installed with Windows 7 64-bit, all the drivers from Sony's VAIO site are installed, and everything in Device Manager both (a) has a driver and (b) shows as working, with no exclamation marks or warnings. "Hide empty drives" in Folder Options is disabled, so the card reader appears, but it will not read the card ("please insert a disk in drive O:"). Previously, when the laptop had Windows XP on it, it could read the same card. Also, the driver Windows Update suggests ("SD Card Reader") doesn't work; Ricoh's own drivers install properly but behave the same way. Other third-party driver suggestions from forums (Acer and Texas Instruments FlashMedia) do not seem to install properly. I would post the PCI ID if I had it, but it was just showing up as rimsptsk\diskricohmemorystickstorage (while it had the Ricoh driver installed). Edit: If there are any lower-level diagnostic utilities which might shed more light on it, I'd welcome hearing of them: anything which might get it to put troubleshooting logs in the event log, or identify chipsets, or whatever. Update: Device details are:

        SD\VID_03&OID_5344&PID_SD04G&REV_8.0\5&4617BC3&0&0 : SD Memory Card
        PCI\VEN_8086&DEV_2934&SUBSYS_9025104D&REV_03\3&21436425&0&E8 : Intel(R) ICH9 Family USB Universal Host Controller - 2934
        PCI\VEN_1180&DEV_0476&SUBSYS_9025104D&REV_BA\4&1BD7BFCD&0&20F0 : Ricoh R/RL/5C476(II) or Compatible CardBus Controller
        RIMSPTSK\DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00\MS0001 : SD Storage Card
        PCI\VEN_1180&DEV_0592&SUBSYS_9025104D&REV_11\4&1BD7BFCD&0&24F0 : Ricoh Memory Stick Host Controller
        WPDBUSENUMROOT\UMB\2&37C186B&1&STORAGE#VOLUME#_??_RIMSPTSK#DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00#MS0001# : O:\
        STORAGE\VOLUME\{C82A81B8-5A4F-11E0-AACC-806E6F6E6963}#0000000000100000 : Generic volume
        PCI\VEN_1180&DEV_0822&SUBSYS_9025104D&REV_21\4&1BD7BFCD&0&22F0 : SDA Standard Compliant SD Host Controller
        ROOT\LEGACY_FVEVOL\0000 : Bitlocker Drive Encryption Filter Driver
        PCI\VEN_1180&DEV_0832&SUBSYS_9025104D&REV_04\4&1BD7BFCD&0&21F0 : Ricoh 1394 OHCI Compliant Host Controller

    Now going to search for drivers for that.

  • How to disable monitor auto detection in Windows 7?

    - by Jay Yother
    I am currently running Windows 7 Ultimate 64-bit with a dual monitor setup with an NVIDIA 7950 GT graphics card. One monitor is dedicated to this machine and the other monitor is connected to a DVI KVM switch. When I switch to my other computer, Windows 7 disables the monitor. However, when I switch back it does not re-enable the monitor. The only circumstance that automatically re-enables the second monitor is when I switch back after Windows has put the monitors into power save mode. I am continually having to bring up the NVIDIA control panel to have it re-enable the monitor. Under Windows XP I would just disable the NVIDIA service to prevent it from auto-detecting the monitor (which doesn't solve the problem under Win7), and in Vista there was a registry hack that would prevent this. It looks as though that has been removed in Windows 7. I have found similar questions posted on this site, but nothing that matches my problem exactly. The following link is the question that comes the closest, but does not provide a solution to the problem. http://superuser.com/questions/96683/how-to-fix-monitor-detection-on-windows-7 Is there a way in Windows 7 to disable monitor auto-detection?

  • Setting up PerformancePoint Services on Sharepoint 2010: connection errors

    - by Rik
    I have tried to set up PerformancePoint Services on SharePoint 2010, but every time I try to use the Dashboard Designer I get this error: "An error has occurred attempting to contact the specified SharePoint site". I have tried these steps but it hasn't helped. Any ideas? The event log gives the following information:

        WebHost failed to process a request.
        Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/24724999
        Exception: System.ServiceModel.ServiceActivationException: The service '/_vti_bin/client.svc'
        cannot be activated due to an exception during compilation. The exception message is:
        This collection already contains an address with scheme http. There can be at most one
        address per scheme in this collection. Parameter name: item.
        ---> System.ArgumentException: This collection already contains an address with scheme http.
        There can be at most one address per scheme in this collection. Parameter name: item
           at System.ServiceModel.UriSchemeKeyedCollection.InsertItem(Int32 index, Uri item)
           at System.Collections.Generic.SynchronizedCollection`1.Add(T item)
           at System.ServiceModel.UriSchemeKeyedCollection..ctor(Uri[] addresses)
           at System.ServiceModel.ServiceHost..ctor(Type serviceType, Uri[] baseAddresses)
           at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(Type serviceType, Uri[] baseAddresses)
           at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.CreateService(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
           --- End of inner exception stack trace ---
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)
        Process Name: w3wp
        Process ID: 2576
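
    A hedged pointer, not a verified fix for this SharePoint case: "at most one address per scheme" is WCF's standard complaint when the IIS web application has two bindings with the same scheme (for example two http host headers), so checking the site's IIS bindings first may be enough. The usual WCF-side workaround is a baseAddressPrefixFilters entry, shown here purely as an illustration; whether editing the SharePoint web.config is appropriate is an assumption to verify.

        <!-- illustrative only; the prefix value is a placeholder -->
        <system.serviceModel>
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://yourserver:80" />
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
        </system.serviceModel>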

  • Supermicro BIOS recovery - SUPER.ROM

    - by Goyuix
    I have a Supermicro X9SCL+-F motherboard that I flashed a beta BIOS to; then the flash went bad when I tried to flash back to the latest stable version. I am attempting to recover using their SUPER.ROM recovery from a flash drive, without success. I read in the manual that if I hold down Ctrl+Home while powering on the server, I can do a BIOS recovery from a flash drive. I hold down those keys, hear the desired two beeps, and can see the activity LED on the flash drive light up. Unfortunately, instead of the monitor turning on and allowing a BIOS recovery as the manual indicates, I hear five beeps, followed shortly after by three beeps. I grabbed the latest BIOS from their site (x9scm2.508.zip), extracted it to my flash drive and renamed it to SUPER.ROM. Their instructions are not clear on whether any ROM can serve as the SUPER.ROM file, or whether I need a special SUPER.ROM file to initiate the recovery, at which time I can supply a known-good ROM. Does anyone have any expertise in ROM recovery for Supermicro boards? Am I missing some key step? Can any known-good ROM file function as the SUPER.ROM file for recovery?

  • Port forwarding on Fortigate 50B

    - by sindre j
    I have serious problems setting up port forwarding on a Fortigate 50B. The unit is basically running as factory default: the wan1 interface is connected to my fibre-optic internet modem, and my LAN is connected to the internal switch of the Fortigate. The factory-default firewall policy allowing traffic from the internal interface to wan1 is kept, and I'm able to access the internet as normal. Then I added a virtual IP and a firewall policy to allow access from the internet to the webserver (standard port 80) on my local server (IP 192.168.9.51). The settings I made are as follows:

        Edit Virtual IP Mapping
            Name                      : Server VIP
            External interface        : wan1
            Type                      : Static NAT
            External IP Address/Range : 0.0.0.0
            Mapped IP Address/Range   : 192.168.9.51
            Port Forwarding           : not checked

        Firewall policy
            Source interface/Zone      : wan1
            Source address             : all
            Destination interface/Zone : internal
            Destination address        : Server VIP
            Schedule                   : always
            Service                    : HTTP
            Action                     : ACCEPT
            (no other settings checked)

    What happens now is that I'm unable to access the internet from my server, and I'm not getting through to the webserver from the internet either. I'm able to ping a site on the outside, but all web traffic is blocked, both ways. I've checked the documentation, but as far as I can tell I have set this up correctly. Anyone here with knowledge of Fortigate port forwarding/NAT?
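
    For comparison, a hedged CLI sketch of the same VIP; exact field names vary by FortiOS version, so treat this as illustrative only. One assumption worth checking: a static-NAT VIP normally wants the unit's real public (wan1) address as extip rather than 0.0.0.0.

        config firewall vip
            edit "Server_VIP"
                set extintf "wan1"
                set extip 203.0.113.20      # placeholder: the real wan1/public IP
                set mappedip 192.168.9.51
            next
        end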

  • How to solve "Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed - sender domain not local in D:\..." error?

    - by Kiran Rs
    I have a contact page where users can contact me via a form, but I'm getting this error:

        Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed -
        sender domain not local in D:\INETPUB\VHOSTS\nextoption.in\httpdocs\auto-replay\contact.php on line 33

    My PHP code is:

        if (isset($_POST['send'])) // if the form was submitted, send the email
        {
            $email1 = $_POST['email'];

            $headers  = "From: My site\r\n";
            $headers .= "Reply-To: [email protected]\r\n";
            $headers .= "Return-Path: [email protected]\r\n";
            $headers .= "X-Mailer: Drupal\n";
            $headers .= 'MIME-Version: 1.0' . "\n";
            $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";

            $to      = "[email protected]";
            $subject = "Test mail";
            $message = "Hello! This is a simple email message.";
            $from    = $email1;

            mail($to, $subject, $message, $headers);
            echo '<script>alert("Enquiry form submitted successfully! We\'ll get back to you soon.");</script>';
        }

    What is my mistake? What is at fault on the SMTP server?
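
    A hedged guess at the usual cause rather than a definitive answer: "sender domain not local" means the SMTP server only relays mail whose sender belongs to a domain it hosts. Setting From/Return-Path to an address at your own domain, and carrying the visitor's address in Reply-To, is the common remedy; "[email protected]" below is a placeholder for that local address.

        $headers  = "From: My site <[email protected]>\r\n";   // a mailbox the server hosts
        $headers .= "Reply-To: " . $email1 . "\r\n";        // the visitor's address
        $headers .= "Return-Path: [email protected]\r\n";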

  • Mounting an Azure blob container in a Linux VM Role

    - by djechelon
    I previously asked a question about this topic, but now I prefer to rewrite it from scratch because I was very confused back then. I currently have a Linux XS VM Role in Azure. I basically want to create a self-managed hosting service using VMs rather than Azure's more expensive Web Roles. I also want to take advantage of load balancing (between VM Roles) and geo-replication (of Storage Roles), making sure that the "web files" of customers are located in a defined and manageable place. One way I found to "mount" a drive in a Linux VM is described here and involves mounting a VHD onto the virtual machine. From what I could learn, the VHD is reliably stored in a storage role and is exclusively locked by the VM that uses it. Once the VM Role has its drive, I can format the partition to any size I want. I don't want that!! I would like each hosted site to have its own blob directory, and then each replicated/load-balanced VM Role to mount that blob directory read-write, NFS-style, to read HTML and script files. The database is obviously courtesy of Microsoft :) My questions: Is it possible to actually mount blob storage onto a directory in the Linux FS? Is it possible in Windows Server 2008?

  • Configure IIS7.5 to allow calls to asmx web services.

    - by goodeye
    Hi, I migrated a site from IIS 6 to IIS 7.5 on Windows Server 2008 R2. It has an asmx web service, which works fine locally but returns this 500 error when called from another machine:

        Request format is unrecognized for URL unexpectedly ending in /myMethodName

    The solution in previous versions is to add this to the web.config for the protocols needed (typically omitting HttpGet for production):

        <system.web>
          <webServices>
            <protocols>
              <add name="HttpGet" />
              <add name="HttpPost" />
              <add name="HttpSoap" />
            </protocols>
          </webServices>
        </system.web>

    This is posted everywhere, including http://stackoverflow.com/questions/657313/request-format-is-unrecognized-for-url-unexpectedly-ending-in For IIS 7.5, this throws a configuration error; I understand this section doesn't belong, but I tried it anyway. I also boiled the asmx call down to a simple hello world, and I tested with POST as well, just to eliminate any issues with GET. What is the equivalent for IIS 7.5? Either the web.config format or the UI button to push would be really helpful. Thanks, Bob
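
    A hedged observation rather than an authoritative answer: the protocols section is still understood by ASP.NET under IIS 7.5, but only when nested in the right place, so a configuration error often just means it ended up under the wrong parent element (for example inside system.webServer) or duplicated an existing entry. A minimal well-formed placement looks like this:

        <configuration>
          <system.web>
            <webServices>
              <protocols>
                <add name="HttpPost" />
                <add name="HttpSoap" />
              </protocols>
            </webServices>
          </system.web>
        </configuration>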

  • procdump on w3wp.exe: Only part of a ReadProcessMemory or WriteProcessMemory request was completed

    - by JakeS
    I'm having a problem with an IIS application that occasionally spikes in CPU usage, and I am trying to use procdump to get a memory dump for examination. I'm running:

        procdump.exe -64 -mA 9999

    where 9999 is the PID of the process. But every time I do it, I get an error:

        Only part of a ReadProcessMemory or WriteProcessMemory request was completed.

    Doing this also recycles the app pool, relieving the CPU spike, so I can't keep trying until I get it right. Does anyone know what is going wrong? EDIT WITH MORE INFO: So far I've failed to generate a debug dump no matter what tool I try; all of them produce the same sort of error. This is 2008 R2 Datacenter running IIS7 with a 64-bit ASP.NET web site. My best guess is that something is getting blocked, causing some requests to remain open in IIS and gradually use up resources. If I monitor the worker process using IIS Manager and view all requests, throughout the day I'll start to see some requests that "stick" and run forever. Some of these are for static files, some for aspx pages, and I cannot see any common reason for them. Every once in a while the app pool starts taking up 100% CPU and the only remedy is to kill it.
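
    A hedged experiment, not a guaranteed fix: newer procdump releases have a -r switch that dumps from a clone of the process, which can sidestep partial ReadProcessMemory failures on a live, busy w3wp. Whether your installed version supports it is an assumption to check with procdump -? first; the dump file name is a placeholder.

        procdump.exe -64 -r -ma 9999 w3wp_spike.dmp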

  • Updating Windows DNS records from a remote windows DNS server

    - by Luckyboy
    Does anyone know if it is possible for a Windows 2003 DNS server to update its records for a domain so that it contains all the records of that domain from a remote DNS server? I'm almost certain that doesn't quite explain the problem, so I shall illustrate with an example: We have two offices, based about 100 miles apart. One deals with IT (intranet development etc.) while the other is a call centre that uses the intranet systems. Currently each office has its own DNS server, with the IT office's and the call centre's DNS servers both containing entries for the intranet sites. The difference is that the IT DNS server's records point to the various servers that host the intranet sites (e.g. intranetsite1 - 192.168.1.10, intranetsite2 - 192.168.1.11), while all of the entries in the call centre's DNS point to the IT office's DNS server (intranetsite1 - [it office ip address], intranetsite2 - [it office ip address]). Is there any way the call centre's DNS server could automatically pick up all the DNS records hosted by the IT office's DNS, translating the IP addresses to the IP address of the IT office?
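
    A hedged sketch of the standard mechanism, if plain replication (without the address translation) would do: host the zone as a secondary on the call centre's server and let zone transfers keep it current. The zone name and master IP below are placeholders.

        rem on the call centre DNS server: pull the zone from the IT office server
        dnscmd /ZoneAdd intranet.example.local /Secondary 192.168.1.5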

  • Problem with Email Notifications in VisualSVN Server

    - by emzero
    Hey guys! I have a dedicated server running Windows 2003 Server and VisualSVN Server 2.0.8, and I'm trying to configure it to send email notifications on commit. I found this article on the VisualSVN site, which says I have to edit the post-commit hook and set it to the following:

        "%VISUALSVN_SERVER%\bin\VisualSVNServerHooks.exe" ^
             commit-notification "%1" -r %2 ^
             --from <from-email> --to <to-email> ^
             --smtp-server <smtp-server>

    Of course, I've replaced the variables there. The problem is that when someone commits something, the svn client throws the following error:

        post-commit hook failed (exit code 1) with no output.

    The commit itself runs with no problems; I mean, it does commit the files. But it won't send any email notification. If I remove the post-commit hook, I don't get the error (and of course I don't get any notification). Could you help me out with this? The error doesn't tell me much =S Thank you!
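
    Since the client reports "no output", a hedged first debugging step is to capture the hook's own stdout/stderr in a file; the log path and addresses below are placeholders for the real values.

        "%VISUALSVN_SERVER%\bin\VisualSVNServerHooks.exe" ^
             commit-notification "%1" -r %2 ^
             --from [email protected] --to [email protected] ^
             --smtp-server smtp.example.com > C:\svn-hook.log 2>&1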

  • HP DL160G6 bios update fails

    - by Bojo
    I tried to update the BIOS on my HP DL160 G6 servers. Unfortunately the Windows-based update downgraded my BIOS version from 243 to 237, and when I try to upgrade both servers to version 245 the update fails. First I got a warning like:

        CMOS Layout difference between System ROM and ROM file has detected.
        AFU recommand (sic) adding /C commands of your original input commands.
        Press "A" to accept AFU's recommendation.
        Press "F" to keep original input commands.

    and the update did not do anything. But now I get a message like:

        Reading flash ........ done
        Bootblock checksum ... ok
        Module checksums ..... bad
        Error: BIOS checksum error

    and the update stops. I tried some commands from this page: http://www.ami.com/support/downloads/txt/AFU_README.TXT but I didn't try too much; the servers are still booting. Does anyone know how to update my servers to BIOS version 245? I used this version: http://h20565.www2.hp.com/portal/site/hpsc/template.PAGE/public/psi/swdDetails/?sp4ts.oid=3884344&spf_p.tpst=swdMain&spf_p.prp_swdMain=wsrp-navigationalState%3Didx%253D%257CswItem%253DMTX_7bd12651ab954fdcb0d7ee164a%257CswEnvOID%253D54%257CitemLocale%253D%257CswLang%253D%257Cmode%253D%257Caction%253DdriverDocument&javax.portlet.begCacheTok=com.vignette.cachetoken&javax.portlet.endCacheTok=com.vignette.cachetoken and created a bootable USB stick with HPQUSB.exe.

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one HTML and one aspx). I thought I had to add IUSR and NetworkService (for application pools) to C:\test and grant those users appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all, as I can view any file in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and appropriate rights granted. I added the following to answer the comments: I am referencing the file using a public IP, http://xxx.xxx.xxx.xxx/test/one.html, and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all; I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to C:\test on the server (Creator Owner, System, Administrators, Users), and the app pool is running under the default NetworkService account. I basically installed Win2008, added the IIS role with ASP.NET, then opened IIS7, added a virtual directory and copied two files to the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?
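
    A hedged explanation consistent with the ACL listed above: the inherited Users entry on C:\test is broader than it looks, since BUILTIN\Users contains Authenticated Users, which covers the NetworkService app-pool identity (and, in many default setups, the anonymous request identity as well). Listing the effective ACL makes this visible:

        icacls C:\test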

  • Mod Rewrite Help - Pseudo-Subdirectories

    - by Gimpyfuzznut
    I am dealing with a frustrating Joomla problem that is going to require some URL trickery. The idea is straightforward, but after reading a bunch of mod_rewrite guides I still can't get it to work. Let's say my site is www.mysite.com. Joomla is already rewriting SEF URLs, so I have links like www.mysite.com/home, www.mysite.com/news and so on. I want to have four pseudo-subdirectories, www.mysite.com/mode1/ through www.mysite.com/mode4/. These subdirectories should work as if the subdirectory isn't there, i.e. both www.mysite.com/mode1/home and www.mysite.com/mode2/home should pull up the same www.mysite.com/home, and www.mysite.com/mode1/anypagehere should point to www.mysite.com/anypagehere. The reason I am asking is that I will be reading the URL for mode1, mode2, etc. to modify the template page. A landing page will direct people to /mode1/, /mode2/, etc., and the template will change based on that. Note that I don't want to pass an actual GET parameter in the URL, because Joomla removes it (perhaps because of my current mod_rewrite settings). I've pasted the current .htaccess file:

        RewriteBase /joomla

        ########## Rewrite rules to block out some common exploits
        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        # Block out any script trying to base64_encode crap to send via URL
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        # Block out any script that includes a <script> tag in URL
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Send all blocked requests to the homepage with a 403 Forbidden error!
        RewriteRule ^(.*)$ index.php [F,L]

        ########## Begin - Joomla! core SEF Section
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php
        #RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        ########## End - Joomla! core SEF Section
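
    A hedged sketch of one way to get the pseudo-subdirectories, placed before the Joomla SEF section: strip the mode prefix, remember it in an environment variable, and let the existing SEF rules handle the rewritten URL. After the internal redirect Apache exposes the variable with a REDIRECT_ prefix, so the template would check both $_SERVER['SITEMODE'] and $_SERVER['REDIRECT_SITEMODE']; SITEMODE is an illustrative name.

        # Strip /mode1/../mode4/ and stash which one was used
        RewriteRule ^(mode[1-4])/(.*)$ $2 [E=SITEMODE:$1,QSA,L]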

  • WebSphere MQ running under local account / group cannot read group memberships for Active Directory user. Workaround or alternative resolution?

    - by noahz
    I am developing an application that uses WebSphere MQ v6.0, and MQ is currently not working due to the following issue: the WebSphere MQ service runs under the local user "MUSR_MQADMIN" in the local group "mqm". I attempt to use the service with my own account, BIZ\noahz. MUSR_MQADMIN needs to check whether BIZ\noahz is in the local group "mqm", but MUSR_MQADMIN does not have permission to read the Active Directory group membership of BIZ\noahz. The following error appears in the MQ log file:

        ----- amqzfubn.c : 3582 -------------------------------------------------------
        1/31/2011 18:51:32 - Process(704.1105) User(MUSR_MQADMIN) Program(amqzlaa0.exe)
        AMQ8079: Access was denied when attempting to retrieve group membership
        information for user 'noahz@biz'.

        EXPLANATION:
        WebSphere MQ, running with the authority of user 'musr_mqadmin@noahz-biz', was
        unable to retrieve group membership information for the specified user.
        ACTION:
        Ensure Active Directory access permissions allow user 'musr_mqadmin@noahz-biz'
        to read group memberships for user 'noahz@biz'. To retrieve group membership
        information for a domain user, MQ must run with the authority of a domain user.
        ----- amqzfubn.c : 3582 -------------------------------------------------------

    There is more information on this here on IBM's web site: http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=/com.ibm.mq.amqtac.doc/wq10830_.htm I don't have Active Directory admin rights for my Windows machine, so my question is: is there anything else I can do to resolve (or work around) this issue and get WebSphere MQ working for me again? For example, can I disable this security check in WebSphere MQ?

  • How do you install/configure JBoss on Linux/Unix?

    - by mafro
    I'm currently working out how to install and configure multiple (30+) JBoss EAP 5 configurations (both standalone and clustered) for development, test and production at a client's site (running SuSE). I'm not too keen on the JBoss way of storing application/configuration together with system files, so I have tried to split things up (i.e. moving server config out of the JBoss installation directory). I also want to minimize the amount of configuration needed when upgrading/patching JBoss, but I'm not done thinking about that... It would be great to hear how you've done it and what you think about my approach. This is how my installations look (for the moment). Standard JBoss EAP install (minus server configs):

        /opt/jboss/jboss-eap-5.0/jboss-as
        /opt/jboss/jboss-eap-5.0/jboss-as/bin/
        /opt/jboss/jboss-eap-5.0/jboss-as/lib/
        /opt/jboss/jboss-eap-5.0/jboss-as/server/   [server configs removed to avoid starting them by mistake]
        /opt/jboss/jboss-eap-5.0/jboss-as/.../

    Application (some JBoss folders have been omitted; you'll get the point anyway):

        /app/<project>/                         [$app.dir - application-specific base folder]
        /app/<project>/jboss/                   [$jboss.home]
        /app/<project>/jboss/bin/  -> /opt/jboss/jboss-eap-5.0/jboss-as/bin
        /app/<project>/jboss/lib/  -> /opt/jboss/jboss-eap-5.0/jboss-as/lib
        /app/<project>/jboss/server/<cfg>/      [project-specific config based on 'production']
        /app/<project>/jboss/server/<cfg>/log/ -> /log/<project>/<cfg>
        /app/<project>/jboss/server/<cfg>/...
        /app/<project>/jboss/.../  -> /opt/jboss/jboss-eap-5.0/jboss-as/.../
        /app/<project>/bin/                     [application-specific scripts for start/stop etc.; wraps JBoss-supplied scripts]
        /app/<project>/deploy/                  [application deploy folder]
        /app/<project>/etc/                     [application-specific config]

    Questions:

        - How do you install JBoss (on Linux/Unix systems)? Where do you put JBoss and what modifications do you make?
        - Where do you put your applications and application-specific files?
        - Do you share JBoss instances between applications, or run one instance/cluster per application?
        - How do you manage configuration changes (i.e. your modifications of the standard JBoss config)?
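
    A hedged sketch of how the symlinked layout above could be wired up for one project; "myproject" and "production" are placeholders.

        # shared, read-only EAP install; per-project server dir and logs live elsewhere
        ln -s /opt/jboss/jboss-eap-5.0/jboss-as/bin /app/myproject/jboss/bin
        ln -s /opt/jboss/jboss-eap-5.0/jboss-as/lib /app/myproject/jboss/lib
        mkdir -p /app/myproject/jboss/server/production /log/myproject/production
        ln -s /log/myproject/production /app/myproject/jboss/server/production/log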

  • Unexpected start of already-primary server processes when heartbeat on secondary is stopped.

    - by vorik
    Hi, I've got an active-passive Heartbeat cluster with Apache, MySQL, ActiveMQ and DRBD. Today I wanted to perform hardware maintenance on the secondary node (node04), so I stopped the heartbeat service before shutting it down. Then the primary node (node03) received a shutdown notice from the secondary node (node04). This logging comes from the primary node (node03):

        heartbeat[4458]: 2010/03/08_08:52:56 info: Received shutdown notice from 'node04.companydomain.nl'.
        heartbeat[4458]: 2010/03/08_08:52:56 info: Resources being acquired from node04.companydomain.nl.
        harc[27522]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/status status
        heartbeat[27523]: 2010/03/08_08:52:56 info: Local Resource acquisition completed.
        mach_down[27567]: 2010/03/08_08:52:56 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
        mach_down[27567]: 2010/03/08_08:52:56 info: mach_down takeover complete for node node04.companydomain.nl.
        heartbeat[4458]: 2010/03/08_08:52:56 info: mach_down takeover complete.
        harc[27620]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/ip-request-resp ip-request-resp
        ip-request-resp[27620]: 2010/03/08_08:52:56 received ip-request-resp drbddisk OK yes
        ResourceManager[27645]: 2010/03/08_08:52:56 info: Acquiring resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
        ResourceManager[27645]: 2010/03/08_08:52:56 info: Running /etc/ha.d/resource.d/drbddisk start
        Filesystem[27700]: 2010/03/08_08:52:57 INFO: Running OK
        ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/mysql start
        mysql[27783]: 2010/03/08_08:52:57 Starting MySQL[ OK ]
        apache[27853]: 2010/03/08_08:52:57 INFO: Running OK
        ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/monitor start
        monitor[28160]: 2010/03/08_08:52:58
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq start
        activemq[28210]: 2010/03/08_08:52:58 Starting ActiveMQ Broker... ActiveMQ Broker is already running.
        ResourceManager[27645]: 2010/03/08_08:52:58 ERROR: Return code 1 from /etc/ha.d/resource.d/activemq
        ResourceManager[27645]: 2010/03/08_08:52:58 CRIT: Giving up resources due to failure of activemq
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Releasing resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/IPaddr 1.2.3.212 stop
        IPaddr[28329]: 2010/03/08_08:52:58 INFO: ifconfig eth0:0 down
        IPaddr[28312]: 2010/03/08_08:52:58 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
        MailTo[28378]: 2010/03/08_08:52:58 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop
        MailTo[28433]: 2010/03/08_08:52:58 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/tivoli-cluster stop
        ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq stop
        activemq[28503]: 2010/03/08_08:53:01 Stopping ActiveMQ Broker... Stopped ActiveMQ Broker.
        ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/monitor stop
        monitor[28681]: 2010/03/08_08:53:01
        ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/LVSSyncDaemonSwap master stop
        LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster down
        LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncbackup up
        LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster released
        ResourceManager[27645]: 2010/03/08_08:53:02 info: Running /etc/ha.d/resource.d/apache /etc/httpd/conf/httpd.conf stop
        apache[28782]: 2010/03/08_08:53:03 INFO: Killing apache PID 18390
        apache[28782]: 2010/03/08_08:53:03 INFO: apache stopped.
        apache[28771]: 2010/03/08_08:53:03 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:53:03 info: Running /etc/ha.d/resource.d/mysql stop
        mysql[28851]: 2010/03/08_08:53:24 Shutting down MySQL.....................[ OK ]
        ResourceManager[27645]: 2010/03/08_08:53:24 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd0 /data ext3 stop
        Filesystem[29010]: 2010/03/08_08:53:25 INFO: Running stop for /dev/drbd0 on /data
        Filesystem[29010]: 2010/03/08_08:53:25 INFO: Trying to unmount /data
        Filesystem[29010]: 2010/03/08_08:53:25 ERROR: Couldn't unmount /data; trying cleanup with SIGTERM
        Filesystem[29010]: 2010/03/08_08:53:25 INFO: Some processes on /data were signalled
        Filesystem[29010]: 2010/03/08_08:53:27 INFO: unmounted /data successfully
        Filesystem[28999]: 2010/03/08_08:53:27 INFO: Success
        ResourceManager[27645]: 2010/03/08_08:53:27 info: Running /etc/ha.d/resource.d/drbddisk stop
        heartbeat[4458]: 2010/03/08_08:53:29 WARN: node node04.companydomain.nl: is dead
        heartbeat[4458]: 2010/03/08_08:53:29 info: Dead node node04.companydomain.nl gave up resources.
        heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth0 dead.
        heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth1 dead.
        hb_standby[29193]: 2010/03/08_08:53:57 Going standby [foreign].
        heartbeat[4458]: 2010/03/08_08:53:57 info: node03.companydomain.nl wants to go standby [foreign]

    Soo... what just happened here??? Heartbeat on node04 stopped and told node03, which was the active node at the time. Somehow, node03 decided to start the cluster processes that were already running. (For the processes that are not critical, I always return 0 from the startup script so a failure of a non-essential part does not stop the entire cluster.) When starting ActiveMQ, it returns status 1 because it is already running. This fails the node and shuts everything down. As heartbeat is no longer running on the secondary node, it cannot fail over to there. When I tried to run hb_takeover to restart the resources, absolutely nothing happened. Only after I restarted heartbeat on the primary node could the resources be started (after a delay of 2 minutes). These are my questions:

        - Why does heartbeat on the primary node try to start the cluster processes again?
        - Why did hb_takeover not work?
        - What can I do to prevent this from happening?

    Server configuration:

        DRBD:
            version: 8.3.7 (api:88/proto:86-91)
            GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by [email protected], 2010-01-20 09:14:48
             0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate B r----
                ns:0 nr:6459432 dw:6459432 dr:0 al:0 bm:301 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

        uname -a:
            Linux node04 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

        haresources:
            node03.companydomain.nl \
                drbddisk \
                Filesystem::/dev/drbd0::/data::ext3 \
                mysql \
                apache::/etc/httpd/conf/httpd.conf \
                LVSSyncDaemonSwap::master \
                monitor \
                activemq \
                tivoli-cluster \
                MailTo::[email protected]::DRBDFailureDrisAcc \
                MailTo::[email protected]::DRBDFailureDrisAcc \
                1.2.3.212

        ha.cf:
            debugfile /var/log/ha-debug
            logfile /var/log/ha-log
            keepalive 500ms
            deadtime 30
            warntime 10
            initdead 120
            udpport 694
            mcast eth0 225.0.0.3 694 1 0
            mcast eth1 225.0.0.4 694 1 0
            auto_failback off
            node node03.companydomain.nl
            node node04.companydomain.nl
            respawn hacluster /usr/lib64/heartbeat/dopd
            apiauth dopd gid=haclient uid=hacluster

    Thank you very much in advance, Ger Apeldoorn
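
    A minimal sketch of the "always succeed" wrapper described above for non-critical resources, assuming activemq is wrapped this way; it mirrors the author's stated approach rather than a recommended general practice, since it hides real failures from the cluster.

        #!/bin/sh
        # /etc/ha.d/resource.d/activemq-wrapper (illustrative path)
        # Delegate to the real resource script, but never report failure,
        # so a non-essential resource cannot trigger "Giving up resources".
        /etc/ha.d/resource.d/activemq "$1"
        exit 0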

  • How do I upgrade django on ubuntu 9.04?

    - by Lorin Hochstein
    I've got Django 1.0.2 installed on Ubuntu 9.04, and I'd like to upgrade Django because I have an app that needs Django 1.1 or greater. I tried using pip to do the upgrade, but got the following:

        $ sudo pip install Django==1.1
        Downloading/unpacking Django==1.1
          Downloading Django-1.1.tar.gz (5.6Mb): 5.6Mb downloaded
          Running setup.py egg_info for package Django
        Installing collected packages: Django
          Found existing installation: Django 1.0.2-final
            Not uninstalling Django at /var/lib/python-support/python2.6, outside environment /usr
          Running setup.py install for Django
            changing mode of build/scripts-2.6/django-admin.py from 644 to 755
            changing mode of /usr/local/bin/django-admin.py to 755
        Successfully installed Django

    It seems like it worked, but it refuses to remove the original Django 1.0.2, and sure enough:

        $ pip freeze | grep -i django
        Django==1.0.2-final
        django-debug-toolbar==0.8.3
        django-sphinx==2.2.3
        $ /usr/local/bin/django-admin.py --version
        1.0.2 final

    The problem, apparently, is that pip won't uninstall files outside of /usr. I'd like to remove the existing Django files manually, but I have no idea how to do that, because I'm unfamiliar with how Python packages are laid out on Ubuntu. It looks pretty complicated. The site-packages directory is:

        $ python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
        /usr/lib/python2.6/dist-packages

    However, that's not where the Django files live:

        $ ls -ld /usr/lib/python2.6/dist-packages/[Dd]jango*
        ls: cannot access /usr/lib/python2.6/dist-packages/[Dd]jango*: No such file or directory

    There's a /var/lib/python-support/python2.6/django directory, and the __init__.py file in that directory points to /usr/share/python-support/python-django/django/__init__.py. Clearly, pip is able to figure out where the files live. Is there any way to retrieve the list of files associated with the Django package so I can just delete them manually?
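
    A hedged suggestion rather than a confirmed answer: the /var/lib/python-support and /usr/share/python-support paths are where Ubuntu's package manager lays out Python packages, which suggests the old Django came from the python-django deb. If so, dpkg can list every file it owns, and apt can remove them cleanly, which is safer than deleting by hand.

        dpkg -L python-django             # list every file the package owns
        sudo apt-get remove python-django
        pip freeze | grep -i '^django'    # should now show only the pip-installed 1.1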
