Search Results

Search found 6427 results on 258 pages for 'customer protection'.


  • Help yourself... if you like

    - by rachelp
    At Red Gate we enjoy talking to our customers. Really! If you've read recent blog posts by members of some of our customer-facing teams, you'll have spotted the pleasure they take in their work. In case you missed those posts, here they are:
    - From our Finance team: Finance: Friends, not foes!
    - From our reception desk: The Front Line of Communication
    However, we recognise that sometimes our customers would like to be able to solve their problems or answer their questions without talking to us - they're in a hurry, it's outside office hours... or perhaps they just prefer not to pick up the phone and call.

    Self-service customer care
    So we've begun a programme of work to enable more self-service: whether it's finding the answer to a "how do I...?" question or getting access to a record of what product licenses they own, we want to make it much easier for our customers to get hold of this information for themselves. If they want to.

    Phase 1: make it easier to find information
    We decided to start by tackling findability. We've got loads of useful information on our website, but it's sometimes difficult to find, so we've been working on improving our site search. Step 1 has been to replace the search engine, clean up the search UI, and make it consistent across the site. We're nearly there! The idea is that if we improve the site search it will be easier - and much more pleasant - for people to find the information they need. The new search will go live some time in April, and then we'll be gathering feedback, looking at web analytics (more about this in an earlier article), and working out what improvements we still need to make. We'd love to hear what you think, so do give your feedback or drop us a line. Or pick up the phone and call, if you like.

    What do you think?
    While I've got your attention, I'd love to hear what people think about self-service customer care. Do you like to call, email, live chat... or do you prefer to dig around and find answers yourself? Who's getting it right: what self-service sites do you like? P.S. Watch this space for news of phase 2.

    Read the article

  • How can I defend Ruby on Rails against a customer's non-technical opinion?

    - by okeen
    My customer, a translations business owner, just told me that he has been reading about Ruby on Rails, and that "there are more PHP guys around there" and "it seems the community prefers it". What would you, as a software engineer and freelancer, say to the customer to achieve these goals: (1) sell; (2) make him see that the technology is my expert decision, and that Rails is as good as or better than PHP (+ whatever framework) for this particular project. UPDATE: Thank you all for the suggestions! Tomorrow I've got another meeting with him - let's see how it goes, I will update again :) UPDATE 2: Finally I told him to read this thread and the result has been fantastic: he gave me the project and we are going to start right now. Thank you all for the help - free beer on me if we ever meet someday :) BTW, I learned the lesson: be as transparent as possible, because if you believe in yourself and your work, there is no question compromising enough to beat you. Regards

    Read the article

  • Top ten things that don't make sense in The Walking Dead

    - by iamjames
    For those of you that don't know, The Walking Dead is a popular American TV show on AMC about a group of people trying to survive in a zombie-filled world. Here's the top ten (okay, eleven) things that don't make sense on the show (and have never been explained):
    1) They never visit stores. No Walmarts, Kmarts, Targets, shopping malls, pawn shops, gas stations, etc. You'd think that would be the first place you'd visit for supplies, but they never have. Not once. There was a tiny corner store they visited in a small town, and while many products were already gone they did find several useful items.
    2) They never raid houses. Why not? One would imagine that they would want to search houses for useful items, but they don't.
    3) They don't use 2-way radios. Modern 2-way radios have a 36-mile range. That's probably best possible range, but even if the range is only 10% of that, 3.6 miles, that's still more than enough for most situations - for the occasional "hey, zombies attacking, can you give me a hand?" or "there's zombies walking by, stay inside until they leave" or "remember to pick up milk at the store, love Mom". And yes, they would need batteries or recharging, but they have been using gas-powered generators on the show and I'm sure a car charger would work.
    4) They use gas-guzzling vehicles. Every vehicle they have is from the 80s or 90s, except for the new Kia SUV that's there for product placement. Why? They should all be driving new small SUVs or hybrids. Visit a dealership and steal more fuel-efficient vehicles, because while the Walmarts might be empty from people raiding them for supplies, I'm sure most people weren't thinking "Gee, I should go car shopping" when the infection hit.
    5) They drive a motorcycle. Seriously? Let's find the least protective vehicle and drive that. And while motorcycles get reasonable gas mileage, 5 people in an SUV get better gas mileage per person than 5 people all driving motorcycles, so it doesn't make economical sense either.
    6) They drive loud vehicles. The motorcycle used is commonly referred to as a chopper and is about as loud as a motorcycle can get. The zombies are attracted to loud noise, so wouldn't it make more sense to drive vehicles that make less sound? Because as soon as you stop the bike and get off, you're surrounded by zombies that heard you coming. And it's not just the bike - the ~1980s Chevy SUV in the show is also very loud.
    7) They never run out of food. Seems like that would be an almost daily struggle, keeping enough food available for about a dozen people, yet I've never seen them visit a grocery store or local convenience store to stock up.
    8) They don't carry swords, machetes, clubs, etc. Let's face it, biting is not a very effective means of attack. It's good for animals because they have fangs and little else, but humans have been finding better ways of killing each other since forever. So why doesn't everyone on the show carry a sword or machete, or at least a baseball bat? Anything is better than wasting valuable bullets all the time. Sure, dozen zombies approaching? Shoot them. One zombie approaching? Save the bullet, cut off its head.
    9) They do not wear protective clothing. Human teeth are not exactly the sharpest teeth in the animal kingdom. The leather shoes your dog ripped to shreds within minutes would probably take you days to bite through. So why do they walk around half-naked? Yes, I know it's hot in Atlanta, but you'd think they'd at least have some tough leather coats or something for protection. Maybe put a few small vent holes in the fabric if it's really hot. Or better: make your own chainmail. Chainmail was used for thousands of years for protection from swords and is still used by scuba divers for protection from sharks. If swords and sharks can't puncture it, human teeth don't stand a chance.
    10) They don't build barricades or dig trenches around properties. In Season 2 they stayed at a farm in the middle of nowhere. While being far away from people is a great way to stay far away from zombies, it would still make sense to build some sort of defenses. Hordes of zombies would knock down almost any fence, but what about a trench or moat? Maybe something not too wide, so it can be jumped over easily, but that a zombie would fall into, because I haven't seen too many jumping zombies on the show.
    11) They don't live in a mall or tall office building. A mall would be perfect. They have large security gates designed to keep even hundreds of people from breaking in, and offer lots of supplies and food. They're usually hundreds of thousands of square feet and fully enclosed; one could probably live their entire life happily in a mall. A tall office building with an on-site cafeteria would be another good choice. They also usually offer good security, office furniture could be pushed out of the windows to crush approaching zombies, and the cafeteria is usually stocked to provide food for hundreds or thousands of office workers, so food wouldn't be a problem for a long time.
    So there you have it, eleven things that don't make sense in The Walking Dead. Have any of your own you'd like to add, or were any of these things covered in the show? Let me know in the comments.

    Read the article

  • Computer Networks UNISA - Chap 14 – Ensuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to:
    - Identify the characteristics of a network that keep data safe from loss or damage
    - Protect an enterprise-wide network from viruses
    - Explain network- and system-level fault tolerance techniques
    - Discuss issues related to network backup and recovery strategies
    - Describe the components of a useful disaster recovery plan and the options for disaster contingencies

    What are integrity and availability?
    Integrity - the soundness of a network's programs, data, services, devices, and connections. Availability - how consistently and reliably a file or system can be accessed by authorized personnel. A number of phenomena can compromise both integrity and availability, including security breaches, natural disasters, malicious intruders, power flaws, human error, users, etc. Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines:
    - Allow only network administrators to create or modify NOS and application system users
    - Monitor the network for unauthorized access or changes
    - Record authorized system changes in a change management system
    - Install redundant components
    - Perform regular health checks on the network
    - Check system performance, error logs, and the system log book regularly
    - Keep backups
    - Implement and enforce security and disaster recovery policies
    These are just some of the basics.

    Malware
    Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of malware include boot sector viruses, macro viruses, file infector viruses, worms, Trojan horses, network viruses, and bots. Some common characteristics of malware include encryption, stealth, polymorphism, and time dependence.

    Malware protection
    There are various tools available to protect you from malware, called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware, including signature scanning, integrity checking, and monitoring for unexpected file changes or virus-like behaviours. It is important to decide where anti-malware tools will be installed and find a balance between performance and protection. There are several general-purpose malware policies that can be implemented to protect your network, including:
    - Every computer in an organization should be equipped with malware detection and cleaning software that runs regularly
    - Users should not be allowed to alter or disable the anti-malware software
    - Users should know what to do in case the anti-malware program detects malware
    - Users should be prohibited from installing any unauthorized software on their systems
    - System-wide alerts should be issued to network users notifying them if a serious malware virus has been detected

    Fault tolerance
    Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance: the ability of a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level for a system depends on how critical its services and files are to productivity. Generally, the more fault tolerant the system, the more expensive it is. Areas to consider for fault tolerance include environment (temperature and humidity), power, topology and connectivity, servers, and storage.

    Power
    Typical power flaws include:
    - Surges - a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power
    - Noise - fluctuation in voltage levels caused by other devices on the network or electromagnetic interference
    - Brownout - a sag in voltage for just a moment
    - Blackout - a complete power loss
    There are various alternate power sources to consider, including UPSs and generators. UPSs fall into two categories:
    - Standby UPS - provides continuous power when mains goes down (with a brief period of switching over)
    - Online UPS - is online all the time and the device receives power from the UPS all the time (the UPS is charged continuously)

    Servers
    There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another; it is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it.

    Storage
    Various techniques are available, including RAID arrays, NAS (Network Attached Storage), and SANs (Storage Area Networks).

    Data backup
    A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist, with various media that vary in cost and speed: optical media, tape backup, external disk drives, and network backups.

    Backup strategy
    After selecting the appropriate tool for performing your server backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include:
    - What data must be backed up?
    - At what time of day or night will the backups occur?
    - How will you verify the accuracy of the backups?
    - Where and for how long will backup media be stored?
    - Who will take responsibility for ensuring that backups occurred?
    - How long will you save backups?
    - Where will backup and recovery documentation be stored?
    Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up, as sketched below:
    - Full backup - all data on all servers is copied to storage media
    - Incremental backup - only data that has changed since the last full or incremental backup is copied to a storage medium
    - Differential backup - only data that has changed since the last full backup is copied to a storage medium

    Disaster recovery
    Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some appropriately configured devices. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
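    To make the three backup types concrete, here is a minimal shell sketch using GNU tar's --listed-incremental snapshot mechanism; the paths and filenames are illustrative assumptions, not part of the course notes:

        #!/bin/sh
        # Full backup: start a fresh snapshot file so everything is archived
        rm -f /var/backups/data.snar
        tar --create --listed-incremental=/var/backups/data.snar \
            --file=/var/backups/full-$(date +%F).tar /srv/data
        cp /var/backups/data.snar /var/backups/data.full.snar   # keep for differentials

        # Incremental backup: reuse the snapshot file, so only files changed
        # since the last full or incremental run are archived
        tar --create --listed-incremental=/var/backups/data.snar \
            --file=/var/backups/incr-$(date +%F).tar /srv/data

        # Differential backup: always restart from the post-full snapshot,
        # so each archive holds all changes since the full backup
        cp /var/backups/data.full.snar /var/backups/data.snar
        tar --create --listed-incremental=/var/backups/data.snar \
            --file=/var/backups/diff-$(date +%F).tar /srv/data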

    Read the article

  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a customer's production environment. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server. The data, log and filestream files are on another storage server (data and filestream together, and log on a separate server). In our local test environment, there's one particular query that executes with these durations (first we clear the cache):
    - 300ms (the first time takes longer, but from then on it's cached)
    - 20ms
    - 15ms
    - 17ms
    In the customer's production environment, the SQL Server is more powerful, yet these are the durations (I didn't have the rights to clear the cache - will try this tomorrow):
    - 2500ms
    - 2600ms
    - 2400ms
    The servers in the customer's production environment are more powerful, but they are virtual servers (ours aren't). What could be the cause? Not enough memory? Fragmentation? Physical storage? How would you tackle this performance problem? EDIT: Some people have asked me if the data set is equal, and it is - I restored their database on our environment. It's true that this was the first thing I looked at. (@Everyone: I added the edit because it will be the first thing that many will think of.)
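    A hedged starting point for an apples-to-apples measurement on the production box (these are standard SQL Server commands, but flushing caches briefly slows a live server, so get permission first):

        -- flush the plan cache and the buffer pool so both environments start cold
        DBCC FREEPROCCACHE;
        DBCC DROPCLEANBUFFERS;

        -- report CPU vs. elapsed time and physical vs. logical reads for the query
        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;
        -- ... run the slow query here ...

    A large gap between CPU time and elapsed time, or a high physical-read count, would point at the storage or virtualization layer rather than the query plan.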

    Read the article

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server. It's running a PHP-based web application. The server is sitting behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine sitting in the DMZ. The issue we're having is that when data is posted to the server, it seems to be cut short for some users. It's reproducible for some users on the same box, yet when the same user sends the same data on the same LAN from another PC, it works. The data gets cut to around 1140 bytes, I'm told. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be of help. Our firewall is Astaro. EDIT: A customer has temporarily set the ethernet frame size to 500 bytes on the server. This made it work for now! I know some of the customers are using an internet provider that runs PPPoE.
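    The PPPoE detail suggests a path MTU / MSS problem (PPPoE shaves 8 bytes off the usual 1500-byte MTU), and the frame-size workaround fits that theory. A hedged sketch of how to confirm and work around it on a Linux box in the path - the hostname is a placeholder:

        # probe the path MTU towards an affected client
        tracepath client.example.com
        ping -M do -s 1472 client.example.com   # 1472 data + 28 header = 1500; shrink until it passes

        # on the Linux gateway, clamp the TCP MSS to the discovered path MTU
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu

    If the firewall drops the ICMP "fragmentation needed" messages, path MTU discovery breaks and any POST larger than one MTU-sized packet stalls, which matches data being cut at roughly one frame.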

    Read the article

  • route propagation using OSPF in a network

    - by liv2hak
    I am using Juniper J-series routers to emulate a small telco and a VPN customer. The internal routing will be configured with OSPF; MPLS including a default and backup path; RSVP for distributing labels within the telco; OSPF for distributing routes from the customer edge (CE) routers to the VRFs in the adjacent PEs; and finally iBGP for distributing customer routes between VRFs in different PEs. [Topology diagram not reproduced here.] The addressing scheme for the network is as follows:

    UOW-TAU: ge-0/0/0 192.168.3.1
    TAU-PE1: ge-0/0/0 10.0.1.0, ge-0/0/1 10.0.2.0, ge-0/0/2 192.168.3.2
    TAU-P1: ge-0/0/0 172.16.1.0, ge-0/0/1 172.16.3.1, ge-0/0/2 10.0.2.2
    HAM-P1: ge-0/0/0 172.16.3.2, ge-0/0/1 172.16.2.1, ge-0/0/3 10.0.3.2
    ACK-P1: ge-0/0/0 172.16.1.2, ge-0/0/2 172.16.2.2, ge-0/0/3 10.0.1.2
    HAM-PE1: ge-0/0/0 10.0.3.1, ge-0/0/2 192.168.4.2
    UOW-HAM: ge-0/0/0 192.168.4.1

    I also set up a loopback address for each node. I want to set up OSPF so that the path to each internal subnet and router loopback address is propagated to all PE and P nodes. I also want to select a single area for the PE and P nodes, and on each node I should add each interface that should be propagated. How do I accomplish this? With my understanding, the procedure below should achieve it - is it correct? I set up OSPF on UOW-TAU interfaces ge-0/0/0 and ge-0/0/1, and on UOW-HAM interfaces ge-0/0/0 and ge-0/0/1; let me call this Area 100. Once I have done this, I should be able to reach each node from the others using ping and traceroute. Any help is highly appreciated.
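    A minimal Junos sketch of what that single-area OSPF setup might look like on one node - the interface names follow the scheme above, the area number reflects your choice of 100, and the loopback is advertised by adding lo0.0 as passive:

        set protocols ospf area 0.0.0.100 interface ge-0/0/0.0
        set protocols ospf area 0.0.0.100 interface ge-0/0/1.0
        set protocols ospf area 0.0.0.100 interface lo0.0 passive

    Repeat on every P and PE router, listing each core-facing interface; once adjacencies come up, `show ospf neighbor` and `show route protocol ospf` should confirm that every loopback and internal subnet has propagated.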

    Read the article

  • .htm pages working but .aspx pages throwing errors

    - by Mike
    Our site has thousands of visitors per day, but we've been receiving reports from some of our members that they are able to hit our main page, presumably because it's a .htm page, but when they click off to a .aspx page, they get an error. I've done as much research as I know how, and here is what I have come up with:
    - We have not made any changes to IIS on our server in months.
    - We have a couple of customers that have been willing to work with us and provide information about their systems. One customer is running Vista, the other is running XP.
    - We had one of the customers test both Firefox and MSIE. She gets the same error in both.
    - One customer said, "We were able to post the profile and searched on available jobs and it worked w/ Firefox for a day, then it just quit working... we did not change any settings to Firefox after we posted the profile."
    - We asked the customer to clear their cache and try again. They responded with, "We just cleared the cache and got the same error; btw, I periodically clear the cache -- almost every day."
    Summary: we have thousands of customers that hit our site with no problem. We can't reproduce these errors. These customers are getting the same error in different browsers. They are only getting the error on .aspx pages. They still get the error after clearing their cache. We would appreciate any thoughts on what other questions we could ask these customers, or on how we can further troubleshoot this problem.
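    One hedged first step: capture the exact error instead of the generic message. Temporarily turning off custom errors in web.config (a standard ASP.NET setting) makes the affected .aspx pages return the real exception text, which the cooperating customers could screenshot:

        <system.web>
          <!-- temporary, for diagnosis only: shows full error details to remote users -->
          <customErrors mode="Off" />
        </system.web>

    Since only .aspx requests fail, the full error text (or the server's event log and IIS logs for those requests) should show whether it's a session/cookie issue, a request-validation rejection, or something else in the ASP.NET pipeline that static .htm files never touch.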

    Read the article

  • Possible to IPSec VPN Tunnel Public IP Addresses?

    - by caleban
    A customer uses an IBM SAS product over the internet. Traffic flows from the IBM hosting data center to the customer network through Juniper VPN appliances. IBM says they're not tunneling private IP addresses. IBM says they're tunneling public IP addresses. Is this possible? What does this look like in the VPN configuration and in the packets? I'd like to know what the source/destination ip/ports would look like in the encrypted tunneled IPSec Payload and in the IP packet carrying the IPSec Payload. IPSec Payload: source:1.1.1.101:1001 destination:2.2.2.101:2001 IP Packet: source:1.1.1.1:101 destination:2.2.2.1:201 Is it possible to send public IP addresses through an IPSec VPN tunnel? Is it possible for IBM to send a print job from a server on their network using the static-nat public address over a VPN to a printer at a customer network using the printer's static-nat public address? Or can a VPN not do this? Can a VPN only work with interesting traffic from and to private IP addresses?

    Read the article

  • Postfix selective header_checks: smtpd_relay_restrictions vs. smtpd_recipient_restrictions

    - by luke
    Some of my customers implemented commercial software that violates email RFCs, such that we have had to relax our header checks. In consequence, we receive more spam. Prolog: I know the domains (customer.com) and IP addresses (a.b.c.d/C) these emails come from. Kind request for help: is it possible to set up one Postfix (2.11) instance on Linux such that it applies only some header checks for emails from .*@customer.com, but applies all header checks for all other email sources? I thought of a combination of mynetworks that includes the subnet a.b.c.d/C in smtpd_recipient_restrictions -- allowing all these messages through -- while simultaneously avoiding an open relay with smtpd_relay_restrictions. However, this has not worked out as expected. Any idea or help is highly appreciated. Thanks in advance. Luke ==EDIT== For the current issue, I solved the problem by prepending REDIRECTs to header_checks as follows: /^received: from.*customer.com.*by mail.own.com.*for.*luke@own.*/ REDIRECT [email protected] This works so far as needed. Irrespective thereof, I am still looking for a Postfix configuration that would turn this text-based setting into an IP-address-range based forwarding rule... Thanks. Luke
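    For what it's worth, header_checks are applied by cleanup(8) per listener rather than per client, so the usual way to get client-selective behaviour from a single Postfix instance is a second smtpd listener that skips the checks. A hedged master.cf sketch - the port number is an arbitrary choice, and you would still need to steer the customer's hosts to it (e.g. via a dedicated IP or an upstream transport):

        # master.cf: extra listener that bypasses header/body checks
        10025     inet  n       -       n       -       -       smtpd
            -o receive_override_options=no_header_body_checks
            -o smtpd_client_restrictions=permit_mynetworks,reject

    with a.b.c.d/C added to mynetworks in main.cf; receive_override_options=no_header_body_checks is the documented switch for disabling header_checks and body_checks on one listener.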

    Read the article

  • Excel 2007 - Adding line breaks in a cell and no line over 50 characters

    - by Richard Drew
    I have notes stored in an Excel cell. I add line breaks and dates every time I add a new note. I need to copy this to another program, but it has a line limit of 50 characters. I want a line break for each new date, and for when each date's comment goes over 50 characters. I'm able to do one or the other, but I can't figure out how to do both. I'd prefer words not to be split up, but at this point I don't care. Below is some sample input; if needed for a SUBSTITUTE or REPLACE function, I could add a ~ before each date in my input as a delimiter.

    Sample input:
    07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
    06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates and locations, correct PO values, and copies.
    06/27 - New ticket opened using query form
    06/12 - Opened ticket with helpdesk asking status
    05/21 - Copy submitted to [email protected]
    05/14 - Copy sent to John Public and [email protected]

    Ideal output (each line at most 50 characters):
    07/03 - FU on query. Copies and history included.
    CC to Jane Doe and John Public
    06/29 - Cust claiming not to have these and wrong
    PO on query form. Responded with inv sent dates an
    d locations, correct PO values, and copies.
    06/27 - New ticket opened using query form
    06/12 - Opened ticket with helpdesk asking status
    05/21 - Copy submitted to [email protected]
    om
    05/14 - Copy sent to John Public and email@custome
    r.com
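    A hedged VBA sketch of one way to do this: a worksheet function that re-wraps the cell text at 50 characters, breaking at spaces where possible (the function and variable names are mine, not from the original post):

        Function WrapAt50(ByVal txt As String) As String
            Dim result As String, piece As String
            Dim entry As Variant, cut As Long
            ' each date entry is already on its own line (Char(10)) in the cell
            For Each entry In Split(txt, vbLf)
                piece = CStr(entry)
                Do While Len(piece) > 50
                    cut = InStrRev(Left(piece, 50), " ")  ' last space within 50 chars
                    If cut = 0 Then cut = 50              ' no space found: hard split
                    result = result & Left(piece, cut) & vbLf
                    piece = Mid(piece, cut + 1)
                Loop
                result = result & piece & vbLf
            Next entry
            If Len(result) > 0 Then result = Left(result, Len(result) - 1)
            WrapAt50 = result
        End Function

    Used as =WrapAt50(A1) in a helper cell; drop the InStrRev line (always cut at 50) if mid-word splits, as in the ideal output above, are acceptable.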

    Read the article

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site), and a copy of the original local printer appears on the local computer with a different driver, without notice. Specifically: a remote desktop session is opened on a local computer that has a Brother HL2140 USB printer connected and the associated software installed, with the correct driver shown under the "advanced" button. The server has the same Brother software and driver. An application running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (good), or it does not print (bad), or it prints on another local printer connected to another local computer logged into the server (bad and odd). When it doesn't print (or prints somewhere else), we ask the customer to look for the (virtual) printer using the Remote Desktop view of the server, and the printer is gone. Then we ask the customer to look at the printers folder on the local computer. There are several possibilities:
    1. The printer is there, but the driver has mysteriously changed in the drop-down to an MDX entry; we have the customer select the other (proper) Brother driver, and all is well again - after the change, the virtual printer on the server (which now matches the local printer) appears again, and printing can resume.
    2. A "copy" of the printer mysteriously appears in the local printers folder, and after we delete it, the virtual printer on the server appears again and printing can resume.
    Note that in both case 1 and case 2, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile the log file reports endless errors, and the server eventually crashes, sometimes twice a day. I'm puzzled what changes the local printer driver, and I'm puzzled what loads the "copy 2" or "copy 3" of the printer in the local printers folder. This randomly occurs on any of 40+ local computers in eight different locations on different ISPs, all sharing one domain.

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read what document (wiki page) based on who created it. Personally, I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer, which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because a high view count does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW, we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • Datacentre Rack naming convention with flexibility for reassignment of server roles

    - by g18c
    We are just shifting across to a new rack, and until now we have used names of cartoon characters. This is not going to work anymore, and we need a better naming convention. Physically, I would like to name the servers by location, and then have an alias for the actual function/customer:

    Physical name: LON1S1R1SVR1, meaning London, suite 1, rack 1, server 1.
    Customer alias: since the servers can be reassigned from time to time, for the above physical server name I would keep an alias as a column in a spreadsheet, set to the customer's host-name, i.e. www.customerserver1.com.
    Patching: for patching, I am looking at labelling the physical connections, i.e. LON1S1R1SVR1-PWR1, LON1S1R1SVR1-PWR2, LON1S1R1SVR1-ETH0, LON1S1R1SVR1-KVM.

    Ultimately, if I am labelling cables, I really want to avoid putting LON1S1R1SQLSVR on any patch cord, in case the server gets formatted and changed from a SQL server to a WWW server, which would mean relabelling all the patch cords as well. In addition, once virtual machines are thrown in, I get confused very quickly. I appreciate that it may be confusing to have both a physical host-name and a customer alias. Please let me know what you run with, and any other standards or best practices that I can follow.

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated ecommerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run under the LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many products' descriptions are cut before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:
    - MySQL contains a description cut at the € sign, so it's not PHP's fault
    - The Access databases contain the full description with the € sign, so it's not the fault of the webmaster writing bad descriptions, or of eDisplay cutting them
    - The SQL script that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    - The vsftpd log reports (obfuscated for privacy): Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c - which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP)
    - The eDisplay internal FTP client provides no option for ascii/binary transfer modes
    - [Add] Trying to manually upload the SQL file via SFTP shows the messed-up euro too
    - [Add2] Trying to manually upload using the Xftp client with explicit ASCII mode doesn't fix it either

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host. The server: an Azure virtual machine running openSUSE 12.2, with both vsftpd and openSSH. The question: without asking the customer to manually upload files using FileZilla, or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
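    A hedged observation: binary FTP preserves bytes exactly, so if the file shows ^À (byte 0x80) on the server, the € was likely written as Windows-1252 by the VB application, while the shell and MySQL import on the Linux box expect UTF-8 (where € is the three bytes E2 82 AC). A diagnostic/workaround sketch on the server, using the filename from the log:

        file /srv/www/domains/xxxxxx.it/htdocs/db.sql          # shows the apparent encoding
        iconv -f WINDOWS-1252 -t UTF-8 db.sql -o db.utf8.sql   # convert before import

    If that is the cause, a server-side hook or cron job that runs iconv on the uploaded script before the installer imports it - or matching the MySQL connection/table character set to the file (MySQL's latin1 is effectively cp1252 and maps 0x80 to €) - may fix the cut descriptions without touching the customer's workflow.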

    Read the article

  • Email Proxy Ideas

    - by jtnire
    Hi everyone, I wish to host some managed email servers for some customers. Each customer will have their own email server, which will be an all-in-one virtual machine running Postfix, Dovecot and some webmail suite. Even though each customer will have their own server, I do not wish to give each email server its own public-facing IP; I wish to make use of proxy servers so all customers share the same public IP. As for the "smtp-in" from the public internet, this isn't a problem, as I can set up several MX servers (using Postfix) which will store-and-forward the mail to the correct server (using transport maps). As for the IMAP access from the customer, I was thinking of using Perdition, which is an IMAP proxy - I believe this will suit my needs. I am confused, however, about what to use for the "smtp-out" proxy. The customers will have to authenticate with their respective email servers, but they will have to go via a proxy of some sort, as they won't have direct access to their server instance. It probably can't be a store-and-forward proxy either. Does anyone have any idea what I could use here? Many thanks

    Read the article

  • Private subnet for VM server host-only network

    - by Derek Pressnall
    At my current job, we distribute a product based on a Linux server with multiple VMs defined (using KVM / libvirt). We are planning to expose limited ports to the customer's network, and use iptables to direct inbound traffic to the appropriate internal VM. My question: is there a class of private subnets that I can use for the internal host-only network that is least likely to conflict with a client IP subnet? Specifically, if I choose a /24 out of any of the RFC-1918 defined private subnets (such as 192.168.x.x), there is a chance of conflicting with a customer-used range. I noticed that several current VM implementations default to 192.168.122.x -- is this due to an RFC that I'm not familiar with, and therefore this is a safe range to use (that most network admins would avoid)? Or did the various VM vendors just pick that range randomly? I guess I'm looking for an IP range that is more private than the existing private (RFC1918) addresses. The only other thought I had was to use one of the "Test Net" IP ranges reserved for documentation purposes (RFC 5737). Note, that I'm not worried about a customer's network blocking these IPs, as this is only internal to our server (packets get NATted before leaving the box). However this does seem more unorthodox than just sticking with the default 192.168.122.x/24 subnet.

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site, my initial approach was to not allow any rich HTML input - only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line-break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps, and it works very well with very little development effort involved. Then the client sprung another note: oh, by the way, we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically, we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work, and so on. While the 'legit' HTML posted by these agents is basic in nature, it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short, I needed a custom solution. I thought the best solution would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags, plus a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the threads mentioned above, but I found that whitelists make sense in simple scenarios where you might allow manual HTML input, while when you need to allow a larger array of HTML functionality, a blacklist is probably easier to manage, since the vast majority of elements and attributes can be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases.

    The Microsoft Web Protection Library (AntiXSS)
    My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML encoding and sanitation library in the Microsoft Web Protection Library (formerly the AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitizer class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically, the Sanitizer class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation.
    To give you an example of what didn't work for me with the library, here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:

        var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " +
                    "<a href='http://west-wind.com'>West Wind</a> site. " +
                    "<img src='http://west-wind.com/images/new.gif' /> ";

    and the code to sanitize it with the AntiXSS Sanitizer class:

        @Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value))

    This produced a not so useful sanitized string:

        Here we go. Visit the <a>West Wind</a> site.

    While it removed the <script> tag (good), it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict, this becomes useless to me. I couldn't find any way to customize the whitelist, nor is there code available in this 'open source' library on CodePlex.

    Using Html Agility Pack for HTML Parsing
    The WPL library wasn't going to cut it. After doing a bit of research, I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the Html Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object, which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model for full HTML documents or HTML fragments that lets you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before, you'll feel right at home with the Agility Pack.

    Blacklist-based HTML Parsing to strip XSS Code
    For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument about what's better: a whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration; in the case of HTML5, a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so the blacklist includes the obvious <script> tag plus any tag that allows loading of external content, including <iframe>, <object>, <embed> and <link> etc. <form> is also excluded, to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude certain attributes, or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but it is customizable and can be added to.
    Here's my HtmlSanitizer implementation:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using HtmlAgilityPack;

        namespace Westwind.Web.Utilities
        {
            public class HtmlSanitizer
            {
                public HashSet<string> BlackList = new HashSet<string>()
                {
                    { "script" }, { "iframe" }, { "form" }, { "object" },
                    { "embed" }, { "link" }, { "head" }, { "meta" }
                };

                /// <summary>
                /// Cleans up an HTML string and removes HTML tags in blacklist
                /// </summary>
                public static string SanitizeHtml(string html, params string[] blackList)
                {
                    var sanitizer = new HtmlSanitizer();
                    if (blackList != null && blackList.Length > 0)
                    {
                        sanitizer.BlackList.Clear();
                        foreach (string item in blackList)
                            sanitizer.BlackList.Add(item);
                    }
                    return sanitizer.Sanitize(html);
                }

                /// <summary>
                /// Cleans up an HTML string by removing elements
                /// on the blacklist and all elements that start
                /// with onXXX.
                /// </summary>
                public string Sanitize(string html)
                {
                    var doc = new HtmlDocument();
                    doc.LoadHtml(html);
                    SanitizeHtmlNode(doc.DocumentNode);

                    //return doc.DocumentNode.WriteTo();
                    string output = null;

                    // Use an XmlTextWriter to create self-closing tags
                    using (StringWriter sw = new StringWriter())
                    {
                        XmlWriter writer = new XmlTextWriter(sw);
                        doc.DocumentNode.WriteTo(writer);
                        output = sw.ToString();

                        // strip off XML doc header
                        if (!string.IsNullOrEmpty(output))
                        {
                            int at = output.IndexOf("?>");
                            output = output.Substring(at + 2);
                        }
                        writer.Close();
                    }
                    doc = null;
                    return output;
                }

                private void SanitizeHtmlNode(HtmlNode node)
                {
                    if (node.NodeType == HtmlNodeType.Element)
                    {
                        // check for blacklist items and remove
                        if (BlackList.Contains(node.Name))
                        {
                            node.Remove();
                            return;
                        }

                        // remove CSS Expressions and embedded script links
                        if (node.Name == "style")
                        {
                            if (string.IsNullOrEmpty(node.InnerText))
                            {
                                if (node.InnerHtml.Contains("expression") ||
                                    node.InnerHtml.Contains("javascript:"))
                                    node.ParentNode.RemoveChild(node);
                            }
                        }

                        // remove script attributes
                        if (node.HasAttributes)
                        {
                            for (int i = node.Attributes.Count - 1; i >= 0; i--)
                            {
                                HtmlAttribute currentAttribute = node.Attributes[i];
                                var attr = currentAttribute.Name.ToLower();
                                var val = currentAttribute.Value.ToLower();

                                // remove event handlers
                                if (attr.StartsWith("on"))
                                    node.Attributes.Remove(currentAttribute);

                                // remove script links
                                else if (
                                    //(attr == "href" || attr == "src" || attr == "dynsrc" || attr == "lowsrc") &&
                                    val != null &&
                                    val.Contains("javascript:"))
                                    node.Attributes.Remove(currentAttribute);

                                // Remove CSS Expressions
                                else if (attr == "style" &&
                                         val != null &&
                                         (val.Contains("expression") ||
                                          val.Contains("javascript:") ||
                                          val.Contains("vbscript:")))
                                    node.Attributes.Remove(currentAttribute);
                            }
                        }
                    }

                    // Look through child nodes recursively
                    if (node.HasChildNodes)
                    {
                        for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                        {
                            SanitizeHtmlNode(node.ChildNodes[i]);
                        }
                    }
                }
            }
        }

    Please note: use this as a starting point only for your own parsing, and review the code for your specific use case! If your needs are less lenient than mine were, you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a whitelist approach if you want to go that route.
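    For reference, a quick usage sketch of the class above - the input string is illustrative:

        // sample input with a script tag and a javascript: link
        string html = "<b>Hello</b><script>alert('xss')</script>" +
                      "<a href='javascript:alert(0)'>click</a>";

        // sanitize with the default blacklist
        string safe = HtmlSanitizer.SanitizeHtml(html);

        // or override the blacklist for a specific call
        string safe2 = HtmlSanitizer.SanitizeHtml(html, "script", "iframe");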
    The code above is semi-generic, allowing full-featured HTML fragments that only disallow script-related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML-style self-closing tags, which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable, either in the instance class as a property or in the static method via the string parameter list. Additionally, the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF, IE?) have been removed in modern browsers.

    What a Pain
    To be honest, this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice, and I'm sharing the code here merely as a baseline to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly from a third party that provides security tools. Luckily for my application, we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide-open, directly public-facing Internet application. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like Markdown, or even a good HTML edit control that can provide some limits on what types of input are allowed. Alas, in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
    So much time wasted building secure apps, so much time wasted by others trying to hack apps... We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

    Resources
    - Code on GitHub
    - Html Agility Pack
    - XSS Cheat Sheet
    - XSS Prevention Cheat Sheet
    - Microsoft Web Protection Library (AntiXss)
    - StackOverflow links:
      http://stackoverflow.com/questions/341872/html-sanitizer-for-net
      http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
      http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • struts2-json-plugin not retrieving json data from action class for Struts-JQuery-Plugin grid

    - by thebravedave
    Hello, I'm having an issue getting JSON working with struts-jquery-plugin-2.1.0. I have included struts2-json-plugin-2.1.8.1 in my classpath as well. I'm sure I have the struts-jquery-plugin configured correctly, because the grid loads, but it doesn't load the data it's supposed to get from the action class that has been JSON-ized. The documentation for the json plugin and the struts-jquery plugin leaves a LOT of gaps that I can't fill even with examples/tutorials, so I come to the community at StackOverflow. My action class has a property called gridModel, a List of a basic POJO called Customer. Customer is a POJO with one property, id. I have a factory that supplies the populated List to the action's gridModel property. Here's how I set up my struts.xml file:

        <constant name="struts.devMode" value="true"/>
        <constant name="struts.objectFactory" value="guice"/>
        <package name="org.webhop.ywdc" namespace="/" extends="struts-default,json-default">
          <result-types>
            <result-type name="json" class="com.googlecode.jsonplugin.JSONResult">
            </result-type>
          </result-types>
          <action name="login" class="org.webhop.ywdc.LoginAction">
            <result type="json"></result>
            <result name="success" type="dispatcher">/pages/uiTags/Success.jsp</result>
            <result name="error" type="redirect">/pages/uiTags/Login.jsp</result>
            <interceptor-ref name="cookie">
              <param name="cookiesName">JSESSIONID</param>
            </interceptor-ref>
          </action>
          <action name="logout" class="org.webhop.ywdc.LogoutAction">
            <result name="success" type="redirect">/pages/uiTags/Login.jsp</result>
          </action>
        </package>

    In the struts.xml file I set the json result-type, and in my action I listed the json result in the action configuration. Here's the JSP page that the action loads:

        <%@ taglib prefix="s" uri="/struts-tags" %>
        <%@ taglib prefix="sj" uri="/struts-jquery-tags" %>
        <%@ taglib prefix="sjg" uri="/struts-jquery-grid-tags" %>
        <%@ page language="java" contentType="text/html" import="java.util.*" %>

        Welcome, you have logged in!

        <s:url id="remoteurl" action="login"/>
        <sjg:grid id="gridtable" caption="Customer Examples" dataType="json"
                  href="%{remoteurl}" pager="false" gridModel="gridModel">
          <sjg:gridColumn name="id" key="true" index="id" title="ID"
                          formatter="integer" sortable="false"/>
        </sjg:grid>

        Welcome, you have logged in. <br />
        <b>Session Time: </b><%=new Date(session.getLastAccessedTime())%>
        <h2>Password: <s:property value="password"/></h2>
        <h2>userId: <s:property value="userId"/></h2>
        <br />
        <a href="<%= request.getContextPath() %>/logout.action">Logout</a><br /><br />
        ID: <s:property value="id"/>
        session id: <s:property value="JSESSIONID"/>
        </body>

    I'm not really sure how to tell what JSON the json plugin is creating from the action class. If I knew, I could tell whether it's malformed. As far as I know, if I specify the json result in my action configuration in struts.xml, the grid - which is set to read JSON and knows to look for "gridModel" - should automatically load the JSON into the grid, but it's not happening.
    Here's my action class:

        public class LoginAction extends ActionSupport {

            public String JSESSIONID;
            public int id;
            private String userId;
            private String password;
            public Members member;
            public List<Customer> gridModel;

            public String execute() {
                Cookie cookie = new Cookie("ywdcsid", password);
                cookie.setMaxAge(3600);
                HttpServletResponse response = ServletActionContext.getResponse();
                response.addCookie(cookie);

                HttpServletRequest request = ServletActionContext.getRequest();
                Cookie[] ckey = request.getCookies();
                for (Cookie c : ckey) {
                    System.out.println(c.getName() + "/cookie_name + " + c.getValue() + "/cookie_value");
                }

                Map requestParameters = ActionContext.getContext().getParameters();
                String[] testString = (String[]) requestParameters.get("password");
                String passwordString = testString[0];
                String[] usernameArray = (String[]) requestParameters.get("userId");
                String usernameString = usernameArray[0];

                Injector injector = Guice.createInjector(new GuiceModule());
                HibernateConnection connection = injector.getInstance(HibernateConnection.class);
                AuthenticationServices currentService = injector.getInstance(AuthenticationServices.class);
                currentService.setConnection(connection);
                currentService.setInjector(injector);

                member = currentService.getMemberByUsernamePassword(usernameString, passwordString);
                userId = member.getUsername();
                password = member.getPassword();

                CustomerFactory customerFactory = new CustomerFactory();
                gridModel = customerFactory.getCustomers();

                if (member == null) {
                    return ERROR;
                } else {
                    id = member.getId();
                    Map session = ActionContext.getContext().getSession();
                    session.put(usernameString, member);
                    return SUCCESS;
                }
            }

            public String logout() throws Exception {
                Map session = ActionContext.getContext().getSession();
                session.remove("logged-in");
                return SUCCESS;
            }

            public List<Customer> getGridModel() { return gridModel; }
            public void setGridModel(List<Customer> gridModel) { this.gridModel = gridModel; }
            public String getPassword() { return password; }
            public void setPassword(String password) { this.password = password; }
            public String getUserId() { return userId; }
            public void setUserId(String userId) { this.userId = userId; }
            public String getJSESSIONID() { return JSESSIONID; }
            public void setJSESSIONID(String jsessionid) { JSESSIONID = jsessionid; }
        }

    Please help me with this problem - it's a major bottleneck for me :( Thanks so much, thebravedave

    Read the article

  • JQuery Window blinking

    - by Nisanth
    Hi all, I am developing a chat application (customer and operator), using jQuery Ajax and PHP. On the customer side, one person can handle multiple chats - for example, say he has two chats open. How does he know which window a new message has arrived in? I can track the count, but is there any option in jQuery to make the window blink when the count changes?
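    There's no single built-in jQuery option for this, but a common pattern is to flash the page title (and/or highlight the chat window) when the unread count grows. A hedged sketch - the function and variable names are illustrative:

        // flash the browser tab title until the user comes back to the window
        var originalTitle = document.title;
        var blinkTimer = null;

        function startBlinking(count) {
            if (blinkTimer) return;
            blinkTimer = setInterval(function () {
                document.title = (document.title === originalTitle)
                    ? '(' + count + ') New message!'
                    : originalTitle;
            }, 1000);
        }

        function stopBlinking() {
            clearInterval(blinkTimer);
            blinkTimer = null;
            document.title = originalTitle;
        }

        // call startBlinking(newCount) from the Ajax poll when the count changes,
        // and stop when the user focuses the window:
        $(window).focus(stopBlinking);

    The same poll callback can also toggle a CSS class on the specific chat window whose count changed, so the user sees which conversation has the new message.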

    Read the article

  • ASP.NET MVC: routing help

    - by pcampbell
    Consider two methods on the controller CustomerController.cs:

        // URL to be http://mysite/Customer/
        public ActionResult Index() {
            return View("ListCustomers");
        }

        // URL to be http://mysite/Customer/8
        public ActionResult View(int id) {
            return View("ViewCustomer");
        }

    How would you set up your routes to accommodate this requirement? How would you use Html.ActionLink when creating a link to the View page?
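    A hedged sketch of one way to register these routes (standard ASP.NET MVC MapRoute calls in Global.asax.cs; the route names are mine, and the numeric constraint keeps /Customer/8 from colliding with other actions):

        // /Customer/8 -> CustomerController.View(8); numeric ids only
        routes.MapRoute(
            "CustomerView",
            "Customer/{id}",
            new { controller = "Customer", action = "View" },
            new { id = @"\d+" });

        // /Customer/ -> CustomerController.Index()
        routes.MapRoute(
            "CustomerIndex",
            "Customer",
            new { controller = "Customer", action = "Index" });

        // and in a view, linking to the View page:
        @Html.ActionLink("View customer", "View", "Customer", new { id = 8 }, null)

    Registration order matters: put these before the catch-all default route so they get first crack at /Customer URLs.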

    Read the article

  • T-SQL Script to Delete All The Relationships Between A Bunch Of Tables in a Schema and Another Bunch in a Different Schema

    - by Galilyou
    Guys, I have a set of tables (say Account, Customer) in a schema (say dbo) and I have some other tables (say Order, OrderItem) in another schema (say inventory). There's a relationship between the Order table and the Customer table. I want to delete all the relationships between the tables in the first schema (dbo) and the tables in the second schema (inventory), without deleting the relationships between tables inside the same schema. Is that possible? Any help appreciated.
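    It is possible, since foreign keys record both their parent and referenced tables. A hedged T-SQL sketch that generates the DROP CONSTRAINT statements for every foreign key crossing between the two schemas (review the output before running it; the schema names match the example above):

        -- list ALTER TABLE ... DROP CONSTRAINT for each cross-schema foreign key
        SELECT 'ALTER TABLE [' + SCHEMA_NAME(t.schema_id) + '].[' + t.name
             + '] DROP CONSTRAINT [' + fk.name + '];'
        FROM sys.foreign_keys fk
        JOIN sys.tables t  ON fk.parent_object_id     = t.object_id
        JOIN sys.tables rt ON fk.referenced_object_id = rt.object_id
        WHERE (SCHEMA_NAME(t.schema_id) = 'dbo'       AND SCHEMA_NAME(rt.schema_id) = 'inventory')
           OR (SCHEMA_NAME(t.schema_id) = 'inventory' AND SCHEMA_NAME(rt.schema_id) = 'dbo');

    Because the WHERE clause requires the parent and referenced tables to be in different schemas, relationships between tables inside the same schema are left untouched.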

    Read the article

  • JasperReports multi-page report with different content

    - by Han Fastolfe
    I'm evaluating JasperReports and iReport. A requirement is the ability to produce a multiple-page report in which every page contains a different report. Example:
    - Page 1 contains an actual invoice for a customer
    - Page 2 contains the invoice list for the customer
    - Page 3 contains a graph of invoice amounts by year
    - Page 4 contains only fixed text (say, operator instructions...)
    Is it possible to create this as one report, instead of creating four standalone reports and then merging the PDFs? Thanks a lot. Francesco
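    One hedged approach (using the classic pre-6 JasperReports exporter API): fill each report separately and hand the list of JasperPrint objects to a single PDF exporter, which concatenates them into one document. The report names, parameter maps and JDBC connection below are illustrative:

        import java.sql.Connection;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import net.sf.jasperreports.engine.*;
        import net.sf.jasperreports.engine.export.JRPdfExporter;

        public class CombinedReport {
            public static void export(Connection conn) throws JRException {
                // fill each page's report on its own
                List<JasperPrint> pages = new ArrayList<JasperPrint>();
                pages.add(JasperFillManager.fillReport("invoice.jasper", new HashMap(), conn));
                pages.add(JasperFillManager.fillReport("invoice_list.jasper", new HashMap(), conn));
                pages.add(JasperFillManager.fillReport("invoices_by_year.jasper", new HashMap(), conn));
                pages.add(JasperFillManager.fillReport("instructions.jasper", new HashMap(), conn));

                // export the whole list as one PDF
                JRPdfExporter exporter = new JRPdfExporter();
                exporter.setParameter(JRExporterParameter.JASPER_PRINT_LIST, pages);
                exporter.setParameter(JRExporterParameter.OUTPUT_FILE_NAME, "combined.pdf");
                exporter.exportReport();
            }
        }

    An alternative, kept entirely inside iReport, is a master report whose detail (or group) bands embed the four reports as subreports, each followed by a page break.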

    Read the article
