Search Results


  • Which steps are required to avoid my server being considered a spam sender?

    - by Cyril N.
    I'm looking to set up a webmail server that will be used by a lot of users who will receive and send emails, and who will also be able to forward emails they receive. I'd like to know which steps are recommended/required to signal to other mail services (Gmail, Outlook, etc.) that my server is not a spam sender (disclaimer: it's NOT! :p) but a legitimate one.

    I know I have to define an SPF TXT record, for example, but what other steps would you recommend? Is there a rule of thumb like a proportional number of sending IP addresses for the volume of email sent (something like a maximum of 1M emails per IP per day)? Anything else I'm missing?

    I tried to search online, but I mostly found advice on keeping emails sent by scripts (like PHP) out of the SPAM folder. I'm looking for the server/DNS configuration side. Thanks a lot for your help/tips, I appreciate it!
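    For illustration, this is the kind of DNS-side setup I have in mind so far - a minimal sketch, assuming a sending IP of 203.0.113.25 and example.com as the domain (the DKIM selector, key and DMARC policy are placeholder values):

        ; SPF: authorize the sending IP, fail everything else
        example.com.                  IN TXT "v=spf1 ip4:203.0.113.25 -all"
        ; DKIM: publish the public half of the signing key (key truncated here)
        mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...AB"
        ; DMARC: tell receivers what to do when SPF/DKIM checks fail
        _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"

    I understand matching forward and reverse DNS (an A record and a PTR record that agree) for the sending host is usually expected as well.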


  • Connecting a Wifi router to receivers with a cable instead of an antenna?

    - by 31eee384
    This is a very strange question - I'd go so far as to say it's a stupid question. I'm being told that it is possible, to describe it briefly, to use a cable to connect an access point and a receiver directly to one another. That means I would unscrew the access point's antenna and attach one end of a cable to the port; then, on the wireless receiver, I would also unscrew the antenna and plug in the other side of the cable. I'm told the connection would work after this, just as a normal Wifi connection would.

    Bonus mini-question: if this works, would it still work if a splitter were attached to the access point and multiple receivers plugged in to the network?

    What would happen if I do this? Based on my surprisingly deficient knowledge of radio transmission, I don't think it would work, but I would like some help understanding why it won't (or will), if possible. This is a somewhat hypothetical question - I realize that Ethernet does this exact job very handily, and I could just throw in a switch instead of the splitter. I simply feel that I should understand this scenario. Thanks for any help you can offer.


  • How to boot XBMC 10.1 ISO on USB via grub?

    - by Shi
    I am trying to boot the XBMC Live image (http://xbmc.org/download/) as an ISO from USB via grub 1.98. I already have a Kubuntu 11.04 image there as well, and it works using the following configuration:

        menuentry "Kubuntu 11.04 64bit" {
            loopback loop /boot/iso/kubuntu-11.04-desktop-amd64.iso
            linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/boot/iso/kubuntu-11.04-desktop-amd64.iso noeject noprompt
            initrd (loop)/casper/initrd.gz
        }

    However, if I try to boot XBMC in an analogous way, I always get the error "Unable to find a medium containing a live file system". I found different approaches to install XBMC, but they are all about installing the distribution on USB, or using grub4dos, or unetbootin. I already found out that XBMC 10.1 is based on Ubuntu 10.04.2 LTS, so I tried those settings - even though they are quite similar to Kubuntu 11.04's. Finally, the ISO contains a grub configuration of its own in boot/grub/grub.cfg, but even with those parameters I get the error above. My current configuration is the following:

        menuentry "xbmc 10.1" {
            loopback loop /boot/iso/xbmc-10.1-live.iso
            linux (loop)/live/vmlinuz video=vesafb boot=live iso-scan/filename=/boot/iso/xbmc-10.1-live.iso xbmc=autostart,nodiskmount splash quiet loglevel=0 persistent quickreboot quickusbmodules notimezone noaccessibility noapparmor noaptcdrom noautologin noxautologin noconsolekeyboard nofastboot nognomepanel nohosts nokpersonalizer nolanguageselector nolocales nonetworking nopowermanagement noprogramcrashes nojockey nosudo noupdatenotifier nouser nopolkitconf noxautoconfig noxscreensaver nopreseed union=aufs
            initrd (loop)/live/initrd.img
        }

    Any more ideas, or any more information I should supply?


  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this - I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale it. I have a general idea, but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server, which lets users record messages and plays them back.

    This is the architecture I'm planning for initial scaling - four small EC2 instances for the following purposes:

    Instance 1: runs the MySQL database.
    Instance 2: runs the Red5 server.
    Instances 3 and 4: serve the website, with Apache running on them. They will communicate with the MySQL server on Instance 1 and the Red5 server on Instance 2 using internal IP addresses. As and when required, I will launch further instances like these.

    EBS: I will have an EBS volume of, say, 50 GB where all the MySQL data will be stored. Red5 will also use this volume to store the video messages.

    Load balancer: use the load balancer provided by Amazon to balance Instances 3 and 4.

    This is what I have in mind; I could be way off, so please bear with me. Also, I have not taken into account scaling the MySQL server, as I currently have no idea how that would be done and whether or not it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks


  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can hand it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS.

    iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many individual rules to an iptables chain. ipt_recent would be awesome for this, and it provides a lot of flexibility for merely slowing down access severely, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than ships with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather avoid from a patching, security and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative.

    Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
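    For reference, this is roughly what the ipset route would look like once a recent enough ipset/iptables is in place - a sketch, with the set name and addresses made up:

        # create a hash-based set; membership lookups stay fast even at 5000+ entries
        ipset create scrapers hash:ip maxelem 65536
        ipset add scrapers 198.51.100.7
        ipset add scrapers 203.0.113.42
        # a single iptables rule then consults the whole set
        iptables -I INPUT -m set --match-set scrapers src -j DROP

    (Older ipset releases spell this 'ipset -N scrapers iphash' and '-m set --set scrapers src', which may matter on a stock CentOS kernel.)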


  • nginx short urls for mediawiki

    - by William
    I am trying to set up short URLs for a MediaWiki site. The wiki is in a subdirectory mydir (http://www.example.com/mywiki). I've already set up rewrites in /etc/nginx/sites-available so that example.com redirects to example.com/mywiki. Currently the URL looks like http://www.example.com/mywiki/index.php?title=Main_Page; I want to clean it up so that it looks like http://www.example.com/mywiki/Main_Page. I am having quite a bit of trouble doing this, as I am not familiar with regular expressions or the syntax that the nginx config files use. This is what I currently have:

        server_name example.com www.example.com;

        location / {
            rewrite ^.+ /mywiki/ permanent;
        }

        location /wiki/ {
            rewrite ^/mywiki/([^?]*)(?:\?(.*))? /mywiki/index.php?title=$1&$2 last;
        }

    The second rewrite is obviously the one that's broken. It is based off of "Page title -- nginx rewrite--root access" in the MediaWiki documentation. When I try to load the site, the browser tells me I get infinite redirects. Does anyone know how I should go about fixing this issue? Or rather, what is the correct way to implement this, and what do all those symbols mean?
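    For comparison, a sketch of the usual try_files-based approach for a wiki living under /mywiki - not tested here, and the PHP handler block is an assumption about the backend:

        location /mywiki/ {
            # serve real files if they exist, otherwise hand off to MediaWiki
            try_files $uri $uri/ @mediawiki;
        }

        location @mediawiki {
            rewrite ^/mywiki/(.*)$ /mywiki/index.php?title=$1&$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;   # placeholder socket path
        }

    The infinite-redirect symptom usually means a rewrite's output is matched again by the same rule; a named location (@mediawiki) sidesteps that. Note that nginx never matches the query string in location/rewrite patterns, so the (?:\?(.*))? capture does nothing - $args carries the query string instead.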


  • Preventing endless forwarding with two routers

    - by jarmund
    The network in question looks basically like this:

                                 /---- Inet1
                                /
        H1 ---[111.0/24]--- GW1 ---[99.0/24]--- GW2 ---- Inet2

    Device explanation:

    H1: host with IP 192.168.111.47.
    GW1: Linux box with IPs 192.168.111.1 and 192.168.99.2, as well as its own route to the internet.
    GW2: generic wireless router with IP 192.168.99.1 and its own route to the internet.
    Inet1 & Inet2: two possible routes to the internet.

    In short: H1 has more than one possible route to the internet. H1 is supposed to access the internet only via GW2 when that link is up, so GW1 has some policy-based routing just for H1:

        ip rule add from 192.168.111.47 table 991
        ip route add default via 192.168.99.1 table 991

    While this works as long as GW2 has a direct link to the internet, the problem occurs when that link is down. What then happens is that GW2 forwards the packet back to GW1, which again forwards it back to GW2, creating an endless loop of TCP ping-pong. The preferred result would be that the packet was simply dropped.

    Is there something that can be done with iptables on GW1 to prevent this? Basically, an iptables-friendly version of "if a packet comes from GW2, but originated from H1, drop it".

    Note 1: it is preferable not to change anything on GW2.
    Note 2: H1 needs to be able to talk to both GW1 and GW2, and vice versa, but only GW2 should lead to the internet.

    TLDR: H1 should only be allowed internet access via GW2, but still needs to be able to talk to both GW1 and GW2.

    EDIT: The interfaces on GW1 are br0.105 for the '99' network and br0.111 for the '111' network. The solution may or may not be obnoxiously simple, but I have not been able to produce the proper iptables syntax myself, so help would be most appreciated.

    PS: This is a follow-up question from this question
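    A sketch of the kind of rule I have in mind, using the interface names from the EDIT (untested): a forwarded packet that claims to come from H1 but arrives in on the 99-side interface can only be GW2 bouncing it back, so drop exactly that case:

        # on GW1: legitimate H1 traffic enters on br0.111; anything from H1
        # entering on br0.105 must have been forwarded back by GW2
        iptables -A FORWARD -i br0.105 -s 192.168.111.47 -j DROP

    H1 can still reach GW2 itself (that traffic leaves on br0.105 rather than entering on it), and GW2's replies carry GW2's own source address, so they should be unaffected.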


  • How can I totally flatten a PDF in Mac OS on the command line?

    - by Matthew Leingang
    I use Mac OS X Snow Leopard. I have a PDF with form fields, annotations, and stamps on it. I would like to freeze (or "flatten") that PDF so that the form fields can't be changed and the annotations/stamps are no longer editable. Since I actually have many of these PDFs, I want to do this automatically on the command line. Some things I've tried/considered, with their degree of success:

    - Open in Preview and Print to File. This creates a totally flat PDF without changing the file size. The only way to automate it seems to be a kludgy UI-based AppleScript, though, which I've been trying to avoid.
    - Open in Acrobat Pro and use a JavaScript function to flatten. Again, not sure how to automate this on the command line.
    - Use pdftk with the flatten option. But this only flattens form fields, not stamps and other annotations.
    - Use cupsfilter, which can create PDF from many file formats. Like pdftk, this flattened only the form fields.
    - Use cups-pdf to hook into the Mac's print server and save a PDF file instead of printing. I used the MacPorts version. The resulting file is flat but huge: I tried this on an 8MB file, and the flattened PDF was 358MB! Perhaps this can be combined with a Ghostscript call, as in "Ubuntu Tip: Howto reduce PDF file size from command line".

    Any other suggestions would be appreciated.
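    For the last option, the Ghostscript pass referenced in that tip is along these lines - a sketch; the /ebook preset and file names are placeholders:

        # re-distill the huge cups-pdf output down to a reasonable size
        gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
           -dNOPAUSE -dQUIET -dBATCH \
           -sOutputFile=flattened-small.pdf cups-pdf-output.pdf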


  • Possible to have different SSLCACertificateFiles under different Location in Apache (client side ssl certs)

    - by Mikko Ohtamaa
    I am setting up Apache to do smartcard authentication. The smartcard login is based on client-side SSL certificates handled by an OS driver. I currently have just one smartcard provider, but in the future there may be several of them.

    I am not sure how Apache 2.2 handles client-side certificates per Location. I did some quick testing, and it seemed that only the last SSLCACertificateFile directive took effect, which doesn't sound right. Is it possible to have a different SSLCACertificateFile per Location in Apache (2.2, 2.4) as described below, or does the SSL protocol somehow limit you to one SSLCACertificateFile per IP?

    Below is an example of how I wish to handle several SSLCACertificateFile directives on the same server, to allow users to log in with different smartcard providers:

        <VirtualHost 127.0.0.1:443>
            # Real men use mod_proxy
            DocumentRoot "/nowhere"
            ServerName local-apache
            ServerAdmin [email protected]

            SSLEngine on
            SSLOptions +StdEnvVars +ExportCertData

            # Server-side HTTPS configuration
            SSLCertificateFile /etc/apache2/certificate-test/server.crt
            SSLCertificateKeyFile /etc/apache2/certificate-test/server.key

            # Normal SSL site traffic does not require verify client
            SSLVerifyClient none
            SSLVerifyDepth 999

            # Provider 1
            <Location /@@smartcard-login>
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/ca.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Provider 2
            <Location /@@smartcard-login-provider-2>
                # For real
                SSLVerifyClient require
                SSLCACertificateFile /etc/apache2/certificate-test/provider2.crt

                # Apache does not natively pass forward headers
                # created by SSLOptions +StdEnvVars,
                # so we pass them forward to Python using RequestHeader
                # from mod_headers
                RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
                RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
            </Location>

            # Connect to Plone ZEO client1 running on fg
            ProxyPass / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/
            ProxyPassReverse / http://localhost:8080/VirtualHostBase/https/local-apache:443/folder_sits/sitsngta/VirtualHostRoot/
        </VirtualHost>


  • Securing NTP: which method to use?

    - by Harry
    Can someone good at NTP configuration please share which method is the best/easiest to implement a secure, tamper-proof version of NTP? Here are some difficulties:

    1. I don't have the luxury of my own stratum 0 time source, so I must rely on external time servers.
    2. Should I read up on the Autokey method, or should I try to go the MD5 route?
    3. Based on what I know about symmetric cryptography, the MD5 method relies on a pre-agreed set of keys (symmetric cryptography) between the client and the server, and so is prone to man-in-the-middle attack. Autokey, on the other hand, does not appear to work behind a NAT or a masquerading host. Is this still true, by the way? (The reference link I have is dated 2004, so I'm not sure what the state of the art is today.)
    4. Are public Autokey-talking time servers available?

    I browsed through the NTP book by David Mills. The book looks excellent in a way (coming from the NTP creator, after all), but the information therein is also overwhelming. I just need to configure a secure version of NTP first, and maybe later worry about its architectural and engineering underpinnings. Can someone please wade me through these drowning NTP waters? I don't necessarily need a working config from you, just info on which NTP mode/config to try, and maybe also a public time server that supports that mode/config. Many thanks, /HS
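    For what it's worth, my understanding is that the MD5 route ends up as just a few lines - a sketch, assuming the server operator has agreed key 1 with me out of band (which is exactly the pre-shared-key limitation mentioned above):

        # /etc/ntp.keys -- format: <key-id> <type> <key>; must match the server's copy
        1 M s3cretAgreedOffline

        # /etc/ntp.conf
        keys /etc/ntp.keys
        trustedkey 1
        server ntp.example.org key 1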


  • Is it better to always copy and delete, rather than move?

    - by nbolton
    Generally speaking, I find myself panicking when I realise that if I cancel a file move, it could leave the target or source incomplete. This question applies to Windows and Unix-based platforms.

    I can never remember exactly how the move command works in either case. For example, if you're moving a directory, does it copy the entire directory and then delete it afterwards, or does it copy then delete each file individually? I always realise after typing something like

        mv verybigdir dest

    that I really should have typed

        cp -R verybigdir dest && rm -r verybigdir

    (where the && operator only moves on to the next command if the first was successful) - or is this pointless? What happens exactly when I press Ctrl+C half way through a move? Likewise, what exactly happens on Windows when I press the cancel button?

    I can't count the number of times I've moved something (the last time was when using svn) and ended up with two directories with split contents. I guess the answer is difficult, because not all applications move groups of files in the same way.
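    As an aside, rsync offers a middle ground between the two idioms - a sketch: it can be restarted after a Ctrl+C without recopying what already arrived, and it can do the delete for you:

        # copy first, delete only if the copy fully succeeded
        rsync -a verybigdir/ dest/verybigdir/ && rm -r verybigdir
        # or let rsync remove each source file once it has been transferred
        # (this leaves the empty directory tree behind)
        rsync -a --remove-source-files verybigdir/ dest/verybigdir/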


  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines. A worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to a worker only once, processed in place, and then deleted, without being copied somewhere else - to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.)

    Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free the HDD of the file once processed" part - but maybe I'm missing something...

    A preferable solution should work with files (from the POV of our business logic code), not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to - except Java.)
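    To make the requirement concrete, a hypothetical shell-only worker loop (host names, paths and the process_file command are placeholders, and it assumes each file needs to reach only one worker):

        #!/bin/sh
        INBOX="source-host:/var/spool/incoming"
        WORKDIR="/var/spool/work"

        while true; do
            # pull pending files over exactly once, deleting them from the source
            rsync -a --remove-source-files "$INBOX/" "$WORKDIR/"
            for f in "$WORKDIR"/*; do
                [ -e "$f" ] || continue
                # process in place, then free the HDD only on success
                process_file "$f" && rm -f -- "$f"
            done
            sleep 10
        done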


  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution would be for the development team to update the code in the application, but in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force each user account to use a specific NIC. Anybody have any ideas on this?

    EDIT 1: I think I have a hack to work around my specific problem, though I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to make a config file and a custom program directory for each user, then add a VMware NIC for each user and have each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address, but it's really messy, and all the VMware NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.
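    The per-subnet trick can at least be pinned down with persistent routes rather than relying on Windows to pick a NIC - hypothetical addresses and interface index here:

        rem send each user's server subnet out a specific (VMware) NIC,
        rem identified by its interface index from "route print"
        route -p add 192.168.105.0 mask 255.255.255.0 192.168.105.1 if 12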


  • lenovo x1 carbon windows 8 frequent wifi disconnect issue

    - by hIpPy
    I'm having frequent wifi disconnects on my Lenovo X1 Carbon Touch laptop. I got this laptop 2 months back, and it has been happening ever since: about 3-5 times a day, 10 times a week on average. I have Frontier FiOS internet. Whether power is connected or not does not matter. Once I get disconnected, I try the following to connect again, in this order:

    1. turn Airplane mode on and off,
    2. troubleshoot network problems (Windows troubleshooter),
    3. restart the laptop.

    I'd find that the WiFi adapter would get disabled, and sometimes Windows troubleshooting would help, but more often than not I'd end up restarting the laptop. A week back, I upgraded my wifi network adapter drivers (now Intel, version 15.5.6.48, 10/3/2012). I still get disconnected frequently, but turning Airplane mode on and off gets me connected again, so the driver update did help. Windows 8 is updated. None of my other devices (Nexus and iPhone phones, Nexus 7 and iPad tablets) have wifi issues when my laptop gets disconnected.

    Config:

        Intel(R) Centrino(R) Advanced-N 6205 (WiFi network adapter)
        Microsoft Windows 8 Pro
        Microsoft Windows [Version 6.2.9200] x64-based PC
        LENOVO System Model: 3443CTO X1 Carbon Touch

    I recently noticed this log message in Event Viewer when I got disconnected:

        Your computer was not assigned an address from the network (by the DHCP Server)
        for the Network Card with network address 0x[XXXXXXXXXXXX]. The following error
        occurred: 0x79. Your computer will continue to try and obtain an address on its
        own from the network address (DHCP) server.

    Any idea?


  • 'Future-proof' Live Audio Capture & Broadcast

    - by maxpowers
    I'm looking to implement some live audio broadcasting functionality within a Ruby on Rails site for a client, and I was hoping to get some input from people who have tackled this type of thing before. Essentially what I need to do is capture and record a user's audio (via microphone, line in, etc.), then stream it to 1,000+ listeners with very little latency - sub 2 seconds if possible. So it looks like we've got 3 parts:

    1. Web-based audio capture (likely with Flash or JS)
    2. A server to accept the audio feed and stream it to listeners (likely Icecast or Wowza)
    3. The actual audio player (maybe HTML5 with Flash as a fallback? Maybe this jPlayer fork)

    Does RTMP make sense here? Or maybe HTTP? What's the most 'future-proof' way to make this happen? Building with mobile in mind, but still wanting to be able to stream to anyone. I've found lots of potentially helpful threads and software, but I'm struggling to get an idea of how it all fits together. I'm a front-end guy and way out of my comfort zone, so if anyone has insights to offer, I'd love to hear them.
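    As a rough sanity check of part 2, an Icecast mount can be fed straight from a capture device with ffmpeg - a sketch with placeholder credentials and mount point. Note that plain HTTP/Icecast streaming typically buffers well past the 2-second latency target, which is where RTMP (or, looking forward, WebRTC) earns its keep:

        # capture from the default ALSA device, encode MP3, push to Icecast
        ffmpeg -f alsa -i default -c:a libmp3lame -b:a 128k \
               -content_type audio/mpeg \
               -f mp3 icecast://source:hackme@stream.example.com:8000/live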


  • when to upgrade a server to more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single-, dual-, quad-, etc., processor machines, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching...

    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or may only one client's request be processed at any one time, with that request divided across up to 3 cores depending on whether the math-crunching process can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?


  • Transferring domain from one registrar to another

    - by Macha
    I have a domain from my old web host, which came free with my hosting account. After a few years, I am moving to a VPS. Most of my other domains were registered with Namecheap, so for those it was just a matter of changing a few DNS records. However, given that my old host does not provide me with a DNS control panel, and I don't want to be paying a full hosting bill for just domains, I'm now looking into transferring this one.

    My old host says there will be a charge of $15 payable to them. Namecheap's page seems to imply you don't need the current registrar to do anything, but it also seems to be based on sending an email to the address listed in whois. Of course, my old host has WhoisGuard on the domain, so the only email on it is [email protected] (and not a unique [email protected], just [email protected]), which doesn't go to me. Again, there doesn't seem to be an option to disable this.

    So, is it a case of paying my old host's fee and paying again for the domain from Namecheap, or is there some other way to transfer my domain? (I'm not really sure which of the trilogy sites this is best for.)


  • What to do with old laptop screens?

    - by Lord Torgamus
    This question is inspired by another SU question I came across earlier today: "What to do with old hard drives?" It made me think about two long-dead laptops I have with perfectly good screens still inside. One is a Dell Inspiron 5100 and the other is an Averatec E1200, but responses need not be geared towards those particular models' screens.

    Rules, based heavily on the original question's - objectives and suggestions to keep in mind when you post an answer:

    - It should showcase your geekiness, be plain ol' fun, serve a social purpose or benefit the community.
    - Your answer need not be limited to only one screen. For a really good answer, I'll go out and buy additional leftover screens.
    - Your answer need not be limited to one project per screen.
    - If additional accessories need be purchased, make sure they are common. Don't tell me to get a moon rock or something.
    - The projects you suggest should serve a useful purpose; art is nice, but functional art is way better.

    Thanks in advance, folks.

    EDIT: Found another related question: "Fun projects to do with an old 17" LCD monitor".

    EDIT 2: I, for one, am enjoying the new outpouring of creativity here. Best fifty bucks... I mean, rep points... I ever spent.

    EDIT 3: That does it. At the end of the week, there was a tie for most votes between the accepted answer and the game platform answer. The game platform answer was cooler, but less reasonable as a project to actually do; in other words, it was more moon rocky. Unfortunately, I think fencepost had the best comment on the topic, which is that displays on their own have no good interface. Thanks for playing, everyone!


  • Centralized backup method recommendation for SMEs with various OSes

    - by Akinator
    Hi, I was wondering what in your opinion is the "best" method for having "everything" backed up in the following situation. We are an SME with 10 computers in total: three of them are Macs, and the rest are Windows (1 Vista, 4 Windows 7 and 2 XPs). I'm very open to what the method should be, but you should also consider the following:

    - Very limited resources.
    - Quite "small" bandwidth: 4 Mbps download and 0.4 Mbps upload (yep, that's it), though this might get a little bit better.
    - One of the main things to back up would be the mail. All Windows computers use Outlook, mainly 2003; there is one Mac that uses Outlook too (for Mac, of course - not 2011 yet).
    - We also have to back up the files: not a huge amount, very few very big files, and they are very organized (by machine).

    What I would like is to hear your opinions as to which would be the best method (or combination of methods - preferably one, of course). We are not sure what we need and I'm open to suggestions, though an online (cloud-based) application would be great - but remember, the bandwidth is unbearable. Last thing to consider: we would like to do weekly backups (unless the method is very easy, of course).

    Thanks in advance!! I tried to be as specific as possible, but if anything is needed I'll gladly update - please ask for any clarification needed! Please avoid any answers like "upgrade all to Windows 7 and throw away your Macs" :) - ours may not be an ideal situation, but it is what it is, and right now it would be impossible for us to change it, for a lot of reasons.


  • Changing the View of a Folder When It Is Opened from Finder

    - by user60044
    When I was running OS X 10.4, I had all my folders beautifully organized: positions set, with a record of which ones should open in List view and which in Icon view, etc. When I transitioned to 10.6, I found that the OS ignored all that information, and it imposes the awful behaviour that changing the view for some folder propagates up to all parent folders that don't have their views locked.

    The only way I can think of to get this functionality back is to write a program that, given a root directory, will enumerate it, setting the views based upon a static template I have. The other way I thought I could accomplish this is with Folder Actions; but aside from the fact that I don't know AppleScript, it seems folder actions cannot be inherited. Since I have thousands of folders involved here, all descending from a single root, I could not possibly add that action to each of them manually.

    What I would like is a folder action such that, whenever I open a folder from Finder: if it contains any JPG, GIF, etc. image files, it automatically opens in Icon view, with a reasonable size for the number of images; if the folder contains only folders, it opens in List view. Does anyone have anything that could do this for me? Thank You, -- Mark


  • credit or minclass does not work well with pam_cracklib.so in common-password (openSUSE 11.3)

    - by Mario
    I'm trying to implement password complexity on my PDC. It's a Samba PDC with an openLDAP backend. I tried cracklib-check, but it looks like I would need a decent, localized version of the password dictionary, since the library out there usually comes in English. I also have another consideration: we want to allow users to use any kind of password - even a dictionary-based one - as long as it mixes lower/upper-case letters, digits, and other characters such as '$' or '_' (pam_cracklib.so calls these classes). So here is my /etc/pam.d/common-password:

        #password requisite pam_pwcheck.so nullok cracklib
        password  requisite pam_cracklib.so minclass=4 reject_username
        ##password requisite pam_cracklib.so \
        ##    dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1 reject_username
        password  optional  pam_gnome_keyring.so use_authtok
        password  required  pam_unix2.so use_authtok nullok

    The first commented line (with #) was the default configuration of openSUSE 11.3. The second (with leading ##) is another configuration I use when the minclass=4 line is commented out. By the way, I have 'check password script' = /usr/local/sbin/crackcheck -d /usr/share/cracklib/pw_dict and passdb backend = ldapsam:ldap://127.0.0.1 parameters in smb.conf, and cracklib-check works fine too.

    So here is the test I conduct: I log on to Windows and then change my password. Sometimes it works fine, in that it throws an error message - which is what I wanted - but a simple password with only lower-case letters can still pass the Windows password change. Maybe I should build a new dictionary that incorporates local vocabulary, but a guy out there (raise your hand please if you read this :) ) also experienced the same trouble with English words. Besides, what we really want is to let users choose a password with 2 or 3 classes out of the 4. Is there a bug or something with the pam module in openSUSE 11.3? Thank you in advance. Regards, Mario
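    For the record, the '2 or 3 classes out of 4' requirement itself is normally expressed with minclass alone - a sketch, with an illustrative minlen:

        # require characters from at least 3 of the 4 classes, whichever they are
        password requisite pam_cracklib.so retry=3 minlen=8 minclass=3 reject_username

    One hunch worth checking: password changes coming from Windows are handled by Samba's own 'check password script'/LDAP machinery and may never pass through pam_cracklib at all, which would explain weak passwords slipping through while the PAM stack looks correct.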


  • How can I automatically edit an email before auto-forwarding it?

    - by Miss Cellanie
    Is there a way to automatically edit emails before forwarding them? I'm getting email notifications from Foursquare that I want to send to my phone as text messages. I know how to send messages to my number using an email address (I'm in the US and use Verizon), but I don't know how to strip out unnecessary formatting, like HTML, before the email gets sent.

    What I want:

    - The ability to strip out HTML.
    - The ability to start forwarding at a specific part of the email based on a search (e.g., I might know that Foursquare starts their messages with "Hey hey!" and only want content after that phrase occurs).
    - The ability to truncate at 160 characters.

    Things I've tried:

    - I'm not using Foursquare DM pings through Twitter, because I have two Twitter accounts and Twitter only allows a phone to be linked to one account at a time. I'm not willing to change which account it's linked to.
    - I tried to work around the Twitter limitation using Google Voice, but they don't support SMS short codes.

    I'll compromise on the features I want if I can find a free solution that doesn't require me to set up my own server. I do think this is computer-related, because it will happen on my computer, not on my phone.

    edit: My current setup is Gmail in Firefox 3.0.15 on Windows XP. I use a netbook as my only personal computer. However, if the only way to accomplish this well is to set up my own mail server or something, I would still want to know that.


  • How to make Firefox file associations consistent with Ubuntu file associations?

    - by wbharding
    This seems to be a pretty commonly Googled question, but one for which there are no answers:

    - http://www.linuxquestions.org/questions/linux-software-2/firefox-download-mime-types-378902
    - http://www.birkit.com/content/kubuntu-linux/internet/firefox/fix-file-associations-in-firefox.html

    being two links amongst the many. The gist of what I want to accomplish is to have Firefox understand the file associations of what I download, without me having to manually map all of them myself. Gnome knows the file extensions, so I would have expected Firefox to just use those already-known file mappings to open the right thing (as I presume Chrome does). But it doesn't. At least not for me, using Firefox 4, and not by default. When I click on a downloaded file right now, Firefox always asks me what application should be used to open it.

    A handful of Google results tell me that I can reassociate my file extensions by deleting ~/.mozilla/firefox/[profile name]/mimeTypes.rdf, but while deleting that file does in fact result in a new mimeTypes file being generated, the new mimeTypes is just as barren as the old one had been.

    Based on the number of unanswered questions in the Googlesphere, I know this is a very common problem for Ubuntu users, but it seems to be one for which nobody has chimed in with a good solution. Maybe Super User can finally be the panacea for us all?


  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for a while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:

    - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be "tunnelled" through the optimizer.
    - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer... this is where a lot of speed could be gained.
    - Finally, items that compress well would be compressed on the cloud side, and items already on the optimizer would not be sent again - kind of like rsync or proxy servers.

    So, is there something that can do this? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
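    The 'SSH, Squid, Linux and duct tape' version is real enough to sketch: run Squid on a cloud host near the far endpoint, then reach it through a compressed SSH tunnel (host name and ports are placeholders):

        # -C compresses the tunnel, -N skips the remote shell,
        # -L forwards local port 3128 to Squid on the cloud host
        ssh -C -N -L 3128:localhost:3128 user@cloud-host.example.com &
        export http_proxy=http://localhost:3128/

    Squid's cache on the far side covers the 'do not resend what is already there' case; the multiple-parallel-connections acceleration is what the commercial appliances add on top.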


  • What switch should we use for PCoIP?

    - by Jay R.
    We have a small lab space that seats 10 people and has 20 machines. Each machine is set to 1920x1200 resolution because the user apps are best used at that resolution. Currently the machines are all located close enough to monitors that a DisplayPort cable will reach, but the pending lab remodel positions them around 80 feet or more away, in racks. Our proposed solution is to use PCoIP: we purchased 10 PCoIP portals and 20 PCoIP host cards, and we plan to set up a dedicated network to handle just the PCoIP traffic.

    After testing just one portal and one host card with a cheap 1G switch from a local office supply store, we were left with a less than good impression of its usefulness in our lab. The frame rates were not spectacular and the mouse seemed jerky. Our concern is that we can't get away with the cheap 1G gear from the store, because adding more machines to the switch will just make the user experience worse.

    What switch would be recommended to best support our PCoIP situation? We will need to plug in at least 30 cables based on just those machines. Is there a particular feature to look for that makes a difference? Is there a switch that works best with PCoIP?

    Added info: the reporting webapp for the host card shows maximum bandwidth usage to be 220000 kbps, and the average appears to be around 180000 kbps. The reverse direction is much lower, around 15000 kbps.

