Search Results

Search found 16150 results on 646 pages for 'si keep'.


  • CORS Fails on CloudFront Distribution with Nginx Origin

    - by kgrote
    I have a CloudFront distribution set up with an Nginx server as the origin (a Media Temple DV server, to be specific). I enabled the Access-Control-Allow-Origin: * header so fonts will work in Firefox. However, Firefox throws a CORS error for fonts loaded from this CloudFront/Nginx distribution.

    I created another CloudFront distribution, this time with an Apache server as the origin, and set Access-Control-Allow-Origin: * also. Firefox displays fonts from this origin without issue. I've set up a demo page here: http://kristengrote.com/cors-test/

    When I perform a curl request for the same font file from each distribution, both files return almost exactly the same headers:

        Apache Origin
        -------------
        HTTP/1.1 200 OK
        Server: Apache
        Content-Type: application/font-woff
        Content-Length: 25428
        Connection: keep-alive
        Date: Wed, 11 Jun 2014 23:23:09 GMT
        Last-Modified: Tue, 10 Jun 2014 22:15:56 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=2592000
        Expires: Fri, 11 Jul 2014 23:23:09 GMT
        Access-Control-Allow-Origin: *
        Access-Control-Allow-Methods: GET, HEAD
        Access-Control-Allow-Headers: *
        Access-Control-Max-Age: 3000
        X-Cache: Hit from cloudfront
        Via: 1.1 210111ffb8239a13be669aa7c59f53bd.cloudfront.net (CloudFront)
        X-Amz-Cf-Id: QWucpBoZnS3B8E1mlXR2V5V-SVUoITCeVb64fETuAgNuGuTLnbzAhw==
        Age: 487

        Nginx Origin
        ------------
        HTTP/1.1 200 OK
        Server: nginx
        Content-Type: application/font-woff
        Content-Length: 25428
        Connection: keep-alive
        Date: Wed, 11 Jun 2014 23:15:23 GMT
        Last-Modified: Tue, 10 Jun 2014 22:56:09 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=2592000
        Expires: Fri, 11 Jul 2014 23:15:23 GMT
        Access-Control-Allow-Origin: *
        Access-Control-Allow-Methods: GET, HEAD
        Access-Control-Allow-Headers: *
        Access-Control-Max-Age: 3000
        X-Cache: Hit from cloudfront
        Via: 1.1 fa0dd57deefe7337151830e7e9660414.cloudfront.net (CloudFront)
        X-Amz-Cf-Id: E2Z3VOIfR5QPcYN1osOgvk0HyBwc3PxrFBBHYdA65ZntXDe-srzgUQ==
        X-Accel-Version: 0.01
        X-Powered-By: PleskLin
        X-Robots-Tag: noindex, nofollow

    So the only conclusion I can draw is that something about Nginx is preventing Firefox from recognizing CORS and allowing the fonts via CloudFront. Any ideas on what the heck is happening here?
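
    One way to narrow this down might be to request the font from each distribution with an explicit Origin header, the way Firefox does, and compare only the CORS-related headers that come back; a rough sketch (the hostnames and font path are placeholders for the real CloudFront URLs):

        for host in dxxxxxapache.cloudfront.net dxxxxxnginx.cloudfront.net; do
          echo "== $host =="
          # dump response headers only, and keep the status line plus CORS/Vary headers
          curl -s -D - -o /dev/null \
               -H "Origin: http://kristengrote.com" \
               "http://$host/fonts/example.woff" | grep -iE '^(HTTP|access-control|vary)'
        done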

    Read the article

  • Unreadable, corrupted NTFS partition - lost clusters reported

    - by Eduardo Martinez
    Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB Samsung SATA disk (connected via USB on XP SP3). Unfortunately PM is unable to fix them. PM shows the drive as being NTFS and detects the used space and the drive name correctly, but the PM browser (right click on partition, browse...) won't show anything, as if the disk were empty.

    Windows Explorer doesn't even pick up the drive name and reports 'the file or directory is corrupted and unreadable'.

    PTDD Partition Table Doctor (demo) tells me the boot sector is fine, and I can see all the disk's content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least).

    I've also tried:

        - photorec 6.11.3: it actually started to extract files, but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options).
        - Find and Mount: the intellectual scan went well and the only partition on the disk was detected, but after mounting it as p: I got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected.' Find and Mount can also create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this would keep the extracted files/folders structure intact?

    I'm starting to think the disk is pretty screwed and my chances of recovering this data are slim. Please, someone enlighten me about that marvellous piece of software I am missing :-) Thanks in advance.
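
    If it comes to further recovery attempts, it is usually safer to work from a sector-level image of the failing disk rather than the disk itself; a minimal sketch with GNU ddrescue from a Linux live environment, assuming the USB disk shows up as /dev/sdb1 and another drive has room for the image:

        sudo ddrescue -f -n /dev/sdb1 /mnt/backup/samsung.img /mnt/backup/samsung.map    # quick pass, skip bad areas
        sudo ddrescue -d -f -r3 /dev/sdb1 /mnt/backup/samsung.img /mnt/backup/samsung.map  # retry the bad areas
        sudo mount -o loop,ro /mnt/backup/samsung.img /mnt/recovered                     # read-only mount attempt (needs ntfs-3g)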

    Read the article

  • Cannot connect to local network shares when connected to VPN. Error: "the user name could not be found"

    - by Nick G
    I keep finding that on our small company LAN (7 users, 3 servers) some servers keep becoming "not accessible" for the purposes of file sharing. They display the message "\\SERVER is not accessible. You might not have permission to use this network resource. The user name could not be found." But I don't know why "the user name could not be found", as all the machines are on the same domain and the PDC and BDC seem to be behaving OK.

    EDIT: The VPN seems to be the cause. It turns out I can see the server if I use the IP address (\\1.2.3.4\ etc.) or the fully qualified Active Directory name (e.g. \\server.domainname.local), but not if I use the server name on its own, or a mapped network drive originally created from the "short" name.

    Oddly though, my machine has no issue resolving the server's DNS name: I can ping the machine name OK and it immediately comes back with the IP, yet nslookup fails. It seems to be a problem with how Windows looks up machine names when connected to VPNs. When I'm connected to a VPN, Windows appears to use the DNS associated with the VPN and not the one on the domain controller. This behaviour seems incorrect to me, as surely it would mean that connecting to any VPN breaks the ability to look up local machine names for servers, printers, etc.

    So I guess the real question now is: how can I make my machine still search the local Active Directory DNS (the PDC) even when connected to a VPN? More info in my comments below.
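
    A quick way to confirm which resolver is answering while the VPN is up is to query the short name with the default settings and then directly against the domain controller; 10.0.0.10 below is a placeholder for the PDC's real address:

        nslookup server                           # uses whatever DNS the VPN adapter pushed
        nslookup server 10.0.0.10                 # asks the domain controller's DNS directly
        nslookup server.domainname.local 10.0.0.10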

    Read the article

  • How do I make a Data Validation drop-down exclude blanks?

    - by Iszi
    Related: How can I use non-adjacent cells on another sheet for a Data Validation drop-down, and only show non-blank values?

    For now, I've worked around the above problem by re-arranging my sheet so all the Data Validation source cells are in one range. I'm leaving the above question open, though, because I think it still poses an interesting problem. However, the issue now is that the Data Validation drop-down isn't working the way I expected it to (and the way I believe others are telling me it should). Even though I've got everything into one named range, Excel still shows blanks in a drop-down that references that range.

    Setup, on Sheet1:

        A1 = (blank)   B1 = Header
        A2 = 1         B2 = Value1
        A3 = 2         B3 = Value2
        A4 = 3         B4 = Value3
        A5 = 4         B5 = (empty)
        A6 = 5         B6 = (empty)
        A7 = 6         B7 = (empty)

    Sheet1!B2:B7 is named Validation. Sheet2!A1 is set to use Data Validation with a Source of =Validation and an in-cell drop-down. The drop-down in Sheet2!A1 shows:

        Value1
        Value2
        Value3
        .
        .
        .

    (The dots represent blank lines.) How can I get rid of these blank lines in the in-cell drop-down, while still including Sheet1!B5:B7 in the Data Validation source?

    Note: I nuked the sheet and tried it again without column A on Sheet1 (putting the values from column B in the above example into column A), and it worked fine. Adding column A back, though, brought the blanks back into the Data Validation drop-down. What do I need to do to keep column A as I want it and keep the in-cell drop-down clean?

    Read the article

  • Split horizon, route filtering, and having RIPv2 announce a non-attached route to host

    - by Paul
    Routers A, B & C live at 10.1.1.1, 10.1.1.2 and 10.1.1.3 on a /24 metro Ethernet subnet. Each router also has its own private subnet on another interface. Router B's private subnet links through a firewall to a 10.20.20.0 network at another organization. Router B redistributes to A and C several static routes for hosts on 10.20.20.0. However, a new host 10.20.20.5/32 must be reached via a different path that goes through router C. I know that C can advertise this host-based route with no problem, but I'd like to keep all my 10.20.20.x static routes in one place. So, how can B tell A via RIPv2 to send packets for 10.20.20.5/32 to C?

    So far it looks like I need no ip split-horizon on router B's 10.1.1.2 interface, perhaps because B has already learned from C other routes with a next hop of 10.1.1.3. But how does RIPv2 split horizon with no auto-summary and network 10.0.0.0 really work? If B learns a route to ANY 10.x.x.x network or host from A or C, is that enough for split horizon to keep it from redistributing ip route 10.20.20.5 255.255.255.255 10.1.1.3? And if I want to suspend split horizon only for this one new host, how do I filter out the mess of regurgitated routes that B advertises when I try no ip split-horizon? Thanks much.

    Read the article

  • EasyPHP Web Setup

    - by Dominique
    I've tried to set up EasyPHP locally and make it visible from the web via DynDNS, which I've already done successfully many times before, but now it just doesn't work; maybe I've forgotten something. (The "server" is a common workstation.)

    Here is what I have done:

        1) Installed EasyPHP (with an index.php/html file in the WWW folder)
        2) Changed the port in the config to port 80
        3) Forwarded port 80 to the server IP in my router configuration
        4) Added the server to the router DMZ

    I also tried removing the antivirus/firewall. I've installed PortListener, pointed it at port 80, and when I access "myname.dyndns.com" it says:

        Client connected
        GET / HTTP/1.1
        Host: xyz.dyndns-remote.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

    So the server is accessible from the web and receives the connection successfully, but my browser says the connection failed and shows nothing.
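
    Since PortListener proves the request reaches the machine, the next step is to separate router/NAT problems from EasyPHP problems; a quick sketch, with the hostname and LAN IP below as placeholders:

        curl -v http://xyz.dyndns-remote.com/    # from a machine outside the LAN
        curl -v http://192.168.1.10/             # from inside the LAN, straight at the workstation's Apache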

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year, we're looking into our options:

        1. Replace the 300 PCs with full Windows 7 PCs (£100k+?) - no use of terminal server (our current model)
        2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server - cheaper clients, but Terminal Server CALs required?
        3. Keep the 300 PCs and replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required?
        4. Keep the 300 PCs, remove the hard drives, and make use of a PXE-bootable "thin client" to connect to our terminal server

    If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation - I'm curious what the current trend is for this problem.

    Edit: Option 5 - create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server. Is Windows PE licence-free in such a model?

    Read the article

  • HLS video segmenting complications. How to create a transport stream with ffmpeg

    - by Agzam
    I have h264 videos, and currently we're using Apple's HTTP Live Streaming tools and mediafilesegmenter to segment these files. What I need to do is switch to an alternative segmenter based on this very popular open-sourced segmenter.

    The problem is that this segmenter does not take just any video; it only takes MPEG-TS, so I have to convert my h264 videos to TS first. I can do that with ffmpeg. I'm using this:

        ffmpeg -i encoded.mp4 -vcodec h264 -i encoded.mp4 -sameq -acodec aac -strict experimental -f mpegts output.ts

    But this creates a much larger output, and the reason is that Apple's segmenter keeps the same video codec - AVC - and the same audio codec - AAC, whereas ffmpeg changes the video format to MPEG Video. The question is: can I somehow keep the same AVC video codec and still convert the video to a transport stream? My goal is to keep the same video quality and the same video codecs that Apple's mediafilesegmenter does.

    UPD: Okay... it seems that ffmpeg CAN split videos into segments:

        ffmpeg -i encoded.mp4 -c copy -map 0 -vbsf h264_mp4toannexb -f segment -segment_time 10 -segment_list test.m3u8 -segment_format mpegts segment%d.ts

    That still has one problem: it doesn't create an HTTP Live Streaming index file (-segment_list creates a file with a list of segments, but it doesn't look like an HLS index), so you still have to create the index file yourself.
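
    For what it's worth, more recent ffmpeg builds also ship an hls muxer that writes the .ts segments and the index in one pass while leaving the AVC/AAC streams untouched; a sketch, assuming a build with that muxer compiled in:

        ffmpeg -i encoded.mp4 -c copy -bsf:v h264_mp4toannexb \
               -f hls -hls_time 10 -hls_list_size 0 output.m3u8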

    Read the article

  • Browser sends HTTP request with Range

    - by nute
    I have a local testing environment in a Fedora virtual machine. Strangely, resources (CSS and JS files) don't seem to work. Looking at Firebug, I see that the browser sends the HTTP request with "Range: bytes=0-". The server responds with either an empty 200 OK or an empty 206 Partial Content. Here is an example:

        Response Headers
        Date: Mon, 23 Nov 2009 23:33:26 GMT
        Server: Apache/2.2.13 (Fedora)
        Last-Modified: Thu, 19 Nov 2009 22:58:55 GMT
        Etag: "18-3aec-478c14dbee138"
        Accept-Ranges: bytes
        Content-Length: 15084
        Content-Range: bytes 0-15083/15084
        Connection: close
        Content-Type: text/css

        Request Headers
        Host: fedora.test
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5
        Accept: text/css,*/*;q=0.1
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Referer: http://fedora.test/pictures/
        Cookie: __utma=26341546.1613992749.1258504422.1258569125.1258752550.4; __utmz=26341546.1258504422.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=tqf8jfmc77qihe97rl4tmhq685
        Range: bytes=0-
        If-Range: "18-3aec-478c14dbee138"

    I don't know if the browser is sending the wrong request, or if the server is doing this. Requests made to the outside (such as Google Analytics) work fine. This is running on Fedora 11 in VirtualBox, with Apache and PHP. The files are being served through the "shared folders" feature of VirtualBox (could that be related?). No error logs could help me.
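
    Since the files live on a VirtualBox shared folder, one thing worth ruling out is Apache's use of sendfile/mmap, which is known to misbehave on some virtual and network filesystems; a sketch of disabling both for a test (the directives are standard Apache, the conf path is an assumption for Fedora):

        printf 'EnableSendfile Off\nEnableMMAP Off\n' | sudo tee /etc/httpd/conf.d/vbox-shared.conf
        sudo /etc/init.d/httpd reload    # or: apachectl graceful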

    Read the article

  • Visual Studio 2005 won't install on Windows 7

    - by Peanut
    Hi, my question relates very closely to this question: http://superuser.com/questions/34190/visual-studio-2005-sp1-refuses-to-install-in-windows-7. However, that question hasn't provided the answer I'm looking for.

    I'm trying to install Visual Studio 2005 onto a clean Windows 7 (64-bit) box, but I keep getting the following error when the 'Microsoft Visual Studio 2005' component finishes installing:

        Error 1935. An error occurred during the installation of assembly
        'policy.8.0.Microsoft.VC80.OpenMP,type="win32-policy",version="8.0.50727.42",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="x86"'.
        Please refer to Help and Support for more information. HRESULT: 0x80073712.

    On my first attempt to install VS 2005 I got a warning about compatibility issues. I stopped at this point, downloaded the necessary service packs and restarted the installation from the beginning. Ever since then I just get the error message above. I keep rolling back the installation and trying again, but it's always the same error. Any help would be very much appreciated. Thanks.

    Read the article

  • stunnel client uses improper SNI when talking to Apache

    - by Huckle
    I have stunnel listening on port 80 and acting as a client connecting to Apache listening on port 443. The configuration is below. What I'm finding is that if I connect to localhost:80 the connection is fine, but if I connect to 127.0.0.1:80 the request is rejected with a 400. When I check Apache's logs, they indicate that stunnel is using localhost as the SNI both times, while the HTTP request lists localhost in one case and 127.0.0.1 in the other.

    Is it possible to tell stunnel to either use whatever is in the HTTP request, or to somehow configure two clients, each with different SNI values?

    stunnel.conf:

        debug = 7
        options = NO_SSLv2

        [xmlrpc-httpd]
        client = yes
        accept = 80
        connect = 443

    Apache error.log:

        [error] Hostname localhost provided via SNI and hostname 127.0.0.1 provided via HTTP are different

    Apache access.log:

        "GET / HTTP/1.1" 200 2138 "-" "Wget/1.13.4 (linux-gnu)"
        "GET / HTTP/1.1" 400 743 "-" "Wget/1.13.4 (linux-gnu)"

    wget:

        $ wget -d localhost
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: localhost
        Connection: Keep-Alive
        ---request end---

        $ wget -d 127.0.0.1
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: 127.0.0.1
        Connection: Keep-Alive
        ---request end---

    Edit: Apache config - nothing out of the ordinary, it's just a virtual host listening on 443:

        <VirtualHost *:443>
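
    One possible direction, sketched below and untested here: stunnel's client mode accepts an sni option (added around stunnel 4.45), so two client services bound to separate loopback addresses could present different SNI values; it only helps if the localhost and 127.0.0.1 traffic can really be pointed at different accept sockets.

        [xmlrpc-localhost]
        client  = yes
        accept  = 127.0.0.1:80
        connect = 443
        sni     = localhost

        ; a second loopback address, so the two services stay distinct
        [xmlrpc-loopback2]
        client  = yes
        accept  = 127.0.0.2:80
        connect = 443
        sni     = 127.0.0.1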

    Read the article

  • Export Outlook Express XP into Outlook 2007 Windows 7

    - by Jason Moore
    I've searched the forum and it seems my need is a little different from the posts which have already been made. I have an old XP laptop on which I was using Outlook Express for one of my mail accounts (a work account). I also ran regular Outlook for another (personal). I want to have both of these accounts on my new PC, which is running Windows 7 with Office 2007.

    I used the Windows transfer cable and things worked fairly well: my regular Outlook files all came over, but nothing for Outlook Express. I would really like to take this opportunity to somehow export my OE files into Outlook 2007, and I would like to keep them separate from the account that already imported into Outlook.

    Question 1: Is there a way to import my OE files into Outlook 2007?

    Question 2: Is there a way to have two separate email accounts in Outlook without combining them? Basically I want to have a work email and a personal one and keep them separate.

    If I can't have two separate emails with Outlook, can anyone suggest something which would allow me to export my old Outlook (the old personal emails) into another program, so that I can at least use my work email on Outlook 2007? Hopefully this isn't too confusing.

    Read the article

  • .htaccess error "not allowed here" for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default site file to use:

        AllowOverride AuthConfig

    But I always get the error message "not allowed here" when putting any instructions in the .htaccess file.

    EDIT:

    The default file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/

            <Directory />
                Options FollowSymLinks
                Order allow,deny
                Allow from all
                AllowOverride All
            </Directory>

            <Directory /var/www/>
                Options Indexes FollowSymLinks Includes
                #AllowOverride All
                #AllowOverride Indexes AuthConfig Limit FileInfo
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    .htaccess:

        #Options +FollowSymlinks

        # Prevent directory listing
        Options -Indexes

        # Prevent direct access to files
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>

        # SEO URL Settings
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]

    PHP info (apache2handler):

        Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch
        Apache API Version = 20051115
        Server Administrator = webmaster@localhost
        Hostname:Port = hw-linux.homework:80
        User/Group = www-data(33)/33
        Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100
        Timeouts = Connection: 300 - Keep-Alive: 15
        Virtual Server = Yes
        Server Root = /etc/apache2
        Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
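
    For what it's worth, the .htaccess above uses Options, FilesMatch and mod_rewrite directives, which need more override categories than AuthConfig alone; a sketch of widening the override and re-checking the config, assuming the default Debian Lenny layout:

        # In the <Directory /var/www/> block of /etc/apache2/sites-available/default,
        # broaden the override so the directive types used above are delegated:
        #
        #     AllowOverride AuthConfig FileInfo Indexes Limit Options
        #
        # then verify the syntax and reload:
        sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload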

    Read the article

  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep the search engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, as the accessing hosts can vary.

    After some reading of the Apache config and other SF postings, I settled on the following design, which relies on restricting access to specific domain names only. The default virtual host would be disabled in the Apache config as follows, relying on Apache's behaviour of using the first virtual host as the site default:

        <VirtualHost *:80>
            # Anything matching this should be silently ignored.
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsiteone.com
            DocumentRoot /var/www/secretsiteone.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsitetwo.com
            ...
        </VirtualHost>

    Then each team member can add the domain names to their local /etc/hosts:

        xx.xx.xx.xx secrethostone.com

    My question is: is the above technique good enough to achieve the stated goals, especially restricting SE bots, or is it possible that bots would work around it?

    Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed here: How to disable default VirtualHost in apache2?, so the same question applies to that technique too.

    Also please note: the content is not highly secretive - the idea is not to devise something that is hack-proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it's released, and to prevent SE bots from indexing it.
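
    A quick way to confirm the catch-all behaves as intended, from any machine that can reach the VPS (xx.xx.xx.xx and the hostnames are placeholders): the first request should land on the empty default vhost, the second on the real site.

        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: not-a-real-site.example' http://xx.xx.xx.xx/
        curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: secretsiteone.com' http://xx.xx.xx.xx/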

    Read the article

  • Can you share offline files cache with two user accounts?

    - by Joel Coehoorn
    I have a new laptop that I use for both home and work. It runs Windows 7 Ultimate and is joined to the domain at work. It is okay to use this laptop for both work and personal activities, and I even have an account set up on the local machine, in addition to the work domain account, specifically to help keep the two separate.

    At home, I have a file server that I use to share files and printers with my wife's laptop, this new laptop, and my old desktop, which will now become the family machine. My MP3 library is on there, among other things.

    What I want to do is use the Windows Offline Files feature to keep a synced copy of my music library on the laptop. That part is easy. What's tricky is that I want to share this offline cache between both the local account on the laptop and my work domain account. I could set them both up separately, but then I have two copies of a very large music library stored locally. This also means twice the sync burden, when the domain account is rarely connected to the file share. I really want to be able to sync from the local machine account only, and have the domain account be able to use the synced files.

    I know where the offline file cache is kept (\Windows\CSC) and I can find the cached files (not encrypted), but permissions on the cache are set up oddly, so using that cache directly is not trivial. Any ideas appreciated.

    Read the article

  • Mapped network drive on logout

    - by Robuust
    I'm using a script to keep a mapped network connection alive, but of course the mapped connection is gone when I log out. The point is that I'm running this on Windows Server 2008 R2, where I use Remote Desktop to log in on the administrator account. However, it should remain logged in and not remove the mapped connection, as this script takes care of not being logged out of MS Office 365 SharePoint.

    Is there a way to keep the mapped network location (L:) available after logout, so the script can keep the connection alive?

        # Create an IE Object and navigate to my SharePoint Site
        $ie = New-Object -ComObject InternetExplorer.Application
        $ie.navigate('https://xxx.sharepoint.com/')

        # Don't need the object anymore, so let's close it to free up some memory
        $ie.Quit()

        # Just in case there was a problem with the web client service
        # I am going to stop and start it, you could potentially remove this
        # part if you want. I like it just because it takes out a step of
        # troubleshooting if I'm having problems.
        Stop-Service WebClient
        Start-Service WebClient

        # We are going to set the $Drive variable here, this is just
        # going to tell the command what drive letter to map you can
        # change this to whatever you want (if you change it to a
        # drive that is already mapped it will overwrite it, so be careful.
        $Drive = "L:"

        # You can change the drive destination to whatever you want,
        # it has to be a document library or folder of course.
        $DrvDest = "https://xxx.sharepoint.com/files/"

        # Here is where we create the object to map the network drive and
        # then map the network drive
        $net = New-Object -ComObject WScript.Network;
        $net.mapnetworkdrive($Drive,$DrvDest)

        # That is the end of the script, now schedule this with task
        # scheduler and every so often and you should be set.

    Read the article

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is to keep the larger sizes of these photos. Each directory has 31 to 66 files in it: thumbnails and larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with:

        rm */example.jpg

    I initially thought that it would be easy to delete the thumbnails too, but the problem is they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg, so I ran:

        rm */photo*s.jpg

    but it turned out that some of the files named photoXs.jpg were actually the larger ones, not the smaller. Argh.

    So what I want to do is scan each directory for file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each one, and keep those above a threshold. The problem? In one directory the large file will be 1.1 MB and the thumb 200 KB; in another the large is 200 KB and the small 30 KB. Even worse, the files really are mostly named photo1.jpg - so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming them first, and if possible I'd prefer to keep them in their folders.

    I was almost resolved to just doing this all manually, but then thought I'd ask here. How would you do this task?
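
    For what it's worth, a sketch of the pairwise comparison in plain bash, assuming the pairs really are photoN.jpg / photoNs.jpg and GNU stat is available; it only echoes the rm commands so the output can be inspected before anything is deleted:

        for dir in */ ; do
          for small in "$dir"photo*s.jpg ; do
            [ -e "$small" ] || continue                 # skip directories with no matching files
            big="${small%s.jpg}.jpg"                    # photo1s.jpg -> photo1.jpg
            [ -e "$big" ] || continue                   # no counterpart, leave it alone
            if [ "$(stat -c%s "$small")" -lt "$(stat -c%s "$big")" ]; then
              echo rm "$small"                          # the 's' file really is the thumbnail
            else
              echo rm "$big"                            # names are swapped; the plain file is the smaller one
            fi
          done
        done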

    Read the article

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with mysql and freeradius 2. The boss wants to use the accounting information and while this is all set up and appears to be working, I've found that not all the accounting information is present. Since our users may be connected for more than 24 hours at a time we keep this in here, it will reset some attributes daily so that the accounting packets work correctly. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA And was looking specifically at the following section. Since our users may be connected for more than 24 hours at a time we keep this in here, it will reset some attributes daily so that the accounting packets work correctly always fail { rcode = fail } always reject { rcode = reject } always ok { rcode = ok simulcount = 0 mpp = no } However, that link references freeradius 1 and I can't find this in the radius.conf file for freeradius 2. What does it do and could it be a reason I'm missing data? EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although they are replicating, it's only a master/slave configuration. If the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced due to the always module. Is there anything special I need to configure in the mikrotik AP's or freeradius 2 for clients connected 24/7.

    Read the article

  • Postfix relay gives error 450 while it should be 550

    - by dieter-be
    Hi, we use postfix to do relaying. We get several messages like the following in /var/log/mail (slightly edited):

        Apr 13 13:30:29 linserver postfix/smtpd[1064]: NOQUEUE: reject: RCPT from unknown[$ip]: 450 4.1.1 <[email protected]>: Recipient address rejected: undeliverable address: host domain.be [$ip] said: 550 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table (in reply to RCPT TO command); from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<BLUESTREAK.domain.local>

    Now, when the master mail server gives a 550, claiming that the user does not exist, I want the relay to give a 550 back as well. What happens now is that it seems to return a 450, causing clients to keep messages queued, keep trying, and only notify users after a certain period has passed.

    According to what I could find, the soft_bounce setting could cause this, but we have not enabled that option (and by default it's off according to the postfix docs). It might also have something to do with the *_reject_code postconf values, especially since the log message complains about the unknown IP. But as you can see in the postconf output below, smtpd_sender_restrictions and smtpd_client_restrictions are empty. So even if restrictions were applied there, 550 is the "worst" error going on, so that's what I expect to be returned to the client.

    postconf: http://sprunge.us/JYgB

    Thanks, Dieter
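
    If the 450 comes from recipient address verification (the "undeliverable address" wording suggests reject_unverified_recipient), the reply code it uses is controlled by unverified_recipient_reject_code, which defaults to 450; a sketch of hardening it, with the usual caveat that 550 turns temporary probe failures into permanent bounces:

        postconf -e 'unverified_recipient_reject_code = 550'
        postfix reload
        postconf unverified_recipient_reject_code    # confirm the running value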

    Read the article

  • Script to unstick VNC keys?

    - by MidnightLightning
    I've run into dropped key events when connecting via VNC clients, leading to a "stuck key" (usually a meta key like CTRL or ALT), and searching around, the common answer on how to solve it is "press and release each meta key individually until the problem resolves". However, I've found this to be annoying and time-consuming. Plus, on a bad connection, the server sometimes misses the "key up" event for the meta key again and still keeps the key stuck.

    So I'm looking for an automated way to do this. From a script on the client side or the server side, is there a way to trigger "key up" events for all the meta keys (CTRL, ALT, SHIFT, and WIN/CMD, both left and right versions)? Or just a command to release all keys the server thinks are down at the moment? Or some scripted way to at least list which keys the server end thinks are down, so I know which key to keep pressing and releasing to try to release it?

    I've got a Mac on the server end, so a Mac/Linux solution would be needed for my situation.
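
    If the server end were X11-based, xdotool could release the usual suspects in one shot, as in the sketch below; the Mac server in this setup would need an equivalent scripted against the Quartz event APIs instead, which is noted only as a direction, not a tested recipe.

        for key in Control_L Control_R Alt_L Alt_R Shift_L Shift_R Super_L Super_R; do
          xdotool keyup "$key"
        done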

    Read the article

  • Multicast image restoration with adaptive speed

    - by Clinton Blackmore
    I'm curious to know if there are any tools for restoring disk images (or even transferring files) via multicast - for any platform, especially if the project has source available - where the multicast rate adjusts itself on the fly.

    On the Mac, all the multicast solutions I am aware of (such as Deploy Studio, and NetRestore before it) make use of multicast ASR (Apple Software Restore), which has one glaring deficiency: you have to set the multicast speed before you start sending a disk image over the network, and that speed is locked in. Either your clients can keep up and restore, or they can't.*

    It seems to me that it must be possible for the multicast server to adjust the data rate, so you basically say "start sending this image", clients connect, and, if they can't keep up, they tell the server so it slows down. (Likewise, I'd expect the server to try speeding up if no client is having difficulties keeping up, and I'd expect to be able to cap that maximum throughput so that other network activities can go on without being resource-starved.)

    So, what sort of tools are out there? For Linux? Windows? Is there something for the Mac I've overlooked?

    [It just kills me that, by the time you get multicast up and going at a good speed to restore a lab, you could have unicasted the data to all the computers and been done.]

    * There is a little leeway involved. I think individual clients can say, "I missed a little bit of data" and get it, and they can opt to listen in the next time the image is sent over the network, but on the whole, if they missed it the first time around, you have to image the machine again, and there is no time savings.
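
    One open-source option that may be worth evaluating is udpcast, whose sender throttles to the slowest connected receiver rather than to a preset rate; a rough sketch of pushing a raw image with it (the option names are udpcast's, but check the version packaged for your distribution):

        # on the machine holding the image
        udp-sender --file lab-image.img --min-receivers 20 --max-bitrate 800m
        # on each client being restored
        udp-receiver --file /dev/sda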

    Read the article

  • NVIDIA graphics card troubleshooting

    - by Berry Langerak
    My dad has an HP Pavilion laptop which is over 6 years old. It contains an onboard graphics card, an NVIDIA GeForce 8400M GS. For a week or two he has been getting BSODs in Windows Vista. The screen has also been acting up: strange lines and figures keep creeping up, but only once in a while - not always.

    Thinking that it might be the drivers he had recently upgraded, I decided to downgrade the driver, with no luck at all. I then uninstalled the driver and reinstalled the current stable version, but alas, to no avail. Windows kept crashing over the course of the last two weeks, and thinking it might be Windows itself breaking down, I decided to upgrade his Vista installation to Windows 7, as I've been hearing good things about it. Even that didn't help, and the strange lines and figures keep popping up.

    I've opened the laptop itself, thinking it might simply be overheating because of clogging in the vents, but that's not the case; it's clean as a whistle.

    Now for the actual question: does anyone have a decent flowchart on how to troubleshoot such issues? It might well be the video card itself, as the strange lines and figures sometimes even appear during BIOS and POST, but I'm not sure.

    Read the article

  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003, which are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010, and when working with these worksheets I see this warning:

        "The following features in this workbook are not supported by earlier versions of Excel.
        These features may be lost or degraded when you save this workbook in the currently
        selected file format. Click Continue to save the workbook anyway. To keep all of your
        features, click Cancel and then save the file in one of the new file formats."

        "Significant loss of functionality"

        "One or more cells in this workbook contain data validation rules which refer to values
        on other worksheets. These data validation rules will not be saved."

    When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4").

    This makes no sense to me whatsoever. One, the workbook was created in Excel 2003, so obviously we couldn't have implemented a feature that didn't exist. Secondly, the modifications we're making don't involve changing the validation rules at all. Thirdly, the complaint Excel is making is incorrect: all of the rules are on the same worksheet as the target.

    As if the story wasn't bizarre enough, I went ahead and saved the worksheet with Excel 2010, then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched!

    My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue when working in compatibility mode?

    Read the article
