Search Results

Search found 16560 results on 663 pages for 'far'.

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP web app to a new IIS7-based server. The site contains some .js files which hold JavaScript but also classic ASP in <% %> tags, with a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what the file could be like:

        var arrHOFFSET = -1;
        var arrLeft = "<";
        var arrRight = ">";
        <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %>
        addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","","");
        <% Else %>
        <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %>
        <% Else %>
        addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","","");
        <% End If %>
        <% End If %>
        defineSubmenuProperties(135,"center","center",-3,0,"","","","","","","");

    Currently this file (named custom.js, for example) starts throwing JS errors, because the server doesn't recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the ASP parser, but I am not sure how to go about doing this. Here is what I've tried so far. Under the server node in IIS, under Handler Mappings, I created a new Script Map with the following settings:

        Request Path: *.js
        Executable: C:\Windows\System32\inetsrv\asp.dll
        Name: ASPClassicInJSFiles
        Mapping: Invoke handler only if request is mapped to: File
        Verbs: All verbs
        Access: Script

    I also created a similar handler under the site node itself. Under MIME Types, .js is defined as application/x-javascript. None of these work. If I simply rename the file to have an .asp extension then things work; however, this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename, search, and replace is the last option I have.
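
    For reference, a script map created in the IIS7 UI usually ends up as a handler entry in the site's web.config; a minimal sketch of the equivalent entry looks like the following (the handler name is arbitrary, and the exact attributes are an assumption worth checking against the server's existing *.asp mapping):

        <configuration>
          <system.webServer>
            <handlers>
              <!-- hypothetical mapping: run *.js through the classic ASP ISAPI module -->
              <add name="ASPClassicInJSFiles" path="*.js" verb="GET,HEAD,POST"
                   modules="IsapiModule"
                   scriptProcessor="%windir%\system32\inetsrv\asp.dll"
                   resourceType="File" />
            </handlers>
          </system.webServer>
        </configuration>

    Comparing what the UI wrote into web.config against a copy of the built-in *.asp entry may show which attribute differs.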

  • Nginx Retry of Requests (Nginx - HAProxy Combination)

    - by vaibhav
    I wanted to ask about Nginx retrying requests. I have Nginx running in front, which sends requests to HAProxy, which then passes them on to the web server where the request is processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped while HAProxy reloads, so I wanted a way to simply retry them from Nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only one server in the upstream block, so I put the same server in twice, and fewer requests are now dropped (only when I have the same server twice in the upstream; if I list the same server 3-4 times, drops increase). So, firstly: when a request cannot establish a connection from Nginx to HAProxy during the reload, the connection is seen as an error and the request is dropped straight away. How can I specify either the time after a failure at which Nginx should retry the request against the upstream, or the time before which Nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - didn't help - as well as mail_retires and fail_timeout, and also putting the same upstream server in twice, which gave the best results so far.)

    Nginx conf file:

        upstream gae_sleep {
            server 128.111.55.219:10000;
        }

        server {
            listen 8080;
            server_name 128.111.55.219;
            root /var/apps/sleep/app;
            # Uncomment these lines to enable logging, and comment out the following two
            #access_log /var/log/nginx/sleep.access.log upstream;
            error_log /var/log/nginx/sleep.error.log;
            access_log off;
            #error_log /dev/null crit;
            rewrite_log off;
            error_page 404 = /404.html;
            set $cache_dir /var/apps/sleep/cache;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://gae_sleep;
                client_max_body_size 2G;
                proxy_connect_timeout 30;
                client_body_timeout 30;
                proxy_read_timeout 30;
            }

            location /404.html {
                root /var/apps/sleep;
            }

            location /reserved-channel-appscale-path {
                proxy_buffering off;
                tcp_nodelay on;
                keepalive_timeout 55;
                proxy_pass http://128.111.55.219:5280/http-bind;
            }
        }
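
    For context, the retry knobs in question are usually combined along these lines; a minimal sketch, assuming the single HAProxy backend listed twice so proxy_next_upstream has somewhere to retry (the timeout values are illustrative, not tested against this setup):

        upstream gae_sleep {
            # same backend twice: gives proxy_next_upstream a "next" entry to try
            server 128.111.55.219:10000 max_fails=3 fail_timeout=2s;
            server 128.111.55.219:10000 max_fails=3 fail_timeout=2s;
        }

        location / {
            # retry connection errors and timeouts against the next upstream entry
            proxy_next_upstream error timeout;
            proxy_connect_timeout 5;
            proxy_pass http://gae_sleep;
        }

    Note that fail_timeout is both the window in which max_fails failures are counted and the period the server is then considered down, so keeping it shorter than the HAProxy reload window matters.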

  • Cliq Wireless questions

    - by Nathan Adams
    Here's the deal: I am by no means a Linux expert, even less when it comes to the Android OS, but let's see if we can't solve this problem. The problem I am having is that the Cliq has a Broadcom chip, and in order to use the wireless card you must first insert the module into the kernel. Fine:

        # insmod /system/lib/dhd.ko
        # lsmod
        dhd 164936 0 - Live 0xbf000000

    BUT netcfg (or ifconfig in busybox) does not recognize that there is a wireless adapter there:

        # netcfg
        lo     UP   127.0.0.1   255.0.0.0       0x00000049
        dummy0 DOWN 0.0.0.0     0.0.0.0         0x00000082
        rmnet0 UP   14.67.164.2 255.255.255.252 0x00001043
        rmnet1 DOWN 0.0.0.0     0.0.0.0         0x00001002
        rmnet2 DOWN 0.0.0.0     0.0.0.0         0x00001002
        usb0   DOWN 0.0.0.0     0.0.0.0         0x00001002

        # busybox ifconfig
        lo     Link encap:Local Loopback
               inet addr:127.0.0.1 Mask:255.0.0.0
               UP LOOPBACK RUNNING MTU:16436 Metric:1
               RX packets:282 errors:0 dropped:0 overruns:0 frame:0
               TX packets:282 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:18754 (18.3 KiB) TX bytes:18754 (18.3 KiB)

        rmnet0 Link encap:Ethernet HWaddr EE:83:E8:B4:4A:ED
               inet addr:14.x.x.x Bcast:14.67.164.3 Mask:255.255.255.252
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:7148 errors:0 dropped:0 overruns:0 frame:0
               TX packets:7659 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:2609236 (2.4 MiB) TX bytes:908575 (887.2 KiB)

    For giggles, if we attempt to launch wpa_supplicant anyway, we get this:

        # wpa_supplicant -Dwext -ieth0 -c/data/misc/wifi/wpa_supplicant.conf
        ioctl[SIOCSIWPMKSA]: No such device
        ioctl[SIOCSIWMODE]: No such device
        Could not configure driver to use managed mode
        ioctl[SIOCGIFFLAGS]: No such device
        Could not set interface 'eth0' UP
        ioctl[SIOCGIWRANGE]: No such device
        ioctl[SIOCGIFINDEX]: No such device
        CTRL-EVENT-STATE-CHANGE id=-1 state=0
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWENCODEEXT]: No such device
        ioctl[SIOCSIWENCODE]: No such device
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 7 value 0x0 - Failed to disable WPA in the driver.
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 5 value 0x0 -
        ioctl[SIOCSIWAUTH]: No such device
        WEXT auth param 4 value 0x0 -
        ioctl[SIOCSIWAP]: No such device
        ioctl[SIOCGIFFLAGS]: No such device

    And in dmesg we get:

        <4>[18300.494065] dhd_oob_enable_intr : enable
        <4>[18305.019976] dhd_net_start failed bus is not ready
        <4>[18305.020278] dhdsdio_probe: dhd_net_start failed!

    Do I need to specify the firmware with insmod? Why am I trying to control the interface manually instead of through the Android API? Because the Android API doesn't support ad-hoc connections as far as I can tell, while the card, I am sure, most certainly can.
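
    If the "bus is not ready" complaint in dmesg is firmware-related, note that many Broadcom dhd builds accept explicit firmware and NVRAM paths as module parameters; whether this particular build does is an assumption, and the paths below are hypothetical placeholders following the pattern Android's init.rc normally uses:

        # hypothetical paths - check what the stock init.rc passes to insmod on this device
        insmod /system/lib/dhd.ko "firmware_path=/system/etc/wifi/fw.bin nvram_path=/proc/calibration"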

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are:

    - HA protection from storage failure.
    - Data accessibility while one of the DB servers is undergoing software updates (e.g. planned outages for Windows Update / SQL Server service packs).
    - Must not involve much in the way of hardware procurement.

    The application is an ASP.NET web application, and each of its users has their own database instance. I've seen two main options: SQL Server failover clustering and SQL Server mirroring. I understand that SQL Server failover clustering requires purchasing a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database mirroring seems the cheaper option (as it only requires two database servers and a simple witness box), but I've heard it doesn't work well when you have a large number of databases. The application I'm developing gives each client their own database, so there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place.

    My final point concerns how failover works with respect to client connections. SQL Server failover clustering uses MSCS, which means the cluster is invisible to clients: a connection attempt might fail during the failover, but a simple reconnect will have it working again. Mirroring, however, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server, it tries the secondary server. I'm wondering how this works with respect to connection pooling in ASP.NET applications - does client connection failover mean there's a potential 2-second pause (assuming a 2000ms TCP timeout policy) while the connection pool tries the primary server on every connection attempt?

    I read somewhere that mirroring can be used on top of MSCS, which means the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and no changes would need to be made to the client, not even the connection string) - however, I'm finding it hard to get documentation or white papers on this approach. If true, then the best method would be mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?
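
    For context on the client-awareness point: with plain database mirroring, ADO.NET clients typically name the mirror explicitly in the connection string via the Failover Partner keyword, along these lines (server and database names are placeholders):

        Server=sqlprimary;Failover Partner=sqlmirror;Database=ClientDb;Integrated Security=True;

    As I understand it, SqlClient also remembers which partner last answered for a given connection pool, so after a failover new connections should go straight to the mirror rather than timing out against the dead primary on every open.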

  • My multi-boot can't boot to XP: 'resumeobject' is missing

    - by GwenKillerby
    In my multi-boot setup, booting to Vista and 7 goes fine, but when I try to boot to XP I get an error:

        Windows failed to start. A recent hardware or software change might be the cause.
        To fix the problem:
        1. Insert your Windows installation disc and restart your computer.
        2. Choose your language settings, and then click "Next."
        3. Click "Repair your computer."
        If you do not have this disc, contact your system administrator or computer manufacturer for assistance.

        File: \NTLDR
        Status: 0xc000000e
        Info: The selected entry could not be loaded because the application is missing or corrupt.

    See below. Clearly the resumeobject seems to be missing in the XP entry ("Real-mode Boot Sector"), only I don't know how to restore it. Vista is on C:, Win7 is on F: (as is the bootmgr???), and WinXP is on E:.

    What I've tried:

    [1] I've used about 5 Windows discs, that is, the Recovery Consoles from real XP install CDs and 3 virtual Recovery Consoles. All failed. The real CDs work ONE time, but won't let me finish; I only got as far as "fixboot E:". Then they shut the laptop down, I kid you not. On the next startup, all 5 CDs ask me for some admin password that I've never set!
    [2] I have VisualBCD and EasyBCD, but the most obvious things I tried there didn't solve the problem, so now I don't exactly know what to do with them.
    [3] I CAN boot into XP with the FIX NTLDR workaround of http://milescomer.com/tinyempire.com/notes/ntldrismissing.htm, but it doesn't fix it permanently.

    QUESTION: How do I fix it permanently?

    bcdedit /enum output:

        Windows Boot Manager
        --------------------
        identifier              {bootmgr}
        device                  partition=F:
        path                    \bootmgr
        description             Windows Boot Manager
        locale                  en-US
        default                 {current}
        displayorder            {current}
                                {812e27a9-27b7-11e4-8fb4-dfa8174ae8dc}
                                {812e27ac-27b7-11e4-8fb4-dfa8174ae8dc}
        timeout                 30
        resume                  No

        Windows Boot Loader
        -------------------
        identifier              {current}
        device                  partition=C:
        path                    \Windows\system32\winload.exe
        description             Vista
        locale                  nl-NL
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {73d8b5bc-2764-11e4-b181-806e6f6e6963}

        Windows Boot Loader
        -------------------
        identifier              {812e27a9-27b7-11e4-8fb4-dfa8174ae8dc}
        device                  partition=F:
        path                    \Windows\system32\winload.exe
        description             Daisy Etta
        locale                  en-US
        osdevice                partition=F:
        systemroot              \Windows
        resumeobject            {b8c234a4-27b0-11e4-b8b3-806e6f6e6963}

        Real-mode Boot Sector
        ---------------------
        identifier              {812e27ac-27b7-11e4-8fb4-dfa8174ae8dc}
        device                  partition=E:
        path                    \NTLDR
        description             XP

    Thank you.
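
    For reference, the 0xc000000e on \NTLDR usually points at the loader files rather than the resumeobject (real-mode entries have no resumeobject). A sketch of recreating the XP entry from an elevated prompt, assuming XP really lives on E: and that ntldr, ntdetect.com, and boot.ini are present in E:\ (the {guid} placeholder stands for the identifier the first command prints):

        bcdedit /create /d "XP" /application bootsector
        rem use the {guid} printed by the command above:
        bcdedit /set {guid} device partition=E:
        bcdedit /set {guid} path \ntldr
        bcdedit /displayorder {guid} /addlast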

  • Why are emails sent from my applications being marked as spam?

    - by Brian
    Hi. I have 2 web apps running on the same server. The first is www.nimikri.com and the other is www.hourjar.com. Both apps share the same IP address (75.127.100.175). My server is through a shared hosting company. I've been testing my apps, and at first all my emails were being delivered to me just fine. Then a few days ago every email from both apps got dumped into my spam box (in Gmail and Google Apps). So far the apps have only been sending emails to me and nobody else, so I know people aren't manually flagging them as spam.

    I did a reverse DNS lookup for my IP and the results I got were these:

        100.127.75.in-addr.arpa NS DNS2.GNAX.NET.
        100.127.75.in-addr.arpa NS DNS1.GNAX.NET.

    Should the reverse DNS lookup point to nimikri.com and hourjar.com, or are they set up fine the way they are? I noticed these 2 lines in the email header:

        Received: from nimikri.nimikri.com
        From: Hour Jar <[email protected]>

    Would the different domain names be causing Gmail to think this is spam? Here is the header from one of the emails. Please let me know if any of this looks like a red flag for spam. Thanks.

        Delivered-To: [email protected]
        Received: by 10.231.157.85 with SMTP id a21cs54749ibx;
                Sun, 25 Apr 2010 10:03:14 -0700 (PDT)
        Received: by 10.151.130.18 with SMTP id h18mr3056714ybn.186.1272214992196;
                Sun, 25 Apr 2010 10:03:12 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from nimikri.nimikri.com ([75.127.100.175])
                by mx.google.com with ESMTP id 28si4358025gxk.44.2010.04.25.10.03.11;
                Sun, 25 Apr 2010 10:03:11 -0700 (PDT)
        Received-SPF: neutral (google.com: 75.127.100.175 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=75.127.100.175;
        Authentication-Results: mx.google.com; spf=neutral (google.com: 75.127.100.175 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
        Received: from nimikri.nimikri.com (localhost.localdomain [127.0.0.1])
                by nimikri.nimikri.com (8.14.3/8.14.3) with ESMTP id o3PH3A7a029986
                for <[email protected]>; Sun, 25 Apr 2010 12:03:11 -0500
        Date: Sun, 25 Apr 2010 12:03:10 -0500
        From: Hour Jar <[email protected]>
        To: [email protected]
        Message-ID: <[email protected]>
        Subject: [email protected] has invited you to New Event
        MIME-Version: 1.0
        Content-Type: text/html; charset=us-ascii
        Content-Transfer-Encoding: 7bit
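
    One detail worth noting in that header: spf=neutral means Gmail found no SPF record and fell back to a "best guess". If missing SPF is part of the problem (an assumption - there are other spam signals), a minimal TXT record authorizing the server's IP for each sending domain would look something like:

        hourjar.com.  IN TXT "v=spf1 ip4:75.127.100.175 ~all"
        nimikri.com.  IN TXT "v=spf1 ip4:75.127.100.175 ~all"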

  • !! 0xc01a00d !! aka Vista won't boot

    - by Chris
    Answer: parts of the hard drive are corrupted. All of my user's code was checked in, so I'm just going to format the box.

    One of my users has an HP DV5-1235dx laptop running Windows Vista Professional x64. Last night, our WSUS server pushed out a few updates, including "Security Update for Windows Vista for x64-based Systems (KB960859)". When we try to boot the laptop today, a black screen with white text comes up displaying:

        xxx/169894 (something)

    where xxx increments rapidly and "something" is some DLL or registry key. Eventually that stops and the screen displays:

        !! 0xc01a00d !!
        35566/169894 (\Registry\Machine\COMPONENTS\DerivedDat...)

    No other computers that received this update are displaying the same error. So far I've tried running CHKDSK off of HBCD. It repaired a thing or two, but the computer still doesn't boot. I tried repairing the Windows install from the Vista CD, but I get a black screen with white text displaying something along the lines of:

        0 No Emulation System Type 00
        1 No Emulation System Type 00
        Select one of the above

    Booting in Last Known Good Configuration doesn't work. Booting in Safe Mode freezes at:

        Loading Windows Files
        [snip]
        Loaded: \windows\system32\drivers\crcdisk.sys
        Please wait...

    My next step is trying to boot Safe Mode with Command Prompt and run rstrui.exe. While I do that, does anybody have any guidance?

    Edit: Booting into Safe Mode with Command Prompt will not work; see "Booting in Safe Mode" above.

    Edit 2: I managed to boot from the Vista DVD. I ran the system repair, and now I get a black screen with white text saying:

        !! 0xc0000034 !!
        290/169894 (_0000000000000000.cdf-ms)

    Edit 3: I ran the system repair again, and it attempted to repair my hard drive. It failed.

        Problem Signature:
        Problem Event Name: Startup Repair V2
        Problem Signature 01: External Media
        Problem Signature 02: 6.0.6001.18000.6.0.6001.18000
        Problem Signature 03: 4
        Problem Signature 04: 196611
        Problem Signature 05: CorruptVolume
        Problem Signature 06: NoBootFailure
        Problem Signature 07: 0
        Problem Signature 08: 0
        Problem Signature 09: unknown
        Problem Signature 10: 1168
        OS Version: 6.0.6002.2.2.0.256.1
        Locale ID: 1033

  • nginx, php-fpm, and multiple roots - how to properly try_files?

    - by Carson C.
    I have a server context which is rooted in a login application. The login application handles, well, logins, and then returns a redirect to /app on the same server if a login is successful. The application is rooted elsewhere, which is handled by the location block shown here:

        location ^~ /app {
            alias /usr/share/nginx/www/website.com/content/public;
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                include fastcgi_params;
            }
        }

    This works just fine; however, the $uri being passed to PHP still contains /app, even though I am using alias rather than root. Because of this, the try_files directive fails to a 404 unless I link app -> ./ in /usr/share/nginx/www/website.com/content/public. It's obviously silly to have that link in there, and if that link ever gets lost - bam, dead website without an obvious cause.

    The next thing I tried was to remove the try_files directive entirely. This allowed me to rm the app link in my /public folder, and PHP had no problem locating the file and executing it. I used that to dump my $_SERVER global from PHP, and found that

        "SCRIPT_FILENAME" => "/usr/share/nginx/www/website.com/content/public/index.php"

    when the browser URI is /app. This is exactly right. Based on my fastcgi_params below, this led me to believe that try_files $request_filename =404; should work, but no dice: nginx still doesn't find the file and returns 404. So for right now it only works without any try_files directive - PHP finds the file, whereas try_files could not. I understand this may be a PHP security risk. Can anyone indicate how to move forward? The nginx logs don't contain anything relating to the failed try_files attempt, as far as I can see.

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $server_https;
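
    For what it's worth, this reads like the long-standing nginx quirk where try_files resolves its arguments against root rather than the alias target. A common workaround (a sketch, not guaranteed across nginx versions) is to test the alias-resolved path directly instead of using try_files:

        location ^~ /app {
            alias /usr/share/nginx/www/website.com/content/public;
            location ~ \.php$ {
                # emulate "try_files ... =404" against the alias-resolved path
                if (!-f $request_filename) { return 404; }
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                include fastcgi_params;
            }
        }

    An if that does nothing but return is one of the few uses the "if is evil" guidance considers safe.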

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:

    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink that limit (offline if need be).
    - Avoids out-of-kernel modules.
    - R/W or read-only COW snapshots. If it's block-level snapshots, the filesystem should be synced and quiesced during a snapshot.
    - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).

    Not required, but nice-to-haves:

    - Transparent compression, per-file or per-subtree. Even better if, like NetApp's, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or via user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
    - Space-efficient packing of small files.

    I tried ZFS on Linux, but the limitations were:

    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS do not scale with the number of devices in a raidz vdev.
    - Cannot add disks to raidz vdevs.
    - Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks.

    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
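
    On the LVM2 point specifically, mixing RAID levels per logical volume on one set of PVs is possible with lvcreate's raid segment types; a rough sketch (volume group name and sizes are made up):

        # three whole disks in one volume group
        pvcreate /dev/sda /dev/sdb /dev/sdc
        vgcreate vg0 /dev/sda /dev/sdb /dev/sdc

        # different RAID levels carved from the same three spindles
        lvcreate --type raid1 -m 2 -L 50G  -n critical  vg0   # 3-way mirror
        lvcreate --type raid5 -i 2 -L 200G -n important vg0   # 2 data + 1 parity
        lvcreate --type striped -i 3 -L 100G -n scratch vg0   # plain stripe

    Growing is lvextend plus resize2fs, though how well RAID segments reshape onto newly added spindles varies by lvm2 version, so that requirement is worth testing before committing.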

  • KeePass justification

    - by Jeff Walker
    I work at a place that tries to take security seriously but, sadly, often fails. Currently, one of the major ways it fails is password management. I personally have about 20 accounts (my personal user ID on lots of machines). For shared "system" accounts, there are about 45 per environment - development, test, and production - and I have access to 2 of those, so my personal total is somewhere around 115 accounts. Passwords have to be at least 15 characters with some extensive but standard complexity constraints, and have to be changed every 60 days or so (system accounts every year). They also should not be the same for different accounts, but that isn't enforced. Think DoD-type standards.

    There is no way to remember and keep up with this; it just isn't humanly possible, as far as I'm concerned. This might be a good justification for a centralized account management system, a la LDAP or Active Directory, but that is a totally different battle. Currently the solution is an Excel spreadsheet: they use Excel to put a password on it, and then most people make a copy and remove the password. This makes my stomach turn.

    I use KeePass for this problem, and it manages all of my accounts very well. I like features such as auto-typing, grouping, plugins, and password generation. It uses AES-256 encryption via the .NET framework, and while not FIPS compliant, it has a very good reputation. The only problem is that they are also very careful about randomly downloaded software, so we have to justify every piece of software on our workstations. I have been told that they really don't want me to use this, because of the "sensitive nature" of storing passwords. Sigh.

    My justification has to be "VERY VERY strong". I have been tasked with writing a justification for KeePass, but as I am lazy, I would like any input I can get from the community. What do you recommend? Is there something out there that is better or more respected than KeePass? Are there any security experts saying interesting things on this topic? Anything will help at this point. Thanks.

  • Slow Windows Explorer on Windows 7

    - by MadBoy
    I have a laptop with an i7 (4 cores), 8GB RAM, and an SSD (OCZ Vertex 3 MaxIOPS) which, in testing I did just now, does 400MB/s+ read/write. However, the responsiveness of Windows Explorer is far from perfect. Opening Computer or Documents and going into folders is very slow (1-5 seconds). I don't have any viruses or spyware, and I have tried changing folder properties to optimize the view for General Items. I tried disabling the Search Indexer, but it made search in Outlook 2010 crawl and had no other effect. Even double-clicking on a file (like a Word document) takes some time to open. I don't have any drives mapped, and my computer is not joined to a domain. I have multiple VPN connections that I connect to, but they all have default gateways disabled. I tried using CCleaner and a Windows 7 tweaks app to disable some things. I am a power user using Visual Studio, Tortoise SVN, and other developer/administration apps. Any non-obvious ideas?

    Edit: I've been trying to pinpoint where the issue comes from, and it seems that straight after reboot Windows Explorer opens very fast; when I load 3-4 programs (Royal TS, Visual Studio, Outlook) it's noticeably slower, and the more programs I have open, the worse it gets. After I start closing programs it works better, and if I leave 2 open it's fast again. I did some research with DiskMon and other programs from Sysinternals but couldn't find anything suspicious. Below are stats during normal usage with a lot of programs open:

    - RAM usage with a lot of programs open and no swap file (I disabled it for testing): 6.95GB
    - CPU usage: 15%; none of the cores takes more than 50% (I have VS 2010 open x 4)

        HD Tune Pro: OCZ-VERTEX3 MI Benchmark
        Test capacity: full
        Read transfer rate
        Transfer Rate Minimum : 363.9 MB/s
        Transfer Rate Maximum : 505.5 MB/s
        Transfer Rate Average :
        Access Time           :
        Burst Rate            :
        CPU Usage             :

        HD Tune Pro: OCZ-VERTEX3 MI File Benchmark
        Drive C: Transfer rate test, File Size: 500 MB
        Sequential read                  484102 KB/s
        Sequential write                 444714 KB/s
        Random read                        7779 IOPS
        Random write                      16888 IOPS
        Random read (queue depth = 32)    73007 IOPS
        Random write (queue depth = 32)   69790 IOPS

        HD Tune Pro: OCZ-VERTEX3 MI Random Access
        Test capacity: full, read test
        Transfer size  operations/sec  avg. access  max. access  avg. speed
        512 bytes      3260 IOPS       0.306 ms     2.106 ms     1.592 MB/s
        4 KB           4161 IOPS       0.240 ms     2.006 ms     16.256 MB/s
        64 KB          2382 IOPS       0.419 ms     2.367 ms     148.934 MB/s
        1 MB           449 IOPS        2.225 ms     4.197 ms     449.407 MB/s
        Random         809 IOPS        1.235 ms     6.551 ms     410.527 MB/s

        HD Tune Pro: OCZ-VERTEX3 MI Extra Tests
        Test capacity: full
        Random seek               3975 IOPS  0.252 ms  1.941 MB/s
        Random seek 4 KB          4245 IOPS  0.236 ms  16.583 MB/s
        Butterfly seek            4086 IOPS  0.245 ms  1.995 MB/s
        Random seek / size 64 KB  3812 IOPS  0.262 ms  58.606 MB/s
        Random seek / size 8 MB   120 IOPS   8.348 ms  485.737 MB/s
        Sequential outer          4524 IOPS  0.221 ms  282.721 MB/s
        Sequential middle         4429 IOPS  0.226 ms  276.818 MB/s
        Sequential inner          5504 IOPS  0.182 ms  344.000 MB/s
        Burst rate                4472 IOPS  0.224 ms  279.475 MB/s

  • How is incoming SMTP mail being delivered despite blocked port

    - by Josh
    I set up an MX mail server, and everything works despite port 25 being blocked. I'm stumped as to why I am able to receive email with this setup, and what the consequences might be if I leave it this way. Here are the details:

    - Connections to SMTP over ports 25 and 587 both reliably connect over my local network.
    - Connections to SMTP over port 25 are blocked from external IPs (the ISP is blocking the port).
    - Connections to submission SMTP over port 587 from external IPs are reliable.
    - Emails sent from Gmail, Yahoo, and a few other addresses are all being delivered. I haven't found an email provider that fails to deliver mail to my MX.

    So, with port 25 blocked, I am assuming other MTA servers fall back to port 587; otherwise I can't imagine how the mail is received. I know port 25 shouldn't be blocked, but so far it works. Are there mail servers this will not work with? Where can I find more about how this is working?

    Edit: more technical detail, to validate that I'm not missing something silly. Obviously, in the transcript below I've replaced my actual domain with example.com.

        # DNS MX record points to the A record.
        $ dig example.com MX +short
        1 example.com
        $ dig example.com A +short
        <Public IP address>

        # From a public server (not my ISP hosting the mail server)
        # We see port 25 is blocked, but port 587 is open
        $ telnet example.com 25
        Trying <public ip>...
        telnet: Unable to connect to remote host: Connection refused

        # Let's try openssl
        $ openssl s_client -starttls smtp -crlf -connect example.com:25
        connect: Connection refused
        connect:errno=111

        # Again from a public server, we see port 587 is open
        $ telnet example.com 587
        Trying <public ip>...
        Connected to example.com.
        Escape character is '^]'.
        220 example.com ESMTP Postfix
        ehlo example.com
        250-example.com
        250-PIPELINING
        250-SIZE 10485760
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250-DSN
        250-BINARYMIME
        250 CHUNKING
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

    Here is a portion from the mail log when receiving a message from Gmail:

        postfix/postscreen[93152]: CONNECT from [209.85.128.49]:48953 to [192.168.0.10]:25
        postfix/postscreen[93152]: PASS NEW [209.85.128.49]:48953
        postfix/smtpd[93160]: connect from mail-qe0-f49.google.com[209.85.128.49]
        postfix/smtpd[93160]: 7A8C31C1AA99: client=mail-qe0-f49.google.com[209.85.128.49]

    The log shows a connection made to the local IP on port 25 (I'm not doing any port mapping, so it is port 25 on the public IP too). Seeing this leads me to hypothesize that the ISP's block on port 25 only applies when the connection comes from an IP address that is not known to be a mail server. Any other theories?

  • What are some of the best wireless routers for a price-conscious home power-user?

    - by Alain
    I'm extremely dissatisfied with the 'popular' choice of routers for homes and small offices. They are expensive (upwards of $60), lack a great deal of useful configuration options, and seem to need restarting quite often (Linksys comes to mind). I've been on the market for a good router lately, and slowly collecting a set of requirements I feel good routers should meet:

    - Maximum number of TCP/IP connections. This isn't something I see any routers advertise, but in terms of supporting torrent applications, I've been screwed by routers that support fewer than 20 here. From what I understand a fairly standard number is 200, but there are not-so-expensive routers that support thousands.
    - Router configuration menu. Most have standard menus that let you set up basic things like your wireless network encryption settings, uPnP, and maybe even DMZ (demilitarized zones). Absolute requirements for me, however, are routers with firmware good enough to support: explicit port forwarding; assigning static local IPs to specific MAC addresses, or at least port forwarding by MAC address; port, IP, and MAC filtering; dynamic DNS service for home users who want to set up a server but have a dynamic IP; and, ideally, traffic shaping - giving priority to packets from certain machines or over certain ports.
    - Strong wireless signal. If getting a reliable signal requires me to be so close to the router that I could connect an Ethernet cable instead, it's not good enough.
    - As many Ethernet ports as possible, because I want to be able to switch from console gaming to PC gaming without visiting my router.

    So far, the best thing I've stumbled upon (in the bargain bin at Staples) was a $20 retail plus router. It was meant to be the cheapest alternative until I could find something better to purchase online, but I was actually blown away by the firmware capabilities. It supports defining reserved bandwidth for certain network traffic, dynamic DNS, reserving local IPs for specific MAC addresses, etc. At 2am when my roommate is killing our Internet with their torrents, I can limit their bandwidth without outright blacklisting them. I have, however, met serious limitations when it comes to network traffic between local machines. It claims a 300Mbps connection, but I have trouble streaming videos from my PC to my console or other laptops wirelessly. It has a meltdown and needs to be reset once in a while (no more than a couple of times a month), and it's got a 200-connection limit. There are 4 Ethernet ports in the back, but I'm pretty sure the first doesn't work.

    So some great answers to this question would be:

    - Any metrics you use to compare routers, and requirements you have for new candidates.
    - The best routers you've found for supporting home servers, file management systems, high-volume torrent traffic, a good price/feature ratio, etc.
    - Good configuration advice (aside from 'use Ethernet whenever possible').

    Thanks for your feedback and experiences!

  • Getting Internet connection sharing working in a slightly more complicated configuration

    - by tirichitirca t
    I have the following configuration:

    - Computer A: Mac OS X 10.8.4, wireless & wired adapters
    - Computer B: Windows 7 (64-bit), wireless & wired adapters, with an Internet connection via the wired adapter (Ethernet)
    - A D-Link wired/wireless router

    Problem to solve: connect from computer A to the Internet through the wired connection of computer B.

    I tried the following. I set up a local network between A and B using the D-Link router, configured like this:

    - D-Link router: 192.168.0.1
    - A: wired connection to the D-Link router, static 192.168.0.101 (I could have used the wireless, but I preferred the wired connection)
    - B: wireless connection to the D-Link router, DHCP 192.168.0.102 (but I made sure it always gets the same address)
    - B: wired connection to the Internet using some address that begins with 10.x.y.z

    In this configuration A can see B. I enabled ICS on the wired adapter of B. I set the gateway of A to point to B, and the DNS servers to the DNS servers specified for the 10.x.y.z address. It doesn't work: A gets only as far as B, although it can ping the 10.x.y.z address of B.

    I then found this article: http://terrybritton.com/windows-internet-connection-sharing-ics-not-working-with-linux-bridging-is-the-solution-916/. Terry suggests defining a bridge on B between the two connections. I tried that, but computer B is basically screwed as soon as I create the bridge: it can't connect to the Internet anymore. It is as if the network bridge thinks the traffic to the Internet should go from the wired connection to the wireless, and not the other way around.

    The other thing that puzzles me is the router itself. In general the router needs an Internet address; in a normal configuration it is the router that gets the IP address, and the Internet traffic goes through the router. In my case I am not interested in that.

    So, any suggestions to get this working? I wouldn't shy away from using commercial software, but I would think Windows 7 should allow me to do it. Thanks
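
    One wrinkle worth knowing about ICS itself: on Windows 7 it re-addresses the shared (private-side) adapter to 192.168.137.1 and runs its own DHCP there, which collides with the D-Link handing out 192.168.0.x addresses. A sketch of pointing the Mac into ICS's subnet manually with networksetup (the service name "Ethernet" and the addresses are assumptions for illustration):

        # put A in ICS's default subnet and use B as gateway and DNS
        networksetup -setmanual "Ethernet" 192.168.137.10 255.255.255.0 192.168.137.1
        networksetup -setdnsservers "Ethernet" 192.168.137.1

    The D-Link's own DHCP server would also need to be disabled so the two don't fight over the same segment.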

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application's performance. New Relic reports that, across our system, "Memcached" takes an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some memcache gets take up to 30ms; I can't find a single memcache get that returns in less than 10ms.

    More details on the system architecture: we currently have four application servers, each of which has a memcached member, and all four memcached members participate in one memcache cluster. We're running on a cloud hosting provider, and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, responses come back in ~0.5ms.

    Isn't 10ms a slow response time for memcached? As far as I understand, if you think "memcache is too slow" then "you're doing it wrong". So am I doing it wrong?

    Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE         USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211     37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211     42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211     37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:         39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:           0.1GB/  0.4GB   33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    And here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcache):

        top - 21:48:56 up 1 day,  4:56,  1 user,  load average: 0.01, 0.06, 0.05
        Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
        Mem:    501392k total,   424940k used,    76452k free,    66416k buffers
        Swap:   499996k total,    13064k used,   486932k free,   181168k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         6519 nobody    20   0  384m  74m  880 S  1.0 15.3  18:22.97 memcached
            3 root      20   0     0    0    0 S  0.3  0.0   0:38.03 ksoftirqd/0
            1 root      20   0 24332 1552  776 S  0.0  0.3   0:00.56 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
            5 root      20   0     0    0    0 S  0.0  0.0   0:00.02 kworker/u:0
            6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S  0.0  0.0   0:00.62 watchdog/0
            8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
            9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
        ...output truncated...
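
    One way to separate daemon/network latency from instrumentation overhead is to time raw gets outside Django; a minimal sketch using the python-memcached client (the host name is a placeholder, and this assumes the same client library the app uses):

        import time
        import memcache  # python-memcached

        mc = memcache.Client(['cache1:11211'])
        mc.set('latency-probe', 'x')

        n = 1000
        start = time.time()
        for _ in range(n):
            mc.get('latency-probe')
        elapsed = time.time() - start
        print('avg get: %.2f ms' % (elapsed / n * 1000))

    If this reports something close to the ~0.5ms ping time, the extra milliseconds New Relic sees may be client-side serialization or instrumentation rather than memcached itself.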

  • Yahoo Webmail - Garbled Quote Text

    - by baultista
    I've encountered a very strange problem when trying to reply to e-mail via Yahoo webmail from a family member's computer. She received an e-mail from a client who is using Microsoft Outlook. When I open the message it looks perfectly fine in the browser and I can read it. However, when I try to reply to the e-mail, the quoted text looks like this:

        > #yiv9181642880 p.yiv9181642880msonormal1114, #yiv9181642880
        > li.yiv9181642880msonormal1114, #yiv9181642880
        > div.yiv9181642880msonormal1114
        > {margin-right:0in;margin-left:0in;font-size:12.0pt;}
        > #yiv9181642880 p.yiv9181642880msoacetate1114, #yiv9181642880
        > li.yiv9181642880msoacetate1114, #yiv9181642880
        > div.yiv9181642880msoacetate1114
        > {margin-right:0in;margin-left:0in;font-size:12.0pt;}
        > #yiv9181642880 p.yiv9181642880emailquote1114, #yiv9181642880
        > li.yiv9181642880emailquote1114, #yiv9181642880
        > div.yiv9181642880emailquote1114
        > {margin-right:0in;margin-left:0in;font-size:12.0pt;}
        > #yiv9181642880 p.yiv9181642880msochpdefault1114,
        > #yiv9181642880 li.yiv9181642880msochpdefault1114,
        > #yiv9181642880 div.yiv9181642880msochpdefault1114
        > {margin-right:0in;margin-left:0in;font-size:12.0pt;}
        > #yiv9181642880 p.yiv9181642880msonormal53, #yiv9181642880
        > li.yiv9181642880msonormal53, #yiv9181642880
        > div.yiv9181642880msonormal53
        > {margin-right:0in;margin-left:0in;font-size:12.0pt;}

    It's the strangest thing. It doesn't happen with all e-mails, just this particular one. At a glance it looks like raw CSS being displayed, but I really can't understand why. So far I have tried the following:

    - Trying a different browser, both IE11 and Google Chrome
    - Checking the browser encoding settings
    - Checking Yahoo webmail's encoding/font settings

    My only other guess is that the client used some weird font or formatting in the e-mail that is throwing the message body out of sync. Unfortunately for my family member, she is a contractor working with a medium-sized company that refuses to provide her with a domain e-mail address, so she is forced to conduct business this way. Simply asking the sender to use a more widely supported font wouldn't be an acceptable solution here. Any thoughts?

  • Transferring a Windows 8 license and proper un- and reinstallation

    - by Kiwi
    Long story short: I have two computers, a laptop and a desktop, both with Windows 7 on them. I buy the Windows 8 Pro upgrade. To see if it screws up anything, I install it on my laptop as a guinea pig. I intend to use Windows 8 on my main computer, the desktop, but I want to test it on the laptop first so I know I don't risk losing access to my desktop and the data on it. I never use my laptop, and only used it because it already had a Windows 7 installation on it.

    The problem: at some point I must have entered the license key on my laptop, because when I go to the activation screen on my desktop, I get an activation error. Uh-oh. I can't use the key on my desktop. Now how on earth do I transfer the key from my laptop to my desktop computer?

    Answers and suggestions so far: let's just say I tried everything possible to get answers on this matter. The best response I got from Microsoft is this:

        To install Windows 8 on your desktop, do the following:
        - Uninstall Windows 8 on your laptop
        - Afterwards, install Windows 8 on your desktop
        - If it won't activate, call product activation at (...)

    I am not a fan of that last point. The error message does allude to such a solution, however: "If you've reinstalled Windows or made changes to your hardware recently, you may be able to use your current key."

    The question: my main question is this - has anyone been in a similar situation, and if so, what did you do to resolve it? Failing that, what is the proper way to uninstall the Windows 8 installation on my laptop and reinstall it on my desktop?

    Ad 1: I have already tried using the "reset" feature on my laptop, but that only resulted in a new Windows 8 installation that was already activated. So which is the right way to uninstall the installation in a way that allows me to use the license key on the desktop computer?

    Ad 2: Which is the proper way to reinstall the Windows 8 installation on my desktop computer? Why do I even have to reinstall it in the first place? I won't get around to doing this until my USB key with 3.0 support arrives in the mail, which is going to be a while, so until then I'm looking for a reassuring answer on the best way to go about this.
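
    For the uninstall half specifically, the usual approach (whether it fully releases an upgrade key is an assumption that product activation support would have to confirm) is to remove the key from the laptop with slmgr before entering it on the desktop:

        rem on the laptop, from an elevated prompt: uninstall the product key and clear it from the registry
        slmgr /upk
        slmgr /cpky

        rem on the desktop, after installing: enter and activate the same key
        slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
        slmgr /ato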

  • OS X: How to start a VirtualBox VM on startup?

    - by snies
    The question: how do I start this wiki VM at startup of the OS X server? I am running OS X Server 10.6.8, VirtualBox 4.1.8 r75467, and a Debian Linux VM (called "wiki").

    What I tried so far: following this article, http://mikkel.hoegh.org/blog/2010/12/23/run-virtualbox-boot-mac-os-x/, I wrote this plist and placed it in /Library/LaunchDaemons/bar.foo.WikiVirtualBox.plist:

        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>bar.foo.WikiVirtualBox</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/VBoxHeadless</string>
                <string>-s</string>
                <string>wiki</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
            <key>UserName</key>
            <string>root</string>
            <key>WorkingDirectory</key>
            <string>/var/root</string>
            <key>StandardErrorPath</key>
            <string>/var/log/bar.foo.WikiVirtualBox.stderr.log</string>
            <key>StandardOutPath</key>
            <string>/var/log/bar.foo.WikiVirtualBox.stdout.log</string>
        </dict>
        </plist>

    and told launchd to start it:

        sudo launchctl load -w /Library/LaunchDaemons/bar.foo.WikiVirtualBox.plist

    The logfile: the VM doesn't start, and a look at tail -f /var/log/system.log shows:

        sudo[1909]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/bin/launchctl load -w /Library/LaunchDaemons/bar.foo.WikiVirtualBox.plist
        VBoxSVC[1914]: 3891612: (connectAndCheck) Untrusted apps are not allowed to connect to or launch Window Server before login.
        VBoxSVC[1914]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        com.apple.launchd[1] (bar.foo.WikiVirtualBox[1910]): Exited with exit code: 1

    When I log into the server via ssh (so no login window opened) I can run:

        /usr/bin/VBoxHeadless -s wiki

    and it works, so I don't understand the error above.
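
    One detail the log hints at: the job runs as root, while the VM "wiki" that works over ssh is presumably registered under the administrator account's VirtualBox configuration. A sketch of the relevant plist change, assuming administrator is indeed the VM's owner:

        <key>UserName</key>
        <string>administrator</string>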

  • Upgrading TFS 2005 to TFS 2010 fails at "Executing servicing step Upgrade Version Control Identities"

    - by nadeemmar
    Hi all, I have been trying to upgrade our TFS 2005 to TFS 2010, but with no luck so far. I went through the TFS installation guide and many upgrade guides, but with no luck in overcoming the issue I am facing, which seems to be unique and different from other described issues.

    In our company we have a domain forest with several domains - let's say domains A, B, and C. TFS is in domain A and has users from all three domains. All domains have trust relationships between them; however, domain C was deleted several months ago. In the upgrade process, whenever I reach the collection upgrade step, the following error is raised:

        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Data: ExtensionType = Microsoft.TeamFoundation.VersionControl.Server.PlugIns.WorkspaceSecurityNamespaceExtension
        [Info @09:57:50.997] [2010-12-29 09:55:47Z] Servicing step Create VersionControl Security Namespaces passed. (ServicingOperation: UpgradePreTfs2010Databases; Step group: Upgrade.TfsVersionControl)
        [Info @09:57:50.997] [2010-12-29 09:55:47Z] Executing servicing step Upgrade Version Control Identities. (ServicingOperation: UpgradePreTfs2010Databases; Step group: Upgrade.TfsVersionControl)
        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Performer: VersionControl
        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Type: UpgradeIdentity
        [Info @09:57:50.997] [2010-12-29 09:55:47Z][Informational] Step Data Text:
        [Error @09:57:50.997] [2010-12-29 09:55:51Z][Error] Sync error for identity: System.Security.Principal.WindowsIdentity, S-1-5-21-1004336348-527237240-682003330-2818 - The trust relationship between the primary domain and the trusted domain failed

    I looked up the SID, and it seems to belong to a user in the deleted domain C. With a bit of googling, I figured out that the TFSConfig Identities command can be used to remap users from one domain to another. I went ahead and created local users matching the users we had from domain C and ran the TFSConfig Identities /change command, and it executed successfully. However, I still get the same error. I am stuck and can't figure out how to move forward :( I need your expertise: has anyone faced this issue before? Do I need to change these identities on TFS 2005 before I commence the upgrade?

    I forgot to mention that I am following the upgrade-with-a-move approach: I created a virtual machine for testing the upgrade, installed SQL Server 2008, restored the TFS databases, installed TFS 2010, and ran the upgrade wizard.

    Regards, Nadeem
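
    For completeness, the remap command being described follows this general shape in the TFS 2010 documentation (the domain and machine names here are placeholders, not the exact command the poster ran):

        REM map every account from the dead domain onto same-named accounts local to the TFS machine
        TFSConfig Identities /change /fromdomain:DomainC /todomain:TFSMACHINE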

  • Odd squid transparent redirect behavior

    - by EMiller
    This is the first time I've set up Squid. It's running a redirect script that does some text search/replace on HTML pages, saves them to a location on the same machine on the nginx path, and then issues a redirect to that URL (it's an art project :D). The relevant lines in squid.conf are:

        http_port 3128 transparent
        redirect_program /etc/squid/jefferson_redirect.py

    The jefferson_redirect.py script is based on this script: http://gofedora.com/how-to-write-custom-redirector-rewritor-plugin-squid-python/

    The issue: I'm getting strange HTTP redirect behavior. For example, here is the normal request/response for http://redirector.mysite.com/?unicmd=g+yreka from a PHP script that issues a header("Location: ...") - a 302 redirect:

        GET /?unicmd=g+yreka HTTP/1.1
        Host: redirector.mysite.com
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive

        HTTP/1.1 302 Found
        Date: Tue, 13 Apr 2010 05:15:43 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Location: http://www.google.com/search?q=yreka
        Content-Type: text/html
        Vary: User-Agent,Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 2108
        Keep-Alive: timeout=3, max=100
        Connection: Keep-Alive

    Here's what it looks like when running through the Squid proxy (note that redirector.mysite.com is not the site running Squid or nginx):

        GET /?unicmd=g+yreka HTTP/1.1
        Host: redirector.mysite.com
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc12 Firefox/3.5.9
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Proxy-Connection: keep-alive
        If-Modified-Since: Tue, 13 Apr 2010 05:21:02 GMT

        HTTP/1.0 200 OK
        Server: nginx/0.7.62
        Date: Tue, 13 Apr 2010 05:21:10 GMT
        Content-Type: text/html
        Content-Length: 17865
        Last-Modified: Tue, 13 Apr 2010 05:21:10 GMT
        Accept-Ranges: bytes
        X-Cache: MISS from jefferson
        X-Cache-Lookup: HIT from jefferson:3128
        Via: 1.1 jefferson:3128 (squid/2.7.STABLE6)
        Connection: keep-alive
        Proxy-Connection: keep-alive

    It is basically working, but the URL http://redirector.mysite.com/?unicmd=g+yreka remains unchanged while displaying the Google page (mostly broken, as it's using URLs relative to redirector.mysite.com). I've experienced a similar thing with Google results pages: when clicking through to another page from Google, I get a Google URL with the other site's content. Sorry for the long post - many thanks if you've read this far! Any ideas?
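
    This is consistent with how Squid redirectors work: if the helper writes back a bare URL, Squid silently fetches that URL and the browser's address bar never changes; to make the client itself redirect, the helper has to prefix the reply with 302:. A minimal sketch of that reply loop (rewrite_url is a hypothetical stand-in for the script's actual search/replace logic):

        #!/usr/bin/env python
        import sys

        def rewrite_url(url):
            # hypothetical placeholder for the real rewrite logic
            return url

        while True:
            line = sys.stdin.readline()
            if not line:
                break
            url = line.split()[0]  # squid sends: URL ip/fqdn ident method
            # "302:" asks squid to send an HTTP redirect instead of rewriting silently
            sys.stdout.write('302:%s\n' % rewrite_url(url))
            sys.stdout.flush()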

  • Use launchctl to fire an AppleScript script periodically

    - by Daktari
    I have written an AppleScript that lets me back up a particular file. The script runs fine inside AppleScript Editor: it does what it's supposed to do perfectly. So far so good. Now I'd like to run this script at timed intervals, so I use launchctl and a .plist to make this happen. That's where the trouble starts:

    - The script is loaded at the set interval by launchctl.
    - AppleScript Editor (when open) brings its window (with that script) to the foreground, but no code is executed.
    - When AppleScript Editor is not running, nothing seems to happen at all.

    Any ideas as to why this is not working?

    After editing (as per Daniel Beck's suggestions), my plist now looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>KeepAlive</key>
            <false/>
            <key>Label</key>
            <string>com.opera.autosave</string>
            <key>ProgramArguments</key>
            <array>
                <string>osascript</string>
                <string>/Users/user_name/Library/Scripts/opera_autosave_bak.scpt</string>
            </array>
            <key>StartInterval</key>
            <integer>30</integer>
        </dict>
        </plist>

    and the AppleScript I'm trying to run:

        on appIsRunning(appName)
            tell application "System Events" to (name of processes) contains appName
        end appIsRunning

        --only run this script when Opera is running
        if appIsRunning("Opera") then
            set base_path to "user_name:Library:Preferences:Opera Preferences:sessions:"
            set autosave_file to "test.txt"
            set autosave_file_old to "test_old.txt"
            set autosave_file_older to "test_older.txt"
            set autosave_file_oldest to "test_oldest.txt"
            set autosave_path to base_path & autosave_file
            set autosave_path_old to base_path & autosave_file_old
            set autosave_path_older to base_path & autosave_file_older
            set autosave_path_oldest to base_path & autosave_file_oldest
            set copied_file to "test copy.txt"
            set copied_path to base_path & copied_file
            tell application "Finder"
                duplicate file autosave_path
                delete file autosave_path_oldest
                set name of file autosave_path_older to autosave_file_oldest
                set name of file autosave_path_old to autosave_file_older
                set name of file copied_path to autosave_file_old
            end tell
        end if

  • Get 5.1 surround sound from computer through a VCR config?

    - by Wedding Nails
    I'm posting to see if my idea of this setup is right and whether it can be done. I currently have the following "equipment":

    - A JVC VCR (quite old) with built-in surround sound - that is, it has several speaker outputs, which I believe are 5.1, connected to speakers in every corner of the room.
    - A computer with S/PDIF optical output.
    - A new flat-screen TV (with built-in HDMI).

    I want the computer to take advantage of the VCR's surround system (all the speakers in the room) in order to play mainly music and video, always through all the speakers (5.1) and with the maximum sound quality. Currently the computer plays sound only through the front speaker (I connect one output to the onboard PC audio input) and the quality is really bad. As a side note, the computer's video runs over S-Video (old school), and the picture quality, as you would imagine, is really bad on the new big LCD screen.

    My main goals are:

    - To upgrade the picture with a new video card that supports HDMI (my TV has HDMI).
    - To buy an S/PDIF optical cable, connecting one end to the VCR's S/PDIF input and the other end to the PC output.

    This is, theoretically, what I've researched so far, and I came up with several questions:

    - In this case, with the S/PDIF cable connected and all the configuration done in Windows to allow 5.1, will everything I play be "converted" to, or played through, all of my speakers? I already know that to play from all the speakers, the content/audio source has to be 5.1; my question is whether there is a way to play from all of the speakers no matter what type of content I'm playing (that's why I said "converted" there).
    - I already know that HDMI cables carry digital sound. Is there a way I can use only the HDMI cord to the TV and still get sound through the VCR? (I'm not too sure about this; I would have to disable the TV's speakers and use the VCR surround as the default, but I have no clue whether this can be done or not.)

    Update: the ultimate question is, do I really have to rely on "sound virtualization" technology to get sound from all the speakers no matter what content I play? (Do I require a newer sound card, like a Creative Sound Blaster with said technology?) Thanks!

  • Does a router have a receiving range?

    - by Aadit M Shah
    So my dad bought a TP-Link router (model no. TL-WA7510N) which apparently has a transmitting range of 1km, and he believes that it also has a receiving range of 1km. So he's arguing with me that the router (which is a transceiver) can communicate with any device within 1km, whether or not that device has a transmitting range of 1km. To put it graphically:

        +----+                      1km                      +----+
        |    |---------------------------------------------->|    |
        | TR |                                               | TR |
        |    |<----                                          |    |
        +----+  100m                                         +----+

    So here's the problem:

    - The two devices are 1km apart.
    - The first device has a transmitting range of 1km.
    - The second device only has a transmitting range of 100m.

    According to my dad, the two devices can talk to each other. He says that the first device has a transmitting and a receiving range of 1km, which means it can both send data to devices 1km away and receive data from devices 1km away. To me this makes no sense: if the second device can only send data to devices 100m away, then how can the first device catch the transmission?

    He further argues that for bidirectional communication both the sender and the receiver should have overlapping areas of transmission: according to him, if two devices have an overlapping area of transmission then they can communicate. Here, neither device has enough transmission power to reach the other; however, in his view, they have enough receiving power to capture the transmission. Obviously this makes absolutely no sense to me. How can a device sense a transmission which hasn't even reached it yet, then go out, capture it, and bring it back?

    To me, a transceiver only has a transmission range; it has zero "receiving power". Hence, from my point of view, for two devices to be able to communicate bidirectionally, both should have a transmission range far enough to reach the other; but no matter how much I try to explain this to my dad, he adamantly disagrees.

    So, to put an end to this debate once and for all: who is correct? Is there even such a thing as a receiving range? Can a device fetch a transmission that would otherwise never reach it? I would like a canonical answer on this.

    Read the article

  • samba 3.5 "force user" doesn't seem to be sticking

    - by myCubeIsMyCell
After installing a new OS with a newer version of samba, I'm having trouble accessing my shares. I can browse to the specific share, but only to the top level. As best I can tell from the logs, the "force user" in the samba config isn't sticking beyond the initial connection. Details below.

I installed a new version of CentOS on my storage server. My old CentOS (4?) install had samba version 3.0.33; the new CentOS is using 3.5.10. No domain/AD involved, just a home workgroup; no real security, just some shares hidden and some defined as read-only. Here's my config:

[global]
    workgroup = WORKGROUP
    server string = Samba Server Version %v
    netbios name = luna
    security = share
    # logs split per machine
    log file = /var/log/samba/log.%m
    log level = 2
    # max 50KB per log file, then rotate
    max log size = 50
    winbind use default domain = Yes

[strge]
    comment = please
    path = /storage
    browseable = yes
    read only = no
    force user = windowsguest
    force group = users
    guest ok = yes

So... the problem I'm running into is that the 'force user' only seems to hold for the initial connection, and I see all the top-level folders fine. When I drill into a folder I get access denied, which appears to be due to my Windows user info being sent (it tries to authenticate xuser, a non-existent samba user, which maps to nobody and fails). Here's the smb error message:

[2012/11/29 14:30:27.326195, 2] auth/auth.c:314(check_ntlm_password)
  check_ntlm_password: Authentication for user [xuser] -> [xuser] FAILED with error NT_STATUS_NO_SUCH_USER
[2012/11/29 14:30:27.326251, 2] auth/auth.c:314(check_ntlm_password)
  check_ntlm_password: Authentication for user [nobody] -> [nobody] FAILED with error NT_STATUS_NO_SUCH_USER

Most of the top-level directories are 755, some 777. Either way, I can not access them. If I do a chown -R windowsguest.users there is no change, but if I do a chmod -R to 777 or 755 they become browsable... though I still can't create files (even in the 777 ones). Not sure what role it plays, if any, but I had to recreate the user windowsguest under the new OS install; its uid and gid match the old user. The main issue, as far as I can tell, is that samba isn't maintaining the 'force user', but I could be wildly off base. Client OS is Win7 Pro x64. Thanks for any suggestions or advice!
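One direction worth testing (a sketch, not a verified fix): security = share is deprecated in samba 3.5, and unknown usernames like xuser are rejected with NT_STATUS_NO_SUCH_USER unless they are explicitly mapped to the guest account. The parameters below are real smb.conf options; that they cure this particular share is an assumption:

[global]
    security = user
    # unknown users (the Windows-side xuser) become the guest account
    # instead of failing authentication
    map to guest = Bad User
    guest account = windowsguest

[strge]
    path = /storage
    read only = no
    guest ok = yes
    # treat every connection to this share as guest, so force user/group
    # apply consistently past the initial connection
    guest only = yes
    force user = windowsguest
    force group = users

After editing, testparm will flag syntax problems, and smbclient //luna/strge -U% gives a quick guest-access check from the server itself.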

    Read the article

  • Microphones not working on Apple macbook Air 1,1 (Early 2008) under Linux

    - by jj_p
I'm running Linux on an MBA. I can't make the microphones (neither external nor internal) work. I test using alsamixer and

arecord -d 5 test-mic.wav
aplay test-mic.wav

It seems there is a problem with the kernel trying to decipher Apple's (intentionally) corrupted 'BIOS'; in particular, the mic pins are wrongly assigned. As far as we are concerned here, is there any difference between using EFI and BIOS-compatibility mode? (See https://wiki.archlinux.org/index.php/MacBook where they claim to have everything working out of the box on mba1,1.)

A nice proposal would be to compile the latest Linux kernel and run hda-jack-retask to find the right configuration (in the case of a Realtek codec, the missing things I'm supposed to check are either some vendor-specific COEF verbs, EAPD, or GPIO setup), and then come up with a kernel patch to address the issue. Since I'm not that familiar with this last part of the story, can anyone help me through this process?

Some useful data:

- The output from the alsa script run as root: http://www.alsa-project.org/db/?f=adae8ebee1007043fe83414ac4972319e02255fa
- The output of hda-jack-sense-test -a (with an external mic plugged in):

Pin 0x14 (Internal Speaker): present = No
Pin 0x15 (Green HP Out): present = Yes
Pin 0x16 (Not connected): present = No
Pin 0x17 (Not connected): present = No
Pin 0x18 (Not connected): present = No
Pin 0x19 (Not connected): present = No
Pin 0x1a (Not connected): present = No
Pin 0x1b (Not connected): present = No
Pin 0x1c (Not connected): present = No
Pin 0x1d (Not connected): present = No
Pin 0x1e (Not connected): present = No
Pin 0x1f (Not connected): present = No

- Most likely the chip is a Realtek ALC885 (compare also ALC889A): http://guide-images.ifixit.net/igi/bBTSqaeK5JpQ1AWe.large (although at the moment alsa reads it as ALC889A)
- Takashi Iwai's tutorial: https://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio.txt
- Some people researched the original files from a running OS X installation on this same model (I think the relevant files are AppleHDA.kext/Contents/MacOS/AppleHDA, AppleHDA.kext/Contents/PlugIns/AppleHDAHardwareConfigDriver.kext/Contents/Info.plist, AppleHDA.kext/Contents/Resources/layout12.xml.zlib, and AppleHDA.kext/Contents/Resources/Platforms.xml.zlib): http://www.insanelymac.com/forum/topic/220090-alc889a-pin-configuration/#entry1554954
- Datasheet: http://www.realtek.info/pdf/ALC885_1-1.pdf (from the same Realtek site one can also try to download a Linux driver, but that is just taken from the ALSA project, as stated in the readme file)
- Compare with this Arch guy: http://www.alsa-project.org/db/?f=3ca8243c0626844f0264a3faad0aa72018bc14f4
- Here, for the first time, support for audio (except mics) on the mba1,2 (which is morally the same as the 1,1) was patched into the kernel: http://www.alsa-project.org/pipermail/alsa-devel/2010-February/025511.html
- The same jack supposedly works both for headphones and an external mic; I think it's called TRRS, and it's the same one used e.g. on iPhones
- This guy might have done a similar job, though for a more recent model and for sound globally, not just mics: http://blogs.aerys.in/jeanmarc-leroux/2013/09/15/fixing-2013-macbook-air-ubuntu-sound-issue/

(This is a mirror of http://unix.stackexchange.com/questions/73044/microphones-not-working-on-apple-macbook-air-1-1-early-2008-under-linux )
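Before patching the kernel proper, the HD-Audio document cited above describes a firmware "patch" file that snd-hda-intel applies at probe time, which is a convenient way to test pin assignments found with hda-jack-retask. A sketch only: the subsystem ID and pincfg values below are placeholders that would have to be taken from /proc/asound/card0/codec#0 and from experimentation, not known-good values for the mba1,1. The patch file, say /lib/firmware/mba11-mic-fixup.fw, would look like:

[codec]
0x10ec0885 0x106b00a0 0

[pincfg]
0x18 0x01a19030
0x19 0x90a79130

where 0x18 is retasked as the external mic jack and 0x19 as the internal mic (both guesses). It is then loaded through a modprobe option, e.g. in /etc/modprobe.d/mba11-hda.conf:

options snd-hda-intel patch=mba11-mic-fixup.fw

If a working pin configuration emerges this way, the same defaults can be turned into a proper quirk in sound/pci/hda/patch_realtek.c and submitted upstream.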

    Read the article

< Previous Page | 570 571 572 573 574 575 576 577 578 579 580 581  | Next Page >