Search Results

Search found 6992 results on 280 pages for 'exist'.

Page 35/280

  • Why doesn't my htaccess redirect work?

    - by cosmicbdog
    I have set up a simple htaccess redirect. This is the whole .htaccess file:

      Options +FollowSymLinks
      RewriteEngine On
      Redirect 301 /something http://something.com/something.php

    If I then load the site which contains this .htaccess, i.e. myredirectsite.com/something, I end up with the following 404:

      The requested URL /something was not found on this server.
      Apache/2.2.3 (Red Hat) Server at myredirectsite.com Port 80

    And in the logs:

      [Tue Jul 10 14:25:46 2012] [error] [client xx.xx.xxx.xx] File does not exist: /home/sites/scp/something

    The path /something is not a file and does not exist on disk. I had assumed I could use Redirect the same way as a RewriteRule, but it looks as if the redirect expects a file that actually exists? I created the file 'something' and Apache just attempts to load the blank file; no redirect. What am I missing in getting this working?
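
    A minimal sketch of the same redirect expressed purely with mod_rewrite, which avoids mixing mod_alias (Redirect) and mod_rewrite in one file. This assumes the .htaccess is actually being read at all, which requires AllowOverride FileInfo (or All) for the directory in the server config:

      Options +FollowSymLinks
      RewriteEngine On
      # mod_rewrite matches the URL path, so the target need not exist on disk
      RewriteRule ^something$ http://something.com/something.php [R=301,L]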

    Read the article

  • Convert SQL Query results to Active Directory Groups

    - by antgiant
    Are there any quality products (ideally open source) that let me run an arbitrary SQL query returning two columns (username, group name) and then add the user with that username to the AD group of that name? If the username doesn't exist, it is ignored. If the group name doesn't exist, ideally it gets created. Updated for clarity: I have an MSSQL-based system that is the authoritative source for some of the Active Directory security groups and their members. I want to be able to have those Active Directory security groups populated by a one-way sync originating from MSSQL. Sadly the MSSQL-based system does not have a good API, so I will have to do this with direct SQL calls. Is there anything that does this well?
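
    In the absence of an off-the-shelf tool, a rough sketch of a home-grown sync is below, assuming Python with the pyodbc and ldap3 libraries; the query, DSN, server, account, and target OU are all placeholders to adapt:

      import pyodbc
      from ldap3 import Server, Connection, NTLM

      # placeholders: adjust DSN, credentials, base DN, OU and SQL to your environment
      SQL = "SELECT username, group_name FROM dbo.GroupMembership"
      BASE = "DC=example,DC=com"

      db = pyodbc.connect("DSN=authoritative_db")
      ad = Connection(Server("dc01.example.com"), user="EXAMPLE\\svc_sync",
                      password="secret", authentication=NTLM, auto_bind=True)

      for username, group in db.cursor().execute(SQL).fetchall():
          # skip silently if the user account does not exist
          ad.search(BASE, f"(sAMAccountName={username})", attributes=["distinguishedName"])
          if not ad.entries:
              continue
          user_dn = ad.entries[0].distinguishedName.value
          # find the group, creating it in the (assumed) target OU if missing
          ad.search(BASE, f"(&(objectClass=group)(cn={group}))", attributes=["distinguishedName"])
          if ad.entries:
              group_dn = ad.entries[0].distinguishedName.value
          else:
              group_dn = f"CN={group},OU=SyncedGroups,{BASE}"
              ad.add(group_dn, "group")
          # ldap3's Microsoft extension handles the membership add
          ad.extend.microsoft.add_members_to_groups(user_dn, group_dn)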

    Read the article

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log:

      [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010

    It is strange because:

      - There is no reference anywhere in the code or URLs to a folder or file named "cache".
      - The folder/file "cache" does not exist.
      - The client is randomly trying to access a "cache" folder all over the website.

    It is always trying to access the folder/file "cache" following this pattern:

      /level1/.../levelwhatever/filename   (referer)
      /level1/.../levelwhatever/cache      (requested)

    We run a LAMP stack (Debian stable: PHP 5.3.3-7+squeeze9, with APC 3.1.3p1). We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP address in the log line for privacy.
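
    One way to narrow it down is to see which user agents are requesting those "cache" paths; a sketch, assuming the default combined log format and log location:

      awk -F'"' '$2 ~ /cache/ {print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn

    If a single crawler, toolbar, or proxy dominates the list, the mystery requests are coming from it rather than from anything your pages emit.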

    Read the article

  • Enabling OpenSSL with PHP/nginx

    - by reefine
    I'm getting the following error when trying to connect to SMTP over SSL through PHP, using nginx + PHP 5:

      Could not connect to smtp host 'ssl://smtp.gmail.com' (5)
      (Unable to find the socket transport "ssl" - did you forget to enable it when you configured PHP?)

    In phpinfo I see:

      OpenSSL support    disabled (install ext/openssl)

    This leads me to believe I've installed OpenSSL incorrectly. I've read in a bunch of places that I should uncomment the following line:

      extension = php_openssl.dll

    This line does not exist, so I added it to the end of my php.ini, to no avail. The php_openssl.dll file does not exist anywhere on my server.
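
    For what it's worth, php_openssl.dll is the Windows build of the extension; on Linux, OpenSSL support is normally compiled into PHP rather than loaded from a DLL. A sketch of checking for it and, for a source build, enabling it:

      # check whether the extension is present at all
      php -m | grep -i openssl

      # for a PHP built from source, OpenSSL is a configure-time flag
      ./configure --with-openssl ...your-other-flags...
      make && sudo make install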

    Read the article

  • VMware Workstation 7&8&9 does not generate /etc/vmware/network upon installation

    - by dash17291
    When I install VMware Workstation on Arch Linux, virtual ethernet does not work.

      $ sudo tail /var/log/vnetlib
      Aug 28 22:20:33 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
      Aug 28 22:20:33 VNLNetCfgLoad - Import file does not exist
      Aug 28 22:20:33 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
      Aug 28 22:20:33 VNLNetCfgUnload - Requested cache is not loaded
      Database file is not present. Failed to initialize
      Aug 28 22:20:41 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
      Aug 28 22:20:41 VNLNetCfgLoad - Import file does not exist
      Aug 28 22:20:41 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
      Aug 28 22:20:41 VNLNetCfgUnload - Requested cache is not loaded
      Required modules compiled.

    Previously I copied that file (or directory, I don't remember) from a working installation, but now I need a real solution. Strangely, the same thing happens with Ubuntu on the same computer, so it may even be a hardware issue.
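
    If your build ships VMware's Virtual Network Editor (often installed as /usr/bin/vmware-netcfg), launching it as root and saving the default configuration is one guess worth trying, since the editor is what writes /etc/vmware/networking:

      # an assumption, not a verified fix: saving from the editor should
      # regenerate the missing configuration file
      sudo vmware-netcfg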

    Read the article

  • stsadm farm backup exits with ffffffff

    - by overbyte
    I have a SP2007 farm that uses stsadm, via Scheduled Tasks, to run farm backups. It always worked fine; however, one day it ran for just a couple of seconds and exited with code ffffffff. I looked at Event Viewer and at the SharePoint logs themselves, and nothing unusual happened at the time this job ran. No files were created, so an spbackup.log doesn't exist. I searched the net for batch-file and STSADM return codes, but this error code doesn't seem to be documented anywhere. Any other recommended places to look for issues like this?
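
    One low-tech next step is to have the scheduled task capture stsadm's own console output and exit code, since the failure happens before spbackup.log is ever created. A sketch, with placeholder paths and assuming stsadm is on the PATH:

      stsadm -o backup -directory \\server\backups -backupmethod full > C:\logs\farmbackup.txt 2>&1
      echo exit code: %ERRORLEVEL% >> C:\logs\farmbackup.txt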

    Read the article

  • Getting a Cross-Section from Two CSV Files

    - by Jonathan Sampson
    I have two CSV files that I am working with. One is massive, with about 200,000 rows. The other is much smaller, with about 12,000 rows. Both follow the same format of names and email addresses (everything is legit here, no worries). Basically I'm trying to get only a subset of the second list by removing all values that presently exist in the larger file. So, List A has ~200k rows and List B has ~12k. These lists overlap a bit, and I'd like to remove all entries from List B if they also exist in List A, leaving me with only the new and unique values in List B. I've got a few tools at my disposal: OpenOffice is loaded on this machine, along with MySQL (queries are alright). What's the easiest way to create a third CSV containing only the List B entries that do not appear in List A?
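
    A minimal sketch in Python, assuming both files have a header row with an "email" column (the column name is an assumption; matching is case-insensitive on the address):

      import csv

      # collect every email in the big list
      with open("list_a.csv", newline="") as f:
          emails_a = {row["email"].strip().lower() for row in csv.DictReader(f)}

      # write only the List B rows whose email is not in List A
      with open("list_b.csv", newline="") as src, open("b_only.csv", "w", newline="") as dst:
          reader = csv.DictReader(src)
          writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
          writer.writeheader()
          for row in reader:
              if row["email"].strip().lower() not in emails_a:
                  writer.writerow(row)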

    Read the article

  • Moving files with batch files from one PC to a server, to another PC - worried about disk corruption

    - by AnchientAnt
    I use a scheduled task that calls a batch file, which calls more batch files, to move about three files from a PC to a server, and then on to multiple other PCs. It all happens very quickly, as they are small files. Are there any pitfalls in how fast these transfers happen? I'm just mildly concerned about somehow causing disk corruption. I use logic like:

      1. Call MapToPc: if the files exist, move them to a folder on the server. Disconnect.
      2. Call SendtoPCs: if the files exist (the files just moved to the server), call MapToPCs, move all files, disconnect.

    All of this happens in about 2 seconds or less. Edit: this is on Windows 7, Server 2003, and XP respectively.
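
    Speed itself won't corrupt a healthy disk, but a fast pipeline can let a reader pick up a half-copied file. A common guard is to copy under a temporary name and rename on completion, since the rename is effectively instantaneous. A sketch with placeholder share and file names:

      net use Z: \\server\drop
      if exist C:\outbox\data.txt copy C:\outbox\data.txt Z:\staging\data.tmp
      if exist Z:\staging\data.tmp ren Z:\staging\data.tmp data.txt
      net use Z: /delete

    Downstream steps then only ever look for data.txt, which by construction is always complete.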

    Read the article

  • Is there a "pattern" or a group that defines *rc files in *nix environments?

    - by Somebody still uses you MS-DOS
    I'm starting to use the command line a little more, and I see there are a lot of ways to configure the config files in my $HOME. This is good, since you can customize things the way you really like. Unfortunately, for beginners, having too many options is a little confusing. For example, I created .bash_alias for some aliases I'm using. I didn't even know this option existed; I'm used to simply editing .bashrc. Does a pattern, a "good practice", exist for structuring rc files with flexibility and modularity in mind? Is there a standardization group for this, or does everybody just create their own configuration setup?

    Read the article

  • What processes would make the selling of a hard drive that previously held sensitive data justifiable? [closed]

    - by user12583188
    Possible Duplicate: Securely erasing all data from a hard drive In my personal collection are an increasing number of relatively new drives, put on the shelf only because of upgrades. In the past I have never sold hard drives with used machines, for fear of having the encrypted password databases stored on them compromised, but as their numbers increase I find myself more tempted to do so (due to the $$$ I know they're worth on the used market). What tools exist to make the recovery of data from these drives difficult enough that selling them could be justified? Another way of saying this: what tools or methods exist for making any attempt to recover previously stored data impractical? I assume that it is always possible to recover data from a drive that is in working order. I am also aware of a program called DBAN, and of a feature in Mac OS X, that deal with permanently deleting data from a disk.
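
    The standard approaches are a full overwrite or the drive's own firmware-level erase; a sketch (destructive, and /dev/sdX is a placeholder for the target drive):

      # a single full overwrite pass, widely considered sufficient on modern drives
      sudo shred -v -n 1 /dev/sdX
      # or let the drive's firmware do it (ATA Secure Erase via hdparm)
      sudo hdparm --user-master u --security-set-pass tmppass /dev/sdX
      sudo hdparm --user-master u --security-erase tmppass /dev/sdX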

    Read the article

  • De-duplicate Firefox bookmarks

    - by Zoredache
    What methods exist to de-duplicate Firefox bookmarks? As I search Google I find that there was previously a plugin called CheckPlaces, but it no longer seems to exist. Another popular suggestion seems to be AM-DeadLink, which I tried, but it completely trashed my bookmarks. (Fortunately I had a backup first, and yes, I had closed Firefox first as instructed.) I was trying to move all my youtube.com bookmarks into a folder, so I did a search and then dragged the results into the folder. Apparently this creates a copy instead of moving them as I expected, so now I have three of everything, since I had tried a couple of times.
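
    If no add-on works out, a rough sketch of doing it by hand: export the bookmarks to HTML, filter out repeated URLs, and re-import the result. This assumes the export's usual one-bookmark-per-line layout:

      import re, sys

      # read a Firefox "Export Bookmarks to HTML" file and drop every
      # repeated URL after its first occurrence; writes a filtered copy
      seen = set()
      with open(sys.argv[1]) as src, open("deduped.html", "w") as dst:
          for line in src:
              m = re.search(r'HREF="([^"]+)"', line)
              if m:
                  if m.group(1) in seen:
                      continue        # skip duplicate bookmark lines entirely
                  seen.add(m.group(1))
              dst.write(line)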

    Read the article

  • Subdomain returns error when restarting Apache

    - by xXx
    I am trying to set up a subdomain on my dedicated server. I added a new DNS rule to point the subdomain to the IP of my server. After reading this (Subdomain on apache) I tried to add new rules to Apache:

      NameVirtualHost *:80
      <VirtualHost *:80>
        ServerName tb.mysite.org
        DocumentRoot /home/mysite/wwww/tb/
        <Directory "/home/mysite/wwww/tb/">
          AllowOverride All
          Allow from all
        </Directory>
      </VirtualHost>

    Then I restart Apache, but it returns:

      sudo /etc/init.d/apache2 restart
      * Restarting web server apache2
      Warning: DocumentRoot [/home/mysite/wwww/tb/] does not exist
      [Wed Jun 27 10:32:58 2012] [warn] NameVirtualHost *:80 has no VirtualHosts
      ... waiting
      Warning: DocumentRoot [/home/mysite/wwww/tb/] does not exist
      [Wed Jun 27 10:32:59 2012] [warn] NameVirtualHost *:80 has no VirtualHosts

    The tb/ folder exists; I don't know why Apache can't find it. And it says that NameVirtualHost *:80 has no VirtualHosts...
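
    One quick sanity check, a guess rather than a confirmed diagnosis: the DocumentRoot in the config is spelled with four w's (wwww). Comparing both spellings against what is actually on disk rules out the most common cause of this warning:

      ls -ld /home/mysite/wwww/tb/ /home/mysite/www/tb/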

    Read the article

  • Is there a "pattern" or a group that defines *rcs files in *nix environments?

    - by Somebody still uses you MS-DOS
    I'm starting to use command line a little more, and I see there are a lot of ways to configure some config files in my $HOME. This is good, since you can customize it the way you really like. Unfortunately, for begginners, having too many options is a little confusing. For example, I created .bash_alias for some alias I'm using. I didn't even know this option existed, I'm used to simply edit .bashrc. Do exist a pattern, a "good practice", envisioning flexibility and modularity in terms of rc files structure? Do exist a standardization group for this, or every body just creates it's own configuration setup?

    Read the article

  • Putting two physical hard drives in a single 2.5" bay?

    - by dgw
    My crazy brain came up with this ridiculous idea. "Why can't I have a single 2.5" drive device that actually contains two independent hard drives? I want RAID 1 data mirroring and the security of having redundant drives. This is a thing that must exist!" To be clear, I am not asking if I can somehow shoehorn two 2.5" drives into a single bay, replace an optical drive with an additional HDD, or anything like that. What I have in mind is a single 2.5"-form-factor device that houses two independent hard drives, with separate housings (likely) and distinct controllers. Probably all they'd need to share is the power & SATA connections. Probably no such thing exists, but because I know there exist hard drives the physical size of a stack of postage stamps (roughly), I have to ask.

    Read the article

  • Linux command line based spam checker?

    - by anonymous-one
    Does a command-line-based spam checker exist? We created a mailbox at a third party and unfortunately chose to disable spam checking in the initial setup. There is no way to re-enable spam checking; the mailbox must be deleted (and thus all contents lost) and re-created. Does anything exist where we can pump in either:

      A) subject + from + to + body + all other fields, or
      B) a raw message dump (headers + body),

    and the command line will let us know whether the email is possibly spam? Thanks.
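
    For what it's worth, SpamAssassin's command-line tools do exactly this with a raw message dump (option B); a sketch:

      # add X-Spam-* headers and a test report to the message
      spamassassin -t < message.eml
      # or, against a running spamd: print score/threshold, exit nonzero if spam
      spamc -c < message.eml && echo "looks like ham" || echo "looks like spam"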

    Read the article

  • Windows Command queue?

    - by Stefano
    I'm wondering whether some kind of software exists that can put a bunch of Windows commands in a queue. For example, I could say to first copy some files somewhere, then rename them, then delete the old files, then edit one of them, etc., without waiting for the effective execution of any of those steps. This could be useful when copying big files that take a long time, when I don't want to sit in front of the computer keeping my eyes on the progress bar. Does anything like this exist?
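
    A plain batch file already behaves like this queue: each command starts only after the previous one finishes, and nothing needs you at the keyboard. A sketch with placeholder paths:

      @echo off
      copy C:\big\video.iso D:\staging\
      ren D:\staging\video.iso archive.iso
      del C:\big\video.iso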

    Read the article

  • How to create a batch file that deletes all the folders named `bin` or `obj` recursively?

    - by Nam Gi VU
    I need to delete all bin and obj folders under a folder on my PC, so I'm thinking of a batch file to do it, but I'm not familiar with batch files in Windows. Please help. [Edit] After a discussion with user DMA57361, I got to the current solution (still having a problem though, see our comments): create a .bat file and paste in the command below:

      for /d /r . %%d in (bin,obj) do @if exist "%%d" rd /s /q "%%d"

    or, without /q so each removal asks for confirmation:

      for /d /r . %%d in (bin,obj) do @if exist "%%d" rd /s "%%d"

    @DMA57361: When I run your script, I get the below error. Any idea?
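
    For reference, a minimal sketch of the whole .bat file, with a placeholder root folder (note that %%d only works inside a batch file; typed directly at a prompt it must be %d):

      @echo off
      rem walk the tree under the given root and remove every bin/obj folder
      for /d /r "C:\path\to\solution" %%d in (bin,obj) do (
          if exist "%%d" rd /s /q "%%d"
      )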

    Read the article

  • Apache mod_rewrite not working properly on Mac OS X 10.6 (Snow Leopard)

    - by DashRantic
    Hello all, I'm trying to create a PHP website with clean URLs using Apache's mod_rewrite and a .htaccess file. mod_rewrite seems to be working; however, it claims it cannot find files on my server that do exist. Just as a basic test, this is what my .htaccess file looks like at the moment; going to [mysite]/page should rewrite to the index.php file:

      Options +FollowSymLinks
      RewriteEngine on
      RewriteRule ^page$ index.php

    As far as I know, I have set up the .conf file appropriately as well:

      <Directory "/Users/myuser/Sites/">
        Options Indexes MultiViews
        AllowOverride All
        Order allow,deny
        Allow from all
      </Directory>

    However, when I try accessing the URL set up via mod_rewrite (localhost/~myuser/mysite/page), I get this:

      Not Found
      The requested URL /Users/myuser/Sites/mysite/index.php was not found on this server.

    But that file does exist, and that is the proper location! The site works fine otherwise: if I go to localhost/~myuser/mysite/index.php, everything works, minus any sort of clean URLs of course. Has anyone seen this before, or have any ideas as to what I'm doing wrong?
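
    A guess worth trying: in a per-directory .htaccess under a user directory, the substituted path can get re-resolved against the filesystem instead of the URL space, which matches the error shown. An explicit RewriteBase usually fixes that:

      Options +FollowSymLinks
      RewriteEngine on
      # tell mod_rewrite which URL prefix this directory corresponds to
      RewriteBase /~myuser/mysite/
      RewriteRule ^page$ index.php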

    Read the article

  • Upgrade 10g Osso to 11g OAM (Part 2)

    - by Pankaj Chandiramani
    This is part 2 of http://blogs.oracle.com/pankaj/2010/11/upgrade_10g_osso_to_11g_oam.html

    In the last post we saw an overview of upgrading OSSO to OAM 11g. Now some more details on the same. As we are using the co-existence feature, we have to install the OAM server and upgrade the existing OSSO 10g server to the OAM servers.

    OAM upgrade steps overview:

      Pre-req: you already have OAM 11g installed.
      Upgrade step 1: configure the user store and make it primary.
      Upgrade step 2: create the policy domain; this is done by the Upgrade Assistant (UA) automatically.
      Upgrade step 3: migrate partners; this is done by running the Upgrade Assistant.
      Finally, verify the upgrade succeeded.

    Details on the UA step: to upgrade the existing OSSO 10g servers to the OAM server, run the UA script in OAM, which copies over all the partner app details from OSSO to OAM 11g. The script is run_ua.sh; it will ask you for the Policies.properties from the SSO $OH/sso/config folder of OSSO 10g, and for other variables such as the DB password.

    Some pointers:

      - Upgrading OSSO to OAM 11g by default enables the co-existence mode on the OAM server.
      - Front-end the OAM server with the same load balancer that fronts the OSSO 10g servers. OAM and OSSO 10g servers then work in co-exist mode.
      - OAM 11g is made to understand the 10g OSSO token format and session handling capabilities, so as to co-exist with 10g OSSO servers.

    How to test? Try to access the partner applications and verify that single sign-on works. Also verify that a user does not have to log in again if already authenticated by either the OAM or the OSSO 10g server. Screenshots and troubleshooting tips to follow.

    Read the article

  • Connecting to MSP430 via /dev/ttyACM0

    - by speciousfool
    I'd like some suggestions on how to fix garbled serial output from a device connected on /dev/ttyACM0. Lately I've been working on a development project using TI's MSP430 microcontroller (specifically the eZ430-RF2560). Over on this thread you can see we've been testing some code and have found that the output of the microcontroller over serial is garbled. BTstack provides a simple counter test program. When we run the program and look at the serial port output using PuTTY on Windows 7, we see:

      rfcomm_send_internal cid 117 doesn't exist!
      BTstack counter 26230
      rfcomm_send_internal cid 117 doesn't exist!
      BTstack counter 26231

    However, if we connect from various Ubuntu clients we get something like:

      Stt.R. BTacn 0 BTacn 002BTacn 0 BTcct 04BTtacoe 5BTacun

    My current belief is that this happens because the device is being detected by cdc_acm as a generic USB ACM device. Another thread about a similar microcontroller suggests that the device should use a specific USB serial driver. We've verified that the module is compiled on our system and did a "modprobe ti_usb_3410_5052", but this had no effect on cdc_acm. Here is the relevant section of the kernel's debug log:

      [ 2735.092987] usb 2-1.2: new full speed USB device number 5 using ehci_hcd
      [ 2735.213655] cdc_acm 2-1.2:1.0: This device cannot do calls on its own. It is not a modem.
      [ 2735.213669] cdc_acm 2-1.2:1.0: No union descriptor, testing for castrated device
      [ 2735.213720] cdc_acm 2-1.2:1.0: ttyACM0: USB ACM device
      [ 2745.241996] generic-usb 0003:0451:F432.0003: usb_submit_urb(ctrl) failed
      [ 2745.242023] generic-usb 0003:0451:F432.0003: timeout initializing reports
      [ 2745.242401] generic-usb 0003:0451:F432.0003: hiddev0,hidraw0: USB HID v1.01 Device [Texas Instruments Texas Instruments MSP-FET430UIF] on usb-0000:00:1d.0-1.2/input1

    So, in summary, we'd like to figure out how to properly connect to this device. It would also be useful to know the appropriate place to file a bug report.
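
    Output that is garbled on one OS but clean on another is often a line-settings mismatch rather than a driver problem, so one cheap thing to rule out first (115200 8N1 is an assumption; match it to whatever the firmware uses):

      stty -F /dev/ttyACM0 115200 cs8 -cstopb -parenb raw
      cat /dev/ttyACM0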

    Read the article

  • Ubuntu, User Accounts messed up

    - by Vor
    I need to fix my Ubuntu accounts somehow but don't really see how it can be done. The problem: the files /etc/passwd, /etc/hostname and /etc/hosts were changed.

    /etc/passwd after:

      John:x:1000:1000:John,,,:/home/serg:/bin/bash

    before:

      serg:x:1000:1000:John,,,:/home/serg:/bin/bash

    /etc/hosts after:

      127.0.0.1 localhost
      127.0.1.1 John-The-Rippe

    before:

      127.0.0.1 localhost
      127.0.1.1 serg-Protege

    /etc/hostname after:

      John-The-Ripper

    before:

      serg-PORTEGE-Z835

    I tried to simply change these files back, but cannot because of "permission denied". When I try to become root I get this message:

      John@John-The-Ripper:~$ sudo -s
      [sudo] password for John:
      John is not in the sudoers file. This incident will be reported.

    The sudoers file appears empty:

      John@John-The-Ripper:~$ vi /etc/sudoers

    When I type users:

      John@John-The-Ripper:~$ users
      John John

    When I type id:

      John@John-The-Ripper:~$ id
      uid=1000(John) gid=1000(serg) groups=1000(serg)

    This doesn't work either:

      John@John-The-Ripper:~$ usermod -l John serg
      usermod: user 'serg' does not exist
      John@John-The-Ripper:~$ adduser serg
      adduser: Only root may add a user or group to the system.

    Later, I tried to go to the GRUB menu and log in as root from there. I did, but when I tried to create the user serg it gave me an error that the group already exists, and when I tried to change /etc/passwd it said "permission denied". This doesn't do the trick either:

      John@John-The-Ripper:~$ visudo
      visudo: /etc/sudoers: Permission denied

    The last thing I tried was to create a bootable USB and reinstall Ubuntu, but I cannot open USB Creator because it asks for a root password, which doesn't work. HELP ME PLEASE =)))
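
    A sketch of the usual repair path from the root shell in GRUB recovery mode. The remount matters: recovery shells often mount / read-only, which would explain even root getting "permission denied" (the admin group name is an assumption; older Ubuntu releases use admin instead of sudo):

      mount -o remount,rw /       # make the filesystem writable
      usermod -l serg John        # rename the login back to serg
      usermod -aG sudo serg       # put the account back in the admin group
      visudo                      # confirm a "%sudo ALL=(ALL:ALL) ALL" line exists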

    Read the article

  • htaccess/cPanel 301 redirects not working for add-on domain

    - by Clemens
    I've already looked at many samples and tutorials on how to set up 301 redirects on Apache, and I can't figure out why some of these work and others don't:

      Options +Followsymlinks
      RewriteEngine on

      #doesn't work:
      RewriteCond %{HTTP_HOST} ^old.com$ [OR]
      RewriteCond %{HTTP_HOST} ^www.old.com$
      RewriteRule ^page-still-exists.htm$ "http://www.new.com/new-target-page.htm" [R=301,L]

      #works:
      RewriteCond %{HTTP_HOST} ^old.com$ [OR]
      RewriteCond %{HTTP_HOST} ^www.old.com$
      RewriteRule ^page-does-no-longer-exist.htm$ "http://www.new.com/" [R=301,L]

      #works:
      RewriteCond %{HTTP_HOST} ^old.com$ [OR]
      RewriteCond %{HTTP_HOST} ^www.old.com$
      RewriteRule ^folder/otherpage.htm$ "http://www.new.com/" [R=301,L]

      #works:
      RewriteCond %{HTTP_HOST} ^old.com$ [OR]
      RewriteCond %{HTTP_HOST} ^www.old.com$
      RewriteRule ^/?$ "http://www.new.com/" [R=301,L]

      #doesn't work:
      RewriteCond %{HTTP_HOST} ^old.com$ [OR]
      RewriteCond %{HTTP_HOST} ^www.old.com$
      RewriteRule ^somepage.htm$ "http://www.old.com/some-page.htm" [R=301,L]

    The only difference I can see is that in the working cases the old page no longer exists on the old domain. Whenever I want to redirect a still-existing page from the old domain to the new domain, the page on the old domain is still served. Any input is much appreciated, because this is slowly driving me crazy :)

    EDIT: I added the complete htaccess file.

    EDIT 2: I removed almost all redirects and currently my htaccess looks like this:

      Options +Followsymlinks
      RewriteEngine on
      RewriteCond %{HTTP_HOST} ^old\.com$ [OR]
      RewriteCond %{HTTP_HOST} ^www\.old\.com$
      RewriteRule ^(.*)$ "http\:\/\/www\.new\.com\/$1" [R=301,L]

    The only redirect that is working is the simple one from old.com to new.com. A redirect like old.com/page.htm to new.com, or even to new.com/page.htm, is not working. And actually I really don't know where this redirect is coming from... Can a 301 really be so complicated?
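
    Two things worth checking, sketched below: browsers cache 301s aggressively, so a redirect that "still serves the old page" may just be a stale test (try curl or a private window), and the host conditions can be collapsed into one case-insensitive rule with the dots escaped:

      # test without browser caching in the way
      curl -I http://old.com/page-still-exists.htm

      # one combined, case-insensitive host check with escaped dots
      RewriteCond %{HTTP_HOST} ^(www\.)?old\.com$ [NC]
      RewriteRule ^page-still-exists\.htm$ http://www.new.com/new-target-page.htm [R=301,L]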

    Read the article

  • How do I set up pairing email addresses?

    - by James A. Rosen
    Our team uses the Ruby gem hitch to manage pairing. You set it up with a group email address (e.g. [email protected]) and then tell it who is pairing:

      $ hitch james tiffany

    Hitch then sets your Git author configuration so that our commits look like:

      commit 629dbd4739eaa91a720dd432c7a8e6e1a511cb2d
      Author: James and Tiffany <[email protected]>
      Date:   Thu Oct 31 13:59:05 2013 -0700

    Unfortunately, we've only been able to come up with two options:

      - [email protected] doesn't exist. The downside is that if Travis CI tries to notify us that we broke the build, we don't see it.
      - [email protected] does exist and forwards to all the developers. Now the downside is that everyone gets spammed with every broken build from every pair.

    We have too many possible pairs to do any of the following:

      - set up actual [email protected] email addresses or groups (n^2 email addresses)
      - set up forwarding rules for [email protected] (n^2 forwarding rules)
      - set up forwarding rules for [email protected] (n forwarding rules for each of n developers)

    Does anyone have a system that works for them?
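
    One approach, assuming the mail host supports plus-addressing (Gmail/Google Apps does): keep a single real group mailbox and tag it per pair, so Travis mail always lands somewhere real and each developer can filter on their own name. A hypothetical sketch of the Git config a pair would end up with (the address is illustrative, not a real scheme from hitch's docs):

      git config user.name  "James and Tiffany"
      git config user.email "dev+james.tiffany@example.com"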

    Read the article

  • HTG Explains: Do Non-Windows Platforms Like Mac, Android, iOS, and Linux Get Viruses?

    - by Chris Hoffman
    Viruses and other types of malware seem largely confined to Windows in the real world. Even on a Windows 8 PC, you can still get infected with malware. But how vulnerable are other operating systems to malware? When we say “viruses,” we’re actually talking about malware in general. There’s more to malware than just viruses, although the word virus is often used to talk about malware in general.

    Why Are All the Viruses For Windows?

    Not all of the malware out there is for Windows, but most of it is. We’ve tried to cover why Windows has the most viruses in the past. Windows’ popularity is definitely a big factor, but there are other reasons, too. Historically, Windows was never designed for security in the way that UNIX-like platforms were — and every popular operating system that’s not Windows is based on UNIX. Windows also has a culture of installing software by searching the web and downloading it from websites, whereas other platforms have app stores and Linux has centralized software installation from a secure source in the form of its package managers.

    Do Macs Get Viruses?

    The vast majority of malware is designed for Windows systems, and Macs don’t get Windows malware. While Mac malware is much more rare, Macs are definitely not immune to malware. They can be infected by malware written specifically for Macs, and such malware does exist. At one point, over 650,000 Macs were infected with the Flashback Trojan. [Source] It infected Macs through the Java browser plugin, which is a security nightmare on every platform. Macs no longer include Java by default. Apple has also locked down Macs in other ways. Three things in particular help:

      - Mac App Store: Rather than getting desktop programs from the web and possibly downloading malware, as inexperienced users might on Windows, users can get their applications from a secure place. It’s similar to a smartphone app store or even a Linux package manager.
      - Gatekeeper: Current releases of Mac OS X use Gatekeeper, which only allows programs to run if they’re signed by an approved developer or if they’re from the Mac App Store. This can be disabled by geeks who need to run unsigned software, but it acts as additional protection for typical users.
      - XProtect: Macs also have a built-in technology known as XProtect, or File Quarantine. This feature acts as a blacklist, preventing known-malicious programs from running. It functions similarly to Windows antivirus programs, but works in the background and checks applications you download.

    Mac malware isn’t coming out nearly as quickly as Windows malware, so it’s easier for Apple to keep up. Macs are certainly not immune to all malware, and someone going out of their way to download pirated applications and disable security features may find themselves infected. But Macs are much less at risk of malware in the real world.

    Android is Vulnerable to Malware, Right?

    Android malware does exist, and companies that produce Android security software would love to sell you their Android antivirus apps. But that isn’t the full picture. By default, Android devices are configured to only install apps from Google Play. They also benefit from antimalware scanning — Google Play itself scans apps for malware. You could disable this protection and go outside Google Play, getting apps from elsewhere (“sideloading”). Google will still help you if you do this, asking if you want to scan your sideloaded apps for malware when you try to install them.

    In China, where many, many Android devices are in use, there is no Google Play Store. Chinese Android users don’t benefit from Google’s antimalware scanning and have to get their apps from third-party app stores, which may contain infected copies of apps. The majority of Android malware comes from outside Google Play. The scary malware statistics you see primarily include users who get apps from outside Google Play, whether it’s pirating infected apps or acquiring them from untrustworthy app stores. As long as you get your apps from Google Play — or even another secure source, like the Amazon App Store — your Android phone or tablet should be secure.

    What About iPads and iPhones?

    Apple’s iOS operating system, used on its iPads, iPhones, and iPod Touches, is more locked down than even Macs and Android devices. iPad and iPhone users are forced to get their apps from Apple’s App Store. Apple is more demanding of developers than Google is — while anyone can upload an app to Google Play and have it available instantly while Google does some automated scanning, getting an app onto Apple’s App Store involves a manual review of that app by an Apple employee. The locked-down environment makes it much more difficult for malware to exist. Even if a malicious application could be installed, it wouldn’t be able to monitor what you typed into your browser and capture your online-banking information without exploiting a deeper system vulnerability. Of course, iOS devices aren’t perfect either. Researchers have proven it’s possible to create malicious apps and sneak them past the app store review process. [Source] However, if a malicious app were discovered, Apple could pull it from the store and immediately uninstall it from all devices. Google and Microsoft have this same ability with Android’s Google Play and the Windows Store for new Windows 8-style apps.

    Does Linux Get Viruses?

    Malware authors don’t tend to target Linux desktops, as so few average users use them. Linux desktop users are more likely to be geeks who won’t fall for obvious tricks. As with Macs, Linux users get most of their programs from a single place — the package manager — rather than downloading them from websites. Linux also can’t run Windows software natively, so Windows viruses just can’t run. Linux desktop malware is extremely rare, but it does exist. The recent “Hand of Thief” Trojan supports a variety of Linux distributions and desktop environments, running in the background and stealing online banking information. It doesn’t have a good way of infecting Linux systems, though — you’d have to download it from a website or receive it as an email attachment and run the Trojan yourself. [Source] This just confirms how important it is to only run trusted software on any platform, even supposedly secure ones.

    What About Chromebooks?

    Chromebooks are locked-down laptops that only run the Chrome web browser and some bits around it. We’re not really aware of any form of Chrome OS malware. A Chromebook’s sandbox helps protect it against malware, but it also helps that Chromebooks aren’t very common yet. It would still be possible to infect a Chromebook, if only by tricking a user into installing a malicious browser extension from outside the Chrome web store. The malicious browser extension could run in the background, steal your passwords and online-banking credentials, and send them over the web. Such malware could even run on the Windows, Mac, and Linux versions of Chrome, but it would appear in the Extensions list, would require the appropriate permissions, and you’d have to agree to install it manually.

    And Windows RT?

    Microsoft’s Windows RT only runs desktop programs written by Microsoft. Users can only install “Windows 8-style apps” from the Windows Store. This means that Windows RT devices are as locked down as an iPad — an attacker would have to get a malicious app into the store and trick users into installing it, or possibly find a security vulnerability that allowed them to bypass the protection.

    Malware is definitely at its worst on Windows. This would probably be true even if Windows had a shining security record and a history of being as secure as other operating systems, but you can definitely avoid a lot of malware just by not using Windows. Of course, no platform is a perfect malware-free environment. You should exercise some basic precautions everywhere. Even if malware were eliminated, we’d have to deal with social-engineering attacks like phishing emails asking for credit card numbers.

    Image Credit: stuartpilbrow on Flickr, Kansir on Flickr

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of about 15 developers now, and we are stuck on choosing a technology: the team has broken into two completely opposed camps debating the use of WCF vs. Web API.

    Team A, which supports Web API, brings forward these reasons:

      - Web API is just the modern way of writing services (Wikipedia)
      - WCF is overhead for HTTP; it's a solution for TCP, named pipes, and other protocols
      - WCF models are not POCO, because of the [DataContract] and [DataMember] attributes
      - SOAP is not as readable and handy as JSON
      - SOAP is network overhead compared to JSON (transport over HTTP)
      - no method overloading

    Team B, which supports WCF, says:

      - WCF supports multiple protocols (via configuration)
      - WCF supports distributed transactions
      - many good examples and success stories exist for WCF (while Web API is still young)
      - duplex is excellent for two-way communication

    This debate is continuing, and I don't know what to do now. Personally, I think we should use each tool only in its right place: in other words, use Web API when we want to expose a service over HTTP, but WCF when it comes to TCP and duplex. Searching the Internet doesn't produce a solid answer; many posts support WCF, but we also find people complaining about it. I know the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later; we want to choose with open eyes. Our usage would be mostly for the web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent), though, we might need distributed transactions. What should I do now? How do I manage this debate in a constructive way?

    Read the article
