Search Results

Search found 13652 results on 547 pages for 'full'.


  • sub application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub-application, cms. In a rather convoluted way, the content of our cms system lives in this sub-app under cms\content\{generatedfoldername}. So to access an image in this content, the full URL is http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice), and this works just fine.

    Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the URL http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error: "There is a problem with the resource you are looking for, and it cannot be displayed." While you do have to log into the cms system to access any of the admin pages, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also it's a 500 error, rather than a 403.

    I'm sure I must be missing something obvious here: will the virtual directory use the permissions defined in the parent application, or those of the sub-application it points to? Or is there some other permission I may have missed? Sorry, that was a bit long, thanks for reading all the way down here! (I also must point out that I'm pretty new to server management.)

    Edit: we also have <location path="." inheritInChildApplications="false"> specified in the web.config of the parent app, so it's hopefully not the issue described in this config file hierarchy article.

    Read the article

  • Copied a file with winscp; only winscp can see it

    - by nilbus
    I recently copied a 25.5GB file from another machine using WinSCP. I copied it to C:\beth.tar.gz, and WinSCP can still see the file. However, no other app (including Explorer) can see the file. What might cause this, and how can I fix it?

    The details that might or might not matter:

    - WinSCP shows the size of the file (C:\beth.tar.gz) correctly as 27,460,124,080 bytes, which matches the file size on the remote host.
    - Neither Explorer, cmd (the command prompt, with dir C:\), the 7-Zip archive program, nor any other File Open dialog can see the beth.tar.gz file under C:\.
    - I have configured Explorer to show hidden files.
    - I can move the file to other directories using WinSCP.
    - If I try to move the file to Users\, UAC prompts me for administrative rights, which I grant, and I get this error: "Could not find this item. The item is no longer located in C:\."
    - When I try to transfer the file back to the remote host in a new directory, the transfer starts successfully and transfers data. The transfer had about 30 minutes remaining when I left it for the night.
    - The morning after the file transfer, I was greeted with a message saying that the connection to the server had been lost. I don't think this is relevant, since I did not tell it to disconnect after the file was done transferring, and it likely disconnected after the transfer finished.
    - I'm using an old version of WinSCP, v4.1.8 from 2008.
    - I can view the file properties in WinSCP: Type of file: 7zip (.gz); Location: C:\; Attributes: none (Read-only, Hidden, Archive, or Ready for indexing); Security: SYSTEM, my user, and the Administrators group have full permissions (everything other than "special permissions" is checked under Allow for all three users/groups).

    What's going on?!

    Read the article

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the config I have:

        CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (Cores 8)
        RAM: 15975 MB
        Disk /dev/sda doesn't contain a valid partition table
        Disk /dev/sdc doesn't contain a valid partition table
        Disk /dev/sdb doesn't contain a valid partition table
        Disk /dev/sda: 120.0 GB (=> 114 GiB)
        Disk /dev/sdc: 3000.6 GB (=> 2861 GiB)
        Disk /dev/sdb: 3000.6 GB (=> 2861 GiB)

    /dev/sda is a 120GB SSD. This is where I have Ubuntu/LAMP installed, and it's the drive that will run my site. With the account I got two other drives of 3000GB each, which I really don't need, but they came with the account. I figured I could use these to back up my main 120GB drive. So a couple of things I wondered were: should I use these drives for backups, and how should I back up?

    The data I want to back up is a user uploads directory full of images, and the database. Everything else is either in a code repo or backed up some other way. For example, it would be nice to know there is a disk image of the 120GB drive somewhere that I can copy over should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and the database dump. Thanks for your advice! (Also, I'm happy to not use the two other drives and back up elsewhere if that's more sensible.)
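    One minimal nightly-backup sketch along those lines, assuming the uploads live in /var/www/uploads, the database is MySQL, and one of the 3TB drives is mounted at /mnt/backup (all of these paths, names and credentials are assumptions, not taken from the setup above):

        #!/bin/sh
        # Nightly backup sketch: copy uploads and dump the database to the spare drive.
        set -eu
        DATE=$(date +%F)
        DEST=/mnt/backup

        # Mirror the uploads directory; --delete keeps the copy in sync with the source.
        rsync -a --delete /var/www/uploads/ "$DEST/uploads/"

        # Dump the database; keep one dated dump per night.
        mysqldump --single-transaction -u backup -pSECRET mydb | gzip > "$DEST/db/mydb-$DATE.sql.gz"

        # Drop dumps older than 30 days so the drive does not fill up.
        find "$DEST/db" -name '*.sql.gz' -mtime +30 -delete

    Hooked into cron (for example a script in /etc/cron.daily/), this covers the two items mentioned above; a dd image of the 120GB drive could be added later if a full bare-metal restore path is wanted.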

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.). It can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (200ms ping is a good day for me).

    Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26Mb down, 10Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server, and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had. But the proxy server is a quad-core Xeon, and the receiving computer is a pretty decent i7, so that shouldn't be a concern.

    I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser at a proxy server, rather than install software on my client machine.

    EDIT: I thought about my question a little more, and realized I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
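    Since some client-side software is acceptable per the edit above, one low-setup way to get a compressing proxy is an SSH tunnel with compression enabled, used as a SOCKS proxy. A sketch, assuming SSH access to the work server (the host name and local port are placeholders):

        # Open a compressed SSH tunnel and expose it locally as a SOCKS proxy on port 1080.
        # -C enables compression of the tunnelled traffic, -N skips running a remote shell.
        ssh -C -N -D 1080 user@work.example.com

        # Then point the browser's proxy settings at SOCKS5 localhost:1080.

    Compression only helps with compressible data (text, HTML, uncompressed assets); already-compressed HD video and installers will shrink very little, so the gain there would mostly be limited to headers and retransmits.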

    Read the article

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    My three system partitions are created with LVM on a LUKS partition (dm-crypt). These are /home, / and swap. The filesystem is ext4. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, so they can access my encrypted partitions. I don't want these people to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.)

    But still I haven't found any solution that wipes this free space successfully. I tried

        dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so my partition was completely full of data. df said /dev/mapper/home was 100% used with 0 sectors available. But I could still recover gigabytes of data with photorec, although I selected to recover just from the free space. photorec displays: /dev/mapper/home - 340 GB / 317 GiB (RO), but df displays that the size of /home is just 313G. Why are there these differences, and what does the 340GB mean? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but can access to recover. I also checked for corrupted sectors, but there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?
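    For reference, a common variant of the fill-and-delete approach described above, written as a sketch (the file names are placeholders); it fills the free space in large blocks, catches the leftover tail with small blocks, and flushes everything to disk before deleting:

        # Fill the free space on /home with zeros (dm-crypt turns them into ciphertext on the way down).
        dd if=/dev/zero of=/home/wipefile bs=1M || true      # stops when the filesystem is full
        dd if=/dev/zero of=/home/wipefile2 bs=512 || true    # catch the last partially-free blocks
        sync                                                 # make sure the data actually hits the disk
        rm /home/wipefile /home/wipefile2

    Note that this only overwrites blocks the filesystem considers free: if the fill is run as a regular user, the roughly 5% of blocks ext4 reserves for root are never written, and the slack space at the ends of existing files is not overwritten either, which may account for part of what photorec still finds.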

    Read the article

  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5 64-bit platform. It has SCSI disks in RAID1, 16GB of RAM and a dual-core CPU, running Oracle 10g Release 2. This would perhaps be a decent platform for running the DB only, but the same server, in a very simple "A-A mode" cluster, also runs Tomcat, and there are several Java servlets running on it. Sadly there is no caching platform; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMPP platform (Apache, PHP, MySQL, PostgreSQL).

    PROBLEM: Because the server runs both Tomcat JSP/Java and Oracle 10g, with no caching, the server keeps going down. Often, sadly.

    QUESTION: What are my options for improving the performance of all these different apps?

    - Connection pooling? For example, in the PostgreSQL world we have PgBouncer, which really helps things. Does Oracle have something similar? Or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java.)
    - Any "SQL cache", as in the MySQL and PostgreSQL world?
    - Any other kind of application cache, like "APC" or "eAccelerator" in the PHP world? The "OSCache" stuff from the Java world (a JSP thing I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)?
    - What else?

    Sorry if this is a noob question. I have googled and googled, but the problem is I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!
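    On the connection-pooling point: Tomcat ships with a pooling JNDI DataSource (Commons DBCP in this Tomcat generation), which plays roughly the role PgBouncer plays, only inside the servlet container. A sketch of the context.xml entry, where every name, credential and pool size is a placeholder:

        <!-- conf/context.xml (or the app's META-INF/context.xml): pooled JNDI DataSource sketch. -->
        <Resource name="jdbc/appdb"
                  auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@localhost:1521:ORCL"
                  username="appuser"
                  password="secret"
                  maxActive="20"
                  maxIdle="5"
                  maxWait="10000"
                  validationQuery="SELECT 1 FROM DUAL"/>

    The servlets then look the pool up once via JNDI (java:comp/env/jdbc/appdb) instead of opening their own connections, which caps the number of Oracle sessions the Tomcat side can create and is usually the first step before looking at Oracle-side options.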

    Read the article

  • Static IP addressing issue in Ubuntu on BeagleBone Black Rev C

    - by Stringfellow
    I have my BBB configured to use a static IP address, using the following in the file /etc/network/interfaces:

        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
            network 192.168.0.0

    This seems to work OK on boot, but when the ethernet cable is unplugged and then plugged back in, I lose the IP address. Any ideas what's going on here?

    Another weird symptom: if I boot the BBB with the network cable plugged in but the switch it's plugged into turned off, I'll get my static IP. But when I turn the switch on, I'll get a DHCP-assigned address, even though I have it configured with a static IP address.

    One last thing: if I ifdown eth0, the interface will be gone when I do an ifconfig. If I wait a few seconds, though, and then re-run ifconfig, it will reappear, without an IP address. (Before I disabled IPv6, I used to get an IPv4 DHCP address in this case... weird.) When that happens, I get a message like this in /var/log/messages:

        Apr 23 20:32:06 beaglebone kernel: [  737.170172] libphy: 4a101000.mdio:00 - Link is Up - 100/Full
        Apr 23 20:32:06 beaglebone kernel: [  737.170304] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

    Here's my uname -a:

        root@beaglebone:/etc# uname -a
        Linux beaglebone 3.8.13-bone47 #1 SMP Fri Apr 11 01:36:09 UTC 2014 armv7l GNU/Linux

    Any ideas what's going on here?
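    For comparison, a fuller static stanza for /etc/network/interfaces, written as a sketch: the gateway and nameserver values are assumptions (the stanza above has neither), auto brings the interface up at boot independently of hotplug events, and dns-nameservers only takes effect when the resolvconf package is installed.

        # /etc/network/interfaces sketch: static eth0 with gateway and DNS.
        auto eth0
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
            gateway 192.168.0.254
            dns-nameservers 192.168.0.254 8.8.8.8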

    Read the article

  • How do I fully share a Hard Drive on my Local Network?

    - by GingerLee
    I have 4 computers connected to a router (DD-WRT). My main PC is Windows 7 (Home Premium). This machine has 2 hard disks: HD1 is used for my OS and the other (HD2) is used to store files. My 3 other machines are: 1. an Ubuntu desktop that I use to learn about Linux, 2. a Mac OS X laptop, and 3. a netbook running Windows 7.

    How do I easily share HD2 with my other machines? I would like all my machines to have full access and permissions to HD2. However, I would like to RESTRICT access to only PCs that are connected to my router (either via LAN or WiFi). (By the way, I know this is not very secure due to WiFi vulnerabilities; however, I currently restrict WiFi connections on my router by MAC address.)

    Extra info: I have already tried to use the Windows folder sharing feature: I right-click the icon of HD2 and click on the Sharing tab, but in the sub-window labeled "Network File and Folder Sharing", the "Share" button is grayed out. I can click on "Advanced Sharing", but that just takes me to a screen in which I have to set certain permissions. What is not clear to me is: how do I set a criterion that shares HD2 with all computers connected to my router?

    Read the article

  • Are there any benchmarks showing difference between hardware virtualisation enabled/disabled?

    - by Wil
    I have a 13" sub-laptop/large netbook with an AMD Athlon Neo X2 L335, and I chose this one because it supports hardware virtualisation. In the end I hardly do any virtualisation on it; however, when I do... it is fast. To my shock, I went into the BIOS and saw that virtualisation was disabled! I turned it on and I see no speed difference... or at least none that I can tell. I do not have time to do a full set of benchmarks, and I run quite a bit of software on the host, so it wouldn't be scientific. I have searched quite a few places and I just cannot find any benchmarks showing the difference between the virtualisation bit being enabled and disabled on the same hardware. Does anyone have any benchmarks they have seen that they can share?

    In addition, I know there was an uproar a while ago when Sony disabled hardware virtualisation on some models and only offered it in their higher models as a premium feature. However, apart from forcing an up-sell, are there any benefits to having it disabled, e.g. battery/heat? I just can't find any information and can't work out why it would be disabled by default.

    Edit: To add, the only thing I can find is that without it you cannot perform x64 virtualisation as fast. This is the only downside I can find. However, if this is the only difference, then I am still interested in the second part of the question: why offer the option to disable it?

    Read the article

  • Outbound ports to allow through firewall

    - by dunxd
    This question was asked before, but in a rather general way. I'm asking more specifically based on my current requirements.

    We have a number of remote offices made up of a bunch of PCs and an ASA 5505, which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back to us.

    I plan to allow the following TCP ports to support commonly required resources:

    - POP3 (110 and 995)
    - HTTP (80 and 443)
    - IMAP4 (143 and 993)
    - SMTP (25 and 465)

    The question really is, what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up?

    Just to note: I'll also be setting up monitoring systems to keep an eye on the ports we do allow. Any of the above could be misused, of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with.

    See also: outbound ports that are always open
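    A sketch of what that standard config could look like as an ASA access list applied to the inside interface; the list name is a placeholder and only the ports listed above plus DNS are shown:

        access-list OUTBOUND extended permit udp any any eq domain
        access-list OUTBOUND extended permit tcp any any eq domain
        access-list OUTBOUND extended permit tcp any any eq smtp
        access-list OUTBOUND extended permit tcp any any eq 465
        access-list OUTBOUND extended permit tcp any any eq pop3
        access-list OUTBOUND extended permit tcp any any eq 995
        access-list OUTBOUND extended permit tcp any any eq imap4
        access-list OUTBOUND extended permit tcp any any eq 993
        access-list OUTBOUND extended permit tcp any any eq www
        access-list OUTBOUND extended permit tcp any any eq https
        access-group OUTBOUND in interface inside

    Once an ACL is applied there is an implicit deny at the end, so anything not listed (NTP on UDP 123 is a common omission) has to be added explicitly on request.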

    Read the article

  • Arduino IDE "Launch4j" error

    - by John
    I have a computer running Windows XP. I am trying to run the Arduino IDE 0022. I double-click arduino.exe, it waits about 30 seconds on the startup title screen, and then it gives me this error:

        Launch4j: an error occurred while starting the application

    My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting them with some different files), I get an error that doesn't allow me to do so:

        Cannot delete awt.dll: Access denied
        Make sure the disk is not full or write protected and that the file is not currently in use.

    The only way to delete the file is by restarting the computer. So something must still be trying to run after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes). I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried:

    - Different Arduino IDE versions
    - Updating Java
    - Opening arduino.exe as Administrator

    Nothing has worked. Anyone have any suggestions?

    Read the article

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of two hard discs in a software raid with a LVM on top. The server is running Citrix xenserver. On the hard disk which is still intact, the volume group gets detected well, but only one LV is left. (some hashes replaced by "x")

        # lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
          VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
          LV UUID                x-x-x-x-x-x-vQmZ6C
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                4.00 MiB
          Current LE             1
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0

        root@rescue ~ # vgdisplay
          --- Volume group ---
          VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  4
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                1
          Open LV               0
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               698.62 GiB
          PE Size               4.00 MiB
          Total PE              178848
          Alloc PE / Size       1 / 4.00 MiB
          Free  PE / Size       178847 / 698.62 GiB
          VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume is lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are here in a rescue system. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is to access the root filesystem. But everything was in the LVM. We have only this:

        (parted) print
        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  750GB  750GB  primary               boot, lvm

    And this 750GB LVM volume is exactly what we see on top.
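    One recovery avenue worth checking, sketched here on the assumption that a copy of the old /etc/lvm directory can still be reached from the rescue system: LVM keeps plain-text metadata backups, and vgcfgrestore can re-create missing LV definitions from them without touching the data blocks. The archive file name below is a placeholder.

        # List the metadata backups/archives LVM has kept for this volume group.
        vgcfgrestore --list VG_XenStorage-x-x-x-x-408b91acdcae

        # Restore the most recent archive that still shows all the LVs, then reactivate.
        vgcfgrestore -f /etc/lvm/archive/VG_XenStorage-..._00042.vg VG_XenStorage-x-x-x-x-408b91acdcae
        vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae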

    Read the article

  • How to stop Vista from auto changing video resolution?

    - by bialix
    I have a new Acer Aspire Revo R3600 computer with Vista pre-installed. The computer has an NVIDIA video adapter. When connecting a 17" LCD monitor (LG L1742S) via VGA cable it works fine: I can change the resolution of the display from the maximum of 1920*1024 down to some other value, and after a reboot the settings are restored correctly.

    But when I connect a bigger full HD 1920*1080 display (LG E2250) via VGA cable, then on every boot I have the same problem: I see the boot progress window, then the MS logo, then the welcome screen, then I start to see the desktop, and suddenly the monitor switches off and shows me a message about an unsupported input signal frequency. As I understand it, Vista tries to auto-change the resolution and sets wrong parameters. I've tried to boot into safe mode and into low-resolution mode; every time I have the same problem: Vista boots up and suddenly the monitor stops working. I've tried connecting this monitor to a notebook with Windows XP, and it had no problem working with this display at its native resolution.

    How can I disable this display resolution auto-changer in Vista? Or maybe there is another workaround?

    Read the article

  • Mac always boots with incorrect display gamma (for years now including Lion)

    - by Alex Wayne
    I think somewhere, something got installed, but I have no idea what or how to fix it :(

    Basically, my old MacBook Pro running 10.5 Leopard had a problem where on boot it would show everything on the screen in a very crunched color space: everything below 15% white would be pure black, everything above 85% white would be pure white, and all colors look to be a touch more saturated. It's garish. To fix it, I found that I could boot into almost any fullscreen 3D game. When the game launches, the colors are still off, but when I then quit the game and return to the desktop, everything is normal again. I've noticed Blizzard games work most reliably for this (World of Warcraft or StarCraft 2).

    This problem has followed me through the years. When I upgraded to an iMac I migrated everything over to it, and the issue now happens on the iMac too. I then got a new MacBook Pro for work and migrated my iMac over to that, and it has the problem too. I had thought that it was an OS bug, but upgrading to 10.6 Snow Leopard didn't fix it and neither did 10.7 Lion. Furthermore, I can't find any reference on any forum or help site where anyone else has this problem.

    If anyone has any idea what processes or settings or apps I should look at to figure out why this is happening, I sure would appreciate it! It looks sort of irresponsible when I open my laptop in the office to work and then boot up StarCraft 2 full screen...

    Read the article

  • Why is my rsync so slow?

    - by iblue
    My Laptop and my workstation are both connected to a Gigabit Switch. Both are running Linux. But when I copy files with rsync, it performs badly. I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop

    The laptop has a xfs filesystem with full disk encryption. It uses aes-cbc-essiv:sha256 cipher mode with 256 bits key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation

    The files I copied are on a software RAID-5 over 5 HDDs. On top of the raid is a lvm. The volume itself is encrypted with the same cipher. The workstation has a FX-8150 cpu that has a native AES-NI instruction set which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance

    I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes  939 Mbits/sec
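    A sketch of two rsync variants worth timing against the 22 MB/s baseline, since the dd and iperf numbers above rule out raw disk and raw network as the whole story; host names, the rsyncd module name and paths are placeholders:

        # 1. Take ssh out of the copy path by using the rsync daemon protocol
        #    (requires an rsyncd module on the workstation; 'backup' is a placeholder module name).
        rsync -a --progress rsync://workstation/backup/imgs/ /home/iblue/imgs/

        # 2. Or keep ssh, but pick a cheaper cipher and disable ssh-level compression,
        #    which mostly burns CPU on an already-compressed .tar.bz2.
        rsync -a --progress -e "ssh -c aes128-ctr -o Compression=no" workstation:/mnt/bytemachine/imgs/ /home/iblue/imgs/

    Note also that the laptop's own encrypted write speed of 58.8 MB/s already caps any copy onto the laptop well below the 125 MB/s line rate, so the gap to explain is 22 versus roughly 59, not 22 versus 125.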

    Read the article

  • How to create a WHM/cPanel account, without creating a new sub-domain?

    - by Cyclops
    I have a basic VPS (full root access) with WHM/cPanel, and am learning the ropes. I'm trying to create a new account for an existing domain (mysite.com), and so far WHM won't let me. It either wants a sub-domain or a fake domain, but won't allow two accounts for one domain.

    In the beginning, there was only the root account, and it wouldn't let me log in to cPanel. A quick chat with tech support, and I am informed that I need to create a second account, which I did. So now I have an account, call it ns1me, for the domain mysite.com.

    Now I want to create a django account. I go through the same process, but WHM won't allow me to use mysite.com as the domain for django. The docs recommend a sub-domain, so I fill the box in with django.mysite.com. I then realize that this has actually created a sub-domain: going to django.mysite.com shows me its home directory, along with helpful information about what version of Apache, Python, and other mods it's running (thanks, Apache). I really don't want a sub-domain, so that's out.

    Another chat with tech support, and they recommend a fake domain name, as it won't create anything. Sure enough, using a domain of djangomysite.com works, and WHM allows me to create a django account. But of course, I can't send email to [email protected] (where I could to [email protected]).

    What I want is to be able to create a second account associated with mysite.com (so I can run cPanel logged in as django, send email to [email protected], etc.) without creating a whole new sub-domain or fake domain.

    Read the article

  • mdadm: Replacing array with entirely new drives

    - by hellfur
    I have a server with three 500GB drives, with most of my data in a RAID5 configuration spanning the three of them. I just purchased and installed four 1TB drives, and the intention is to move off of the old drives and onto the new ones. I have enough SATA ports and power connectors to power all seven of my drives at once, so I've kept the old RAID running while I figure out what to do with the new drives. My question is: should I create a whole new array on the 1TB drives, then move everything over and reconfigure Linux to boot from the new md arrays? Or should I just expand the array, swapping out each of the three 500GB drives for a 1TB drive, then adding the final drive?

    I've read up on the mdadm procedure for extending an array by replacing its drives, and it makes sense, but I imagine I would use one of the new drives as a full backup while I move things over, then add that drive back into the array once things are up and running on three of the 1TB drives, so there's some complication in going that route as well... I'm just not sure which is safer/recommended.
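    For reference, a sketch of the first option (build a fresh array on the four new drives and copy over); the device names and mount point are assumptions, and the bootloader/fstab changes still have to follow:

        # Create a new RAID5 across the four 1TB drives (assumes sdd..sdg are the new disks,
        # each carrying a single partition prepared for Linux RAID).
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

        # Put a filesystem on it and copy the data across while the old array stays untouched.
        mkfs.ext4 /dev/md1
        mount /dev/md1 /mnt/newarray
        rsync -aHAX --numeric-ids /olddata/ /mnt/newarray/

        # Record the new array so it assembles at boot.
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    The main advantage over growing in place is that the old 500GB array stays intact as a fallback until the new one has proven itself.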

    Read the article

  • How can I extract data from Toshiba Satellite with a dead Windows installation?

    - by msanford
    I've got a Toshiba Satellite (unknown model number, but bought early 2010) running Windows Vista, which throws a kernel error on boot. We don't have the restore/recovery CD any more to restore the Windows partition. I have managed to boot to a live CD version of Ubuntu 10.10 and have mounted the internal hard drive (which takes nearly 8 minutes).

    I suspect that the hard drive is malfunctioning, however, because copy tasks of even 30 megs of data to an attached and mounted USB flash drive take over an hour, and some files are mysteriously inaccessible (not a permissions issue). When browsing folders, it takes many minutes to populate the folder window, even with a single tiny file. During the copy tasks, the hard disk sounds like it tries to sleep several times in rapid succession, then continues accessing at what sounds like full throughput. I initially tried using scp (from the shell) to copy data, but I encountered the same local problems. I don't know the S.M.A.R.T. status of the hard disk, either.

    Is there a more effective way of going about recovering the data on the internal disk, assuming that I can't use a recovery CD and am too cheap to bring it in (for now, at least)?
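    One common approach for a drive in this state is to stop copying file by file and instead image the whole disk once with GNU ddrescue, then pull files out of the image. A sketch, assuming the internal disk is /dev/sda and a large enough external drive is mounted at /mnt/usb (both are assumptions):

        # From the Ubuntu live session: install ddrescue and image the failing disk.
        sudo apt-get install gddrescue

        # First pass skips bad areas quickly; the log file lets later runs resume and retry.
        sudo ddrescue -d /dev/sda /mnt/usb/toshiba.img /mnt/usb/toshiba.log

        # Retry the remaining bad sectors a few times.
        sudo ddrescue -d -r3 /dev/sda /mnt/usb/toshiba.img /mnt/usb/toshiba.log

        # Mount the Windows partition inside the image read-only and copy files out at leisure.
        # The offset assumes the partition starts at sector 2048; check the real value
        # with 'fdisk -l /mnt/usb/toshiba.img' or parted first.
        sudo mount -o loop,ro,offset=$((2048*512)) /mnt/usb/toshiba.img /mnt/recovered

    The win over scp or Explorer-style copying is that the drive is read sequentially exactly once, and unreadable sectors are skipped instead of stalling the whole transfer.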

    Read the article

  • Windows takes a very long time to shut down even in safe mode

    - by user1526247
    On Windows 7, the computer freezes for about 5 minutes once it gets to "Shutting down...". I can't remember when it started happening; I just lived with it for a while.

    The first thing I tried was a full scan using Microsoft Security Essentials. This did not solve the problem. I then went into msconfig and turned off everything I could get away with in the Startup and Services tabs. This did not solve the problem. I then uninstalled every program on this computer except the most basic ones. This did not solve the problem (I did not uninstall drivers or Catalyst). I then went through and turned off every single service and did a reboot. This did not solve the problem. I then booted into safe mode and just tried shutting it down. The problem even happens in safe mode.

    I have tried examining the event logs, but with no success. They just say things like "blah blah has entered the stopped state", with no real clues about what program is causing me all this grief.

    (It may be worth noting that Ubuntu is installed on the same computer and the Ubuntu boot loader is the one being used.)

    Read the article

  • Why does my router log crazy amounts of blocked traffic on port 1701?

    - by Vlad Seghete
    I have a 2701HGV-B 2Wire modem and router (AT&T). The log is basically full of entries similar to the following, with between a fifth and a third of a second between entries:

        src=86.156.7.170 dst=xxx.xxx.xxx.38 ipprot=17 sport=6882 dport=1701 Unknown inbound session stopped
        src=58.176.22.252 dst=xxx.xxx.xxx.38 ipprot=17 sport=21573 dport=1701 Unknown inbound session stopped
        src=91.221.6.250 dst=xxx.xxx.xxx.38 ipprot=17 sport=25902 dport=1701 Unknown inbound session stopped
        ...

    The source IP is different for every entry. The entries accumulate constantly; every single second that the router is on, several of them appear in the log. The destination is the WAN address of my router. I understand that this is somehow related to VPNs (UDP port 1701 is L2TP), but I don't know enough to figure out why my router is getting bombarded with requests for a VPN session. Is there anything fishy going on, or is this normal? If it is normal, how do I keep these entries from spamming my log files? Since there are about two or three of them every second, everything else gets drowned out.

    Read the article

  • Virtual Server HDD shrinks without apparent reason

    - by Christian
    We have a hosted virtual Linux server, and in the last few months, every now and then the HDD shrinks from 400GB down to the exact byte count that is in use. All existing data can be downloaded and displayed without a problem, but we can't upload or edit any files because of the "full" hard drive. I have a screenshot of the df output where "Size" should be 400GB but shows the shrunken value instead.

    This has happened twice before, and again today. The previous times, when I reported the issue to the host, they said "that isn't possible, you must be doing it wrong", but soon after the call the problem vanished without us doing anything, so I suppose they have some kind of problem they're not willing to admit. Even after the fact, they acted like nothing was wrong and wrote me a mail in which they explained that I can use "df -h" to view available disk space (well, duh, how do you think I noticed this particular issue?). Questions about if and what they had done were ignored. It has happened around the 25th to 28th of the month, so I suspect that they might have a cronjob running every 30 days or so which wreaks havoc with some VM configs.

    I just want to understand the problem, but the host's support hasn't been very helpful in that regard. I have tried googling the issue, but any combination of search terms I can come up with just gives me tutorials on how to change the HDD size in a virtual machine.

    a) What could be the cause of a shrinking HDD on an Ubuntu 12.04.3 LTS server? Could there be anything in our virtual machine, or is it more likely to be an issue with the VM host?
    b) Can I do anything about it without needing to contact the host's support?
    c) Is there any way I can prevent this from happening at all?

    Read the article

  • IIS_IUSRS cannot access files uploaded and created by Network Service - error 401.3

    - by Max
    Let me rephrase my question, as I have investigated further.

    The problem: I have a PHP script that is used to upload images on my Windows Web Server 2008 machine. The files are created in the correct directory. They are created and owned by the user Network Service, and Network Service has full access to the uploaded file. But as soon as I try to access the uploaded file (mostly an image) via HTTP, I get a 401.3 Not Authorized error.

    Now, if I right-click the inaccessible image and grant the IIS_IUSRS group read permissions via the Security tab, the image can be accessed! By default, IIS_IUSRS has no access at all to the uploaded file. The directory containing the image files has the correct access rights set, but each file that is newly uploaded to the directory lacks permissions for IIS_IUSRS.

    The question: How can I grant IIS_IUSRS access to newly uploaded files by default? The appPool of the website has its identity set to its default; I also tried setting it to "networkIdentity" or so, but that did not work either.
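    One thing worth checking along these lines: if the grant on the upload directory is made inheritable, files created in it normally pick it up automatically. A sketch using icacls (the path is a placeholder); note that files PHP first writes to its temp directory and then moves can keep their original ACL instead of inheriting, in which case the script would need to re-apply inheritance or copy rather than move.

        rem Grant IIS_IUSRS read/execute on the upload folder, inheritable by new files and subfolders.
        icacls "C:\inetpub\wwwroot\uploads" /grant "IIS_IUSRS:(OI)(CI)(RX)"

        rem Re-apply inherited ACLs to anything already in the folder.
        icacls "C:\inetpub\wwwroot\uploads" /reset /T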

    Read the article

  • How and where do you manage your domain names?

    - by Saif Bechan
    In the past several years of doing web development I have often needed to buy new domain names. I also changed registrars a lot, so over the years I have multiple domain names scattered over different registrars all over the world.

    Now I want to bring a little structure into my business, and I am at the point where I want easy, convenient control over my domain names. Does anyone have an idea of the best way to bring some structure to this? I have made some suggestions; maybe you can comment on them for me.

    1) Just leave it as it is. I can leave everything as it is. To make adjustments I have to log into different panels, and for some registrars I have to email the changes.

    2) Transfer all the domains to one registrar. This will cost a lot, about 10 USD per domain name. But if I can find a registrar where I have full control over DNS, this is worth looking at.

    Can you give me some comments on how you are doing things now? Maybe also which registrar you prefer for doing things.

    Read the article

  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets:

    - A virtual private server (Debian Lenny) with root access for myself, running SSH, apache2 and mysql
    - Some unused disk space
    - Some friends in need of hosting

    The problem

    I would now like to do the following:

    - Host one or several domains per friend
    - My friends should have full access to their domains, including running PHP scripts, for example
    - My friends should not be able to poke around in other directories
    - The security of my server should not be compromised by faulty PHP scripts

    To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access. I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario?

    Partial solution

    I already came up with the following plan:

    - Add chrooted SSH users for my friends
    - Add Apache vhosts per user (point the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.)

    But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.
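    One way to get part of that confinement for the web scripts without a full chroot is PHP's open_basedir, set per vhost; a sketch assuming mod_php and the directory layout from the plan above (the domain and path reuse the examples already given there):

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /home/alice/example.com
            # Confine alice's PHP scripts to her own tree plus a scratch directory.
            php_admin_value open_basedir "/home/alice/example.com:/tmp"
            php_admin_flag  allow_url_include off
        </VirtualHost>

    open_basedir only restricts which paths PHP's file functions may touch; it is not a full sandbox, so running each user's scripts under their own UID (suEXEC with FastCGI, or suPHP) would be the heavier-weight complement to the chrooted SSH accounts.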

    Read the article

  • FDE / SSD - partition and leave some unencrypted?

    - by Web Design Hero
    I just bought a used beast of a desktop PC. The system drive is set up as a RAID 0 of two Intel 510 SSDs, 128GB each. I will probably not have too many programs beyond Office, and maybe Adobe CS if I spring for it; I will be keeping big data on a regular HDD.

    My question is about setting up TrueCrypt with my configuration. I have not previously done full disk encryption, but I feel that it's probably a good idea. I have done some speed tests using file containers on the HDD and the SSD with TrueCrypt. While there is a huge hit with the SSDs and TrueCrypt, it still outperforms the HDD on its own by a good margin, so I think I will be okay for my needs with TrueCrypt.

    I have seen in a few places that they recommend partitioning the drive and leaving some of the SSD outside TrueCrypt. Does this really make a difference? If so, how much should I leave? Will there be any issue with the RAID 0 configuration? I am not really concerned about the wear-leveling issue; I would rather risk losing data and stay secure. But since I don't necessarily need all that space, I would like to optimize my setup for security and speed.

    Read the article
