Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • LogMeIn Hamachi for Linux

    - by tlunter
    So far, most of my work using LogMeIn Hamachi has been from either a Mac OS X or Windows system to a Windows or Linux computer. Recently I purchased a mini computer and have been running Ubuntu Server on it as my little server. I knew LogMeIn had a Linux client that is command line only, but I often do all my work via the command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the hamachi daemon without sudo, and was able to connect to LogMeIn's service. I decided to set up my Linux server as a git server as well, and set it up correctly. The thing is, the server is behind my school's firewall and I need to use Hamachi to get around that. Since most of the time I was using either Mac or Windows, I never had an issue sshing onto any of my computers, since LogMeIn is fully featured for those OSs. From Linux (Arch), though, it seems like the client cannot correctly route to the LogMeIn IPs. I know from Windows I can connect to both of the Linux computers. From Linux (Arch), though, I can't connect to my Mac, Windows, or Linux server: it just keeps dropping the connection. I was wondering if there is some configuration I need to make for this to work. I suspect it will be a static configuration, since I assume the computer doesn't understand that 5.*.*.* actually refers to another IP:port. Has anyone had any experience getting this to work?
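    If the client really isn't routing to the 5.*.*.* addresses, the first thing to check is whether a route for Hamachi's 5.0.0.0/8 network exists at all. A minimal sketch, assuming the tunnel interface is named ham0 (the interface name and peer address are illustrative):

        # Check for a route covering the Hamachi network
        ip route show | grep "^5\."

        # If it is missing, add one against the tunnel interface
        sudo ip route add 5.0.0.0/8 dev ham0

        # Then test reachability of one of the peers
        ping -c 3 5.1.2.3   # replace with a real peer's Hamachi address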

    Read the article

  • More advanced 'Apple Automator' software?

    - by OrangeBox
    Is there any software similar to Automator but more advanced? In our situation we have two files with the same name: one is a MOV, the other XML. We want to use some of the metadata within the XML to rename both files. Then we want to re-arrange the contents of the XML file so that it is compatible with another piece of software we use (I think this is called mapping). Essentially, some software that takes a bunch of variables from an existing file and performs file actions on them. I imagine this would be an easy task using AppleScript, but I'm wondering if there is an OS X application similar to Automator that can do the above? Questions are: Is there software that can do the above? Could Automator achieve this? What is the name of this process? If no such software exists, what would be the best kind of script to use? e.g. an AppleScript, a Python script, etc.
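    For the renaming half, a small shell script may be all that's needed. A sketch, assuming the new name lives in a /clip/title element (the XPath is an assumption; adjust it to the real XML layout) and that the installed xmllint supports --xpath:

        #!/bin/sh
        # Rename each MOV/XML pair after a value found inside the XML.
        for xml in *.xml; do
            base="${xml%.xml}"
            # Extract the new name from the XML (the XPath is hypothetical)
            newname=$(xmllint --xpath 'string(/clip/title)' "$xml")
            [ -n "$newname" ] || continue
            mv "$xml" "${newname}.xml"
            mv "${base}.mov" "${newname}.mov"
        done

    The re-mapping step could be handled the same way with an XSLT stylesheet and xsltproc, which also ships with OS X.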

    Read the article

  • Changed folder contents not updating in Finder (OS X 10.8)

    - by speedofmac
    I've been having this problem for a number of days now. When I update the contents of a folder using terminal commands, by decompressing archives, or by using "Save As", the affected folder often fails to reflect these changes. Sometimes it takes quite a while for any files to be shown, even if the total size of the folder is fairly small (< 10 MB). I'm running an '09 MacBook Pro 13", so it isn't the newest system, but it certainly has enough oomph to display a list of files in a folder. Does anyone know what might be causing this?
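    A quick way to separate a stale Finder view from a real filesystem problem is to compare what Terminal sees with what Finder shows, then force Finder to relaunch. A minimal sketch (the path is illustrative):

        # Confirm the files are actually there on disk
        ls -la ~/Downloads/some-folder

        # Relaunch Finder so it rebuilds its view (it restarts automatically)
        killall Finder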

    Read the article

  • Solaris Fibre Channel target - Configure QLogic QLA2340

    - by growse
    I'm currently trying to set up a small storage system as a fibre channel target. This is for testing, so I'm currently using Solaris (Nexenta) and a QLogic QLA2340 HBA. For some reason, the qlc and qlt drivers don't support the QLA2340, so I'm using the qla2300 driver from QLogic's website. I've also got the scli utility installed for configuration. The HBA is detected by the system. That said, it's not clear how I get from this point to a point where I have a ZFS volume being exposed as an FC target. I was originally following this guide (http://www.youtube.com/watch?v=yzEBd3l7Qn4), but it seems that without the qlc/qlt drivers, Sun's configuration tools won't work. Does that imply that COMSTAR won't work either? What's the best way to expose an FC target with this setup? Most of the options I'm seeing in scli complain that the port state is LinkDown (it is; I've not plugged anything in yet). Do I have to have my FC client plugged up and working before I can configure the target? Apologies for the slight vagueness of the question, but I'm not overly familiar with the terminology.
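    For reference, the COMSTAR path does depend on the qlt target-mode driver binding to the HBA, so with only the qla2300 initiator driver loaded no FC target port will appear. If qlt can be made to bind, exposing a ZFS volume looks roughly like this. A sketch, assuming a pool named tank and a working stmf service (the GUID is illustrative; the real one comes from sbdadm's output):

        # Create a ZFS volume to export
        zfs create -V 20G tank/fctest

        # Register it with COMSTAR as a logical unit
        sbdadm create-lu /dev/zvol/rdsk/tank/fctest

        # Expose the LU (to all hosts, absent host/target groups)
        stmfadm add-view 600144f000000000000000000001

        # Verify an FC target port actually exists
        stmfadm list-target -v

    The LinkDown state shouldn't block configuration itself, but nothing will log in to the target until an initiator (or at least a switch) is on the other end of the cable.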

    Read the article

  • Java 7 update 6 installation fails on Windows 7 when Chrome is default browser

    - by ali1234
    I am configuring a brand new Lenovo U410 system with Windows 7 Home Premium for a user. I received the system direct from the shop. As part of the configuration I installed Java using the online installer, which worked correctly. Later, due to a mistake I made, I needed to restore the system to factory default. The factory restore FORMATS C:\ and puts back (supposedly) the exact factory configuration. However, after doing this, I was no longer able to install Java successfully using the same method I used before. Now, whenever I attempt to use the online Java installer, the following happens. First of all, a window always appears: "Welcome to Java", "Downloading Java Installer...". After a short time this window disappears, and then one of three things happens:

    1. The very first time I do this after a factory reset, I get a Windows error report, which contains this information:

        Application Name:         JavaSetup7u5.exe
        Application Version:      7.0.50.6
        Application Timestamp:    4feacd84
        Fault Module Name:        JavaIC.dll
        Fault Module Version:     9.9.9.9
        Fault Module Timestamp:   4f2343d6
        Exception Offset:         000052cb
        Exception Code:           c0000417
        Exception Data:           00000000
        OS Version:               6.1.7600.2.0.0.768.3
        Locale ID:                1033
        Additional Information 1: 773c
        Additional Information 2: 773cd78cf06816f8246f359fa270f3bb
        Additional Information 3: f51a
        Additional Information 4: f51aaea7d22f36fa9e3a626b5a5cd1c3

    2. Subsequent runs produce this error message: "Error: Java(TM) installer - Downloaded file C:\Users\\AppData\Local\Temp\fx-runtime.exe is corrupt."

    3. Nothing happens at all.

    I believe the "corrupt" message is a red herring. Running the installer again causes a different error because the files were downloaded and the installer crashed before it could clean up. This isn't the actual problem: when this happens the installer deletes the downloaded files, and when you run it for the third time it downloads everything again and does the JavaIC.dll crash. I suspect the downloader is appending to the existing files or something, causing the corruption.

    I have tried all of the above as Administrator and as a normal user. I have tried resetting the system to factory defaults several times. I have tried downloading with Chrome and Internet Explorer 9. I have tried uninstalling all anti-virus software and disabling the Windows firewall entirely. The only thing which makes a difference is running the installer in Windows XP compatibility mode, which allows the installation to complete. I know I can work around this error by using the offline installer, so please don't post that as an answer; I am looking for an explanation of the root cause. Additionally, if I use the offline installer, the updater does not work. The updater also does not work if I install in XP mode. The updater fails because it works by just downloading the newest online setup and running it. Also remember that the installers are digitally signed. The signatures verify correctly, so there is no way in hell that this is caused by corrupted downloads.

    Some theories I have:

    1. The Java setup files on java.com actually changed between the first successful install and my later attempts. Seems unlikely, as none of the version numbers have changed. However, I have seen a couple of reports of this error which showed up in the past 24 hours. This looks like the most likely explanation right now: http://www.oracle.com/us/corporate/press/1735645 - Oracle released 7 update 6 two days ago. Careful inspection of the installers reveals that they are in fact attempting to download .6, not .5 as the download page claims. (Not actually correct: only the update tool tries to install 7u6; the online installer still tries 7u5. However, 7u6 being released two days ago is too much of a coincidence to ignore. Update: the 7u6 online installer is available from the Oracle Technology Network. It crashes in exactly the same way.)

    2. The factory reset software uses GMT-8 and I am on GMT-1. As a result, after a factory reset, any software which cares to check would think that the system was restored 7 hours in the future, due to Windows' awful policy of storing local time in the system clock. This could be confusing a certificate check or similar. Update: I discovered that this does cause Windows Update to fail. The workaround, setting the clock back before starting the factory reset, does not enable Java to install correctly.

    3. The factory reset image isn't really the same as what is installed in the main partition when you buy the system. Naughty Lenovo.

    4. The installer appears to crash while installing or displaying something to do with the Ask.com toolbar. That seems to be what JavaIC.dll does.

    5. Microsoft Tuesday was the 14th. Some update in that could be causing this. However, I'm factory resetting the machine every time, so unless the patches get slipstreamed into the recovery image, or there is some mechanism by which they get silently installed even if updates are disabled, I don't see how this can be the cause.

    Major breakthrough: the default browser on Lenovo systems is Google Chrome. I noticed that the JavaIC.dll "sponsor check" actually does a check on your default browser in order to decide which sponsor ad to display. Normally that would get you the Ask toolbar on IE9. But that toolbar doesn't work on Chrome, so the installer tries to display a different ad, and the different ad is what causes the crash. Changing the default browser to IE9 allows the installer to run correctly. So this looks like a genuine bug in the sponsor ad code in the installer, caused by a combination of Google Chrome as default browser and not being in the US. (The installer also checks your location using an IP geolocation service and displays different ads based on that.)

    Read the article

  • OSX: Selecting default application for all unknown and different file types (extensions)

    - by Leo
    I work in cluster computing and am using Mac OS X 10.6. I send off hundreds of computing jobs a day, and each one comes back with a different extension. For example, svmGeneSelect.o12345 is the standard output of my svmGeneSelect job, which is job number 12345. I don't control the extensions. All files are plain text. I want OS X to open any file extension that it hasn't seen before with my favorite text editor when I click on it. Or, even better, set up file association defaults for extension patterns, i.e. TextEdit for extensions matching *.o*. I do NOT want to create file associations for individual files, since each extension will only ever exist once, and I do not want to go through the process of selecting the application to use for each file. Thanks for any help you can offer.
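    One route worth testing is the third-party duti tool, which sets Launch Services handlers per UTI rather than per extension; files OS X cannot otherwise identify generally fall under the catch-all public.data type. A sketch, assuming duti is installed (e.g. via MacPorts) and TextEdit is the desired editor:

        # Make TextEdit the default handler for otherwise-unidentified files
        duti -s com.apple.TextEdit public.data all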

    Read the article

  • Defragment an Exchange Volume

    - by IceMage
    The Scenario: I use a dedicated volume (RAID volume) to store all of the data for my Exchange 2007 server. Today, out of curiosity, I decided to check up on how fragmented the files on this data volume were. To my surprise, the answer is: extremely. So, a three-part question. First and foremost, SHOULD I defragment this volume (after a full backup, of course)? Be specific as to why not if I should not, or why I absolutely should if I should. Second, roughly how much time per gigabyte should I allow for this maintenance window? The drives are all 7200 RPM SATA drives on a hardware RAID 5 controller (PERC 5i/6i, can't remember), and the files are extremely fragmented (over 5000 file fragments per gigabyte). Third, is there something wrong here? It seems to me like the drive shouldn't be this fragmented. Could something be configured incorrectly that could be causing this to happen?
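    Before committing to a window, an analysis-only pass gives hard numbers to plan around. A sketch using the built-in command-line defragmenter (flag syntax differs slightly between Windows Server versions, so treat this as illustrative, and dismount the stores or take the server offline before any real defrag run):

        REM Report fragmentation without moving anything
        defrag D: -a -v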

    Read the article

  • SeLinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following:

        Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname:
        Failed to get default SELinux security context for tiger

    I googled for this and ran across the following thread: http://www.spinics.net/lists/fedora-.../msg13049.html. Here's the part that I think relates to the problem I'm having:

    "What's wrong on my system? Why is it not possible to log in even if SELinux is in permissive mode? Any suggestions?"

    "I'd start by trying to figure out why sshd isn't running in sshd_t (it seems to be running in sysadm_t). Paul."

    "Yes, sshd is running in sysadm_t:

        ps axZ | grep sshd
        system_u:system_r:sysadm_t 3632 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi

        ls -Z /usr/sbin/sshd
        system_u:object_r:sshd_exec_t /usr/sbin/sshd

    Don't know why it's not sshd_t. I didn't modify anything. It's a standard installation of SLES 11 with the default reference policy from Tresys. Maybe this code snippet from policy/modules/services/ssh.te is responsible for that:

        # Allow ssh logins as sysadm_r:sysadm_t
        gen_tunable(ssh_sysadm_login, true)

    Any ideas?"

    "Do you have the boolean init_upstart set to on? If not, try setting it to on. I do not believe the ssh_sysadm_login boolean works currently, but I may be mistaken."

    "Yeah, setting init_upstart to on did the trick! THANKS A LOT! Do you know why this prevents the user from logging in through ssh even if SELinux is set to permissive?"

    OK, so the million dollar question is: where do I set 'init_upstart=1'? It's not clear from context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
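    To answer the final question: SELinux booleans like init_upstart don't live in a config file you edit by hand; they are set with setsebool (persistently with -P). A sketch:

        # Set the boolean and make it survive reboots
        setsebool -P init_upstart 1

        # Verify
        getsebool init_upstart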

    Read the article

  • Messages stuck in SMTP queue - Exchange 2003

    - by Diav
    I need your help, people ;-) I have a problem with messages coming into our Exchange server and ones going out through it. Basically, the messages are stuck in the SMTP queue. A message will come into the server, and I can see it listed under "Exchange System Manager", but if you list the properties of the message queue it says something like:

        00:10 SMTP Message queued for local delivery
        00:10 SMTP Message delivered locally to [email protected]
        00:10 SMTP Message scheduled to retry local delivery
        00:11 SMTP Message delivered locally to [email protected]
        00:11 SMTP Message scheduled to retry local delivery
        etc. etc.

    For an outgoing message the list looks like this:

        10:55 SMTP: Message Submitted to Advanced Queuing
        10:55 SMTP: Started Message Submission to Advanced Queue
        10:55 SMTP: Message Submitted to Categorizer
        10:55 SMTP: Message Categorized and Queued for Routing
        10:55 SMTP: Message Routed and Queued for Remote Delivery

    And that's the end - since then the status hasn't changed; the message sits in the queue. I am forcing a connection from time to time, but without effect. I checked the connection with the smarthost (used telnet for that) and everything seems to work correctly, so the problem is probably on the Exchange side. I am using Exchange Server 2003 running on Small Business Server 2003. I don't have any antivirus installed on the server. Remaining free space on each partition is over 3 GB; on the partition with the databases it is over 12 GB. All was working well and without problems since 2005; problems started in the middle of this June - messages started going out and getting stuck almost randomly (I don't see a pattern yet: some are going out, some are not, some are going after several hours). I don't know what to do or what to check next, so please, any ideas? Best regards, D.

    edit: Priv1.edb is 14.5 GB and priv1.stm 2.6 GB - together those files are more than 16 GB. Can that be the reason? If yes, then what? Indeed, I hadn't thought it could have something in common with my problem, but several users reported recent problems with Outlook Web Access - they can log in, they see the list of their mails, but they can't see the content of their emails. When they connect with Outlook 2003/2007 there is no such problem; only with OWA.

    edit2: So... it works now, and I have to admit that I am not really sure what the problem was (I hope it won't come back). What I did:

    1. Cleaned up some mailboxes to reduce their size
    2. Dismounted the Information Store
    3. Defragmented the database files (I used eseutil from c:\program files\exchsrvr\bin: eseutil /d "g:\data base\Exchsrvr\MDBDATA\priv1.edb")
    4. Mounted the Information Store back

    ...and before I managed to do anything else, my queue started moving - elements which had been stuck there for days started moving, and after a few minutes everything was sent, both outside and locally. But: priv1.edb is still big (13,884,203,008 bytes), and so is priv1.stm (2,447,384,576 bytes), so this is probably not an issue of file size. And if not this, then what was it? And if it was an issue of file size, then it will soon repeat - is there something I can do to avoid it?
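    One detail worth checking given those numbers: Exchange 2003 Standard enforces a private-store size limit (16 GB before SP2; 18 GB by default with SP2, raisable to 75 GB), and the limit counts the .edb and .stm files together - which is right around the ~16-17 GB reported here. If the server is on SP2, the limit can be raised via a registry value; a hedged sketch from memory (the server name and GUID portions of the key path are per-server placeholders, and the exact value name should be verified against Microsoft's documentation before use):

        REM Raise the private store limit to 30 GB (SP2 only; placeholders in caps)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\SERVERNAME\Private-GUID" /v "Database Size Limit in GB" /t REG_DWORD /d 30

    If the store had hit the limit, the offline defrag would have shrunk it back under, which would explain why the queue suddenly drained.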

    Read the article

  • Help Installing SQL Server 2008 Express Edition

    - by Jordan S
    OK, I am running Windows 7, 64-bit. I cleaned SQL Server 2005 completely off my system, leaving only SQL Compact Edition. I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en and installed SQL Server 2008 Express Edition Service Pack 1. After the install, under my Start menu all I have for SQL configuration tools are the Configuration Manager, Error and Usage Reporting, and the Install Center. I don't have SQL Server Management Studio. So I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=08e52ac2-1d62-45f6-9a4a-4b76a8564a2b&displaylang=en and downloaded SQL Server 2008 Management Studio Express, but when I try to install it I get a warning saying this program has known compatibility issues and that I need to install SQL Server 2008 Service Pack 1. I thought that was what I had installed. So I tried to continue running the install, but I then get an error message that says "Invoke or BeginInvoke cannot be called on a Form before it is opened"... How can I check whether Service Pack 1 is installed or not? What should I do? Also, I rebooted my system and checked for Windows Updates, and it says that Windows is up to date.
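    To check the service-pack level of the installed engine, query it directly with sqlcmd. A sketch, assuming the default Express instance name .\SQLEXPRESS:

        REM ProductLevel returns RTM, SP1, SP2, ... for the connected instance
        sqlcmd -S .\SQLEXPRESS -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')"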

    Read the article

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance, under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of the desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even connected to a domain. I cannot find anything in any firewall or security configuration which is preventing this, but maybe I have overlooked something - can anyone be of any assistance?

        System.Net.WebException: Unable to connect to the remote server
        ---> System.Net.Sockets.SocketException: A connection attempt failed because the
        connected party did not properly respond after a period of time, or established
        connection failed because connected host has failed to respond X.X.X.X:443
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()
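    One difference between an interactive session and a bare scheduled task on Server 2003 is proxy configuration: .NET picks up Internet Explorer's per-user proxy settings, which may effectively only be in place in a logged-on session. If there is any proxy between this server and the targets, importing the IE settings into WinHTTP is a cheap test (this is a guess about the environment, not a confirmed diagnosis):

        REM Show the current WinHTTP proxy settings
        proxycfg

        REM Import the current user's IE proxy settings
        proxycfg -u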

    Read the article

  • What's the best way of handling permissions for Apache 2's user www-data in /var/www?

    - by gyaresu
    Has anyone got a nice solution for handling files in /var/www? We're running name-based virtual hosts and the Apache 2 user is www-data. We've got two regular users plus root, so when messing with files in /var/www, rather than having to run chown -R www-data:www-data all the time, what's a good way of handling this? Supplementary question: how hardcore do you then go on permissions? This one has always been a problem in collaborative development environments. Cheers.
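    A common pattern is to keep the tree group-owned by a shared group, mark the directories setgid so new files inherit that group, and give the group write access; Apache usually only needs read. A sketch, assuming the developers are in a group called dev (the group name is an assumption):

        # Group ownership plus group write across the tree
        sudo chgrp -R dev /var/www
        sudo chmod -R g+w /var/www

        # setgid on directories so new files inherit the group
        sudo find /var/www -type d -exec chmod g+s {} \;

        # Optional: a default ACL guaranteeing www-data read access to new files
        sudo setfacl -R -d -m g:www-data:rX /var/www

    With that in place, www-data only needs ownership of specific writable directories (uploads, caches), not the whole tree.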

    Read the article

  • Going from imagex/AIK to WDT

    - by lross1309
    I have been looking at learning the Windows Deployment Toolkit. I have been using the AIK to create answer files, sysprep to clean the reference computer, and imagex to capture and apply images. I have some .wim files I need to keep (they also have an answer file in them). I was wondering: if I start using WDT, can I import these images and have WDT deploy them, or do I have to start over and capture the images using WDT? If I can import them, how do I deploy them so that WDT uses the image's internal answer file, the same way imagex would? Leon

    Read the article

  • Windows Media Sharing not 'always' being detected by PS3

    - by Ahmad
    I'm having a weird problem with Windows Media Sharing on Windows 7. I have the following hardware in my network:

    PC 1 --- My main PC --- runs Windows 7 Ultimate x64
    PC 2 --- My backup PC --- runs Windows 7 Ultimate x32
    PS3

    PC 1 is my main PC, which has all my data/media on it. PC 2 is a backup PC; I use it maybe once in 2 months, and it has nothing installed apart from some very basic software. The problem is, my PS3 always sees the media sharing service coming from PC 2, but it never initially sees the media sharing service coming from PC 1. Both PC 1 and PC 2 have the same media sharing configuration (allow everything on all devices on all networks). When I restart both PCs, the PS3 will only detect PC 2's media sharing service, not PC 1's. However, here's the twist: when PC 1 is restarted, if I view my 'Network' on PC 2, I do see PC 1's media sharing service, and I'm able to play from it on PC 2 too. To get my PS3 to also see PC 1's media sharing service, I have to do either of the following two things:

    1) Play something from PC 1's media sharing service on PC 2. The PS3 will then magically also detect PC 1's media sharing service.
    2) Go into the Services area on PC 1 and restart the 'Windows Media Player Network Sharing Service'. After this, the PS3 instantly starts to see PC 1's media sharing service.

    Since my PS3 is only about a month old and properly detects PC 2's media sharing service, I think the problem is somewhere in the configuration of PC 1's media sharing service. Also, on PC 1 I have Norton Internet Security 2012 installed, but I've disabled it completely, and have also disabled Windows Firewall (on PC 1 only). Can someone shed some light onto this?
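    Given that bouncing the sharing service on PC 1 wakes it up, a stopgap is to restart that service automatically after boot (e.g. via a scheduled task that runs at startup with elevated rights). The service's short name is WMPNetworkSvc; a minimal sketch:

        net stop WMPNetworkSvc
        net start WMPNetworkSvc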

    Read the article

  • Windows XP boot: black screen with cursor after BIOS screen

    - by Radio
    Here is a weird one. Got a computer with Windows XP. It's getting stuck on a black screen with the cursor blinking. What did I do: boot from the installation CD (recovery option - command line) and run:

        chkdsk C: /R
        copy D:\i386\ntdetect.com c:\
        copy D:\i386\ntldr c:\
        fixmbr
        fixboot

    Chkdsk showed 0 bad sectors and no problems during the scan. dir on C:\ shows all directories and files in place (Windows, Program Files, Documents and Settings). BIOS shows the correct boot drive. Still does not boot. Not sure what to think. Please help.

    UPDATE: Just performed these steps:

    1. Backed up the current disk C: (without the MBR) to an external hard drive using True Image.
    2. Ran a clean Windows XP installation, deleting all partitions and creating a new one. The hard drive booted fine into the Windows GUI installation!!!
    3. Then: interrupted the installation, booted from the True Image recovery CD, and restored the archive of disk C to the new partition. Same issue with the black screen.
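    One recovery-console step not yet on the list is rebuilding boot.ini: with intact files, a repaired MBR, and a repaired boot sector, a stale boot configuration is one more thing to rule out - and a True Image restore of the partition contents would bring the old boot.ini right back, which fits the symptom returning after the restore. A sketch from the XP recovery console:

        REM Scan the disks for Windows installations and rebuild boot.ini entries
        bootcfg /rebuild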

    Read the article

  • 2 NICs, 2 default gateways

    - by andre.dias
    Here is my scenario: I have this server with 2 NICs, each with a different IP, connected to different routers. Almost everything is configured the way I need: traffic coming in on eth0 exits via eth0, traffic coming in on eth1 exits via eth1. And there is a default gateway configured:

        $ route
        default  IP  0.0.0.0  UG  0 0 0  eth0

    With this configuration, traffic generated on the server goes out via eth0 (lynx www.google.com, for example). The problem: the Internet link on eth0 went down today. Traffic coming in on eth1 was fine, no problem. But traffic generated on the server was a problem - the default gateway was out, so no access to the Internet anymore (no more lynx www.google.com). So I added a new default gateway configuration pointing to eth1. For 30 minutes I kept it that way - two default gateways, of which just one was "working" - and everything worked just fine. But then I removed the eth0 gateway entry because, well, two default gateways is kind of weird. My question: is there any problem with keeping these two default gateways, one per NIC, so I don't need to do anything when one link goes down again?

        $ route
        default  IP1  0.0.0.0  UG  0 0 0  eth0
        default  IP2  0.0.0.0  UG  0 0 0  eth1
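    Two default routes can coexist cleanly if they carry different metrics: the kernel always uses the lowest-metric route that is up, and the second entry only takes over if the first is deleted or its interface goes down. Note this is not true failover - if the link dies upstream while eth0 itself stays up, nothing removes the primary route automatically. A sketch with iproute2 (the gateway addresses are placeholders):

        # Primary default via eth0, backup via eth1 (lower metric wins)
        ip route add default via 192.0.2.1 dev eth0 metric 100
        ip route add default via 198.51.100.1 dev eth1 metric 200

        # Inspect the result
        ip route show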

    Read the article

  • DD-WRT router causing IP address conflicts across network

    - by r.tanner.f
    My DD-WRT router has lost its mind! I just set up two DD-WRT routers, one as a WAP (working fine) and one in Client Bridge (routed) mode (the problem). Not long after setup I started seeing IP address conflicts on other machines. The event log always points the finger at my Client Bridge router's MAC address.

    Neighbour table overflow:
    The log on my router is flooded with "Neighbour table overflow" errors. These start a minute or two after boot. The network is rather large, with +200 IP addresses being used in this subnet. The other router shows no such errors.

    Mass ARP requests from 1.1.1.1:
    I'm also seeing constant ARP requests (with the problem router's MAC address) from 1.1.1.1. It seems like it's bugging everything on the network for its MAC address and then promptly forgetting it (or never receiving a response).

    Configuration:
    Model: Buffalo N600
    Firmware: DD-WRT v24SP2-MULTI (03/21/11)
    Wireless Mode: Client Bridge (routed)

    I'm not sure what configuration details are relevant and I'd rather not have comments flooded, so just ping me in this chat if you want to know something. Why is my router stealing IP addresses and how can I stop it?

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for the re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once. Example:

    nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: is now defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

    nodes.pp:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is an additional class which is required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other "best practices"/patterns for handling such dependencies?
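    One common workaround is to guard the shared resource inside the define with defined(), so whichever instance the parser evaluates first creates the directory. This is a sketch, with a caveat: defined() is parse-order dependent, which is why many style guides instead suggest the stdlib ensure_resource() function, or wrapping the directory in a class and include-ing it from the define (include, unlike the resource-style class { ... } declaration, is idempotent, which also keeps the helper class hidden from the module's users):

        define mymodule::addconfig ($param) {
          $config_dir = '/the/directory'

          # Create the directory only if no other instance has declared it yet
          # (parse-order dependent -- assumed acceptable for this module)
          if !defined(File[$config_dir]) {
            file { $config_dir:
              ensure => directory,
            }
          }

          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }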

    Read the article

  • Creating cookieless application on development machine with asp.net

    - by zaladane
    I am thinking about setting up a new domain to host static content on my website and have it cookieless, just like Stack Overflow does with its static domain. So before going ahead and buying the domain and setting it up, I wanted to test it on my development machine first under localhost (I should mention that I am planning on having IIS serve the static files on the new domain). I therefore created a new application under IIS and disabled session state and forms authentication. When my main application needs resources like CSS, images, and JS, I use the path to the "static" application where they are hosted. The problem is that when I look at the request and the response for the requested files, they still have the session_id cookie set, as well as the ASP.NET authentication cookie. Is it at all possible to accomplish what I am trying to do on a development machine, or do I have to just go ahead and purchase the new domain, which hopefully will make things right? I have tried reading about cookieless domains but can't figure out what I might be missing.
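    The usual catch on a dev box: cookies are scoped to the host name, and everything served from localhost shares one cookie jar regardless of port or IIS application, so the browser keeps sending the session and auth cookies to the static app. A sketch of a way to test without buying anything - give the static site its own host name in the hosts file (the name static.dev is illustrative), bind the IIS site to it, and reference assets by that name:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    static.dev

    Cookies issued for localhost (or the main site's host name) will not be sent to static.dev, which reproduces what the separate production domain would do.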

    Read the article

  • Can't make SELinux context types permanent with semanage

    - by Safado
    I created a new folder at /modevasive to hold my mod_evasive scripts and its log directory. I'm trying to change the context type to httpd_sys_content_t so Apache can write to the folder. I did:

        semanage fcontext -a -t "httpd_sys_content_t" /modevasive

    to change the context, and then:

        restorecon -v /modevasive

    to enable the change, but restorecon didn't do anything. So I used chcon to change it manually and ran restorecon again to see what would happen - and it changed it back to default_t. semanage fcontext -l gives:

        /modevasive/    all files    system_u:object_r:httpd_sys_content_t:s0

    And looking at /etc/selinux/targeted/contexts/files/file_contexts.local gives:

        /modevasive/    system_u:object_r:httpd_sys_content_t:s0

    So why does restorecon keep setting it back to default_t?
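    A likely culprit: semanage fcontext patterns are regular expressions matched against the path, and the recorded entry /modevasive/ does not match /modevasive itself, so restorecon finds no local rule and falls back to default_t. The conventional pattern covers the directory and everything under it. A sketch (adjust the -d argument to whatever semanage fcontext -l actually shows):

        # Drop the old entry, add the recursive pattern, re-label
        semanage fcontext -d "/modevasive/"
        semanage fcontext -a -t httpd_sys_content_t "/modevasive(/.*)?"
        restorecon -Rv /modevasive

    Separately, if Apache must write there (log files), httpd_sys_rw_content_t is the writable type; httpd_sys_content_t only grants read.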

    Read the article

  • How can I get WAMP and a domain name to work on a non-standard port?

    - by David Murdoch
    I have read countless articles on setting up a domain on WAMP to listen on a port other than 80; none of them are working. I've got Windows Server 2008 (Standard) with IIS 7 installed and running on port 80 (and 443). I've got WAMP installed with the following configuration:

        Listen 81
        ServerName sub.example.com:81
        DocumentRoot "C:/Path/To/www"
        <Directory "C:/Path/To/www">
            Options All MultiViews
            AllowOverride All
            # onlineoffline tag - don't remove
            Order Allow,Deny
            Allow from all
        </Directory>

    localhost:81 works with the above configuration but sub.example.com:81 does not. Just to make sure my firewall wasn't getting in the way I have disabled it completely. My sub.example.com domain is already pointing to my server and works on IIS on port 80. Also, if I disable IIS and change the Apache port from 81 to 80 it works. Yes, I am restarting Apache after each httpd.conf change. :-) I don't need any other domain (or sub domains [I don't even care about localhost]) configured which is why I'm not using a VirtualHost. Any ideas what is going on here? What could I be doing wrong?

    Update: Changing Listen to 80 but keeping ServerName as sub.example.com:81 causes navigation to sub.example.com:80 to work; this just doesn't seem right to me. Could ServerName be ignoring the :port part somehow?

    netstat -a -n | find "TCP":

        >netstat -a -n | find "TCP"
        TCP    0.0.0.0:81     0.0.0.0:0         LISTENING
        TCP    0.0.0.0:135    0.0.0.0:0         LISTENING
        TCP    0.0.0.0:445    0.0.0.0:0         LISTENING
        TCP    0.0.0.0:912    0.0.0.0:0         LISTENING
        ...
        TCP    127.0.0.1:81   127.0.0.1:49709   TIME_WAIT
        ...
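    Worth noting: ServerName never makes Apache listen anywhere - it only sets the canonical name (and the port Apache uses when building self-referential redirects), which matches the behaviour observed above. Given that netstat shows 0.0.0.0:81 LISTENING and localhost:81 works, Apache's side looks healthy, so the next suspect is something upstream (router port-forwarding or an ISP block on port 81); testing from an external host with telnet sub.example.com 81 would confirm. If an explicit per-port site is wanted anyway, a minimal sketch:

        Listen 81
        <VirtualHost *:81>
            ServerName sub.example.com
            DocumentRoot "C:/Path/To/www"
        </VirtualHost>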

    Read the article

  • What sort of attack URL is this?

    - by Asker
    I set up a website with my own custom PHP code. It appears that people from places like Ukraine are trying to hack it. They're trying a bunch of odd accesses, seemingly to detect what PHP files I've got. They've discovered that I have PHP files called mail.php and sendmail.php, for instance. They've tried a bunch of GET options like:

        http://mydomain.com/index.php?do=/user/register/
        http://mydomain.com/index.php?app=core&module=global&section=login
        http://mydomain.com/index.php?act=Login&CODE=00

    I suppose these all pertain to something like LiveJournal? Here's what's odd, and the subject of my question. They're trying this URL:

        http://mydomain.com?3e3ea140

    What kind of website is vulnerable to a 32-bit hex number?

    Read the article

  • BIOS corrupted? How to proceed? Acer Travelmate 290

    - by dtlussier
    I have an older Acer TravelMate 290 (manufactured in 2002 or 2003), which I recently tried to upgrade to Ubuntu 9.10. After the upgrade it appeared I had a problem with my X server configuration: on the first reboot post-install I heard the Ubuntu startup sound but had a black screen. I thought I would reboot again and drop down into text mode to troubleshoot the X configuration problem. However, when I tried rebooting, something went wrong, and since then, when I start up the machine I get absolutely nothing except the first hardware check (i.e. HDD light flashes, CD/DVD drive spins, etc.). Other than that the screen remains totally black and I have no HDD or processor activity at all. I have tried restarting it a number of times, holding down all kinds of key combinations to try and coax it into the BIOS (if possible), with no luck. I have also tried putting in both a live Linux disc and a Windows install disk, without any luck; with a disk in the drive it will spin for a few seconds and then stop. All this has led me to suspect that the BIOS is somehow corrupt (not sure about the right terminology). I have tried putting a new BIOS image and installer program, downloaded from Acer, on a USB key to see if it will run when I start up the machine, but no luck. I'm not sure if this method of interacting with/updating/flashing the BIOS will work outside of Windows/DOS, as both OSs are mentioned somewhat ambiguously in the documentation. I have also taken the laptop case apart and inspected the various cards, and cannot find any obviously burned-out components. I'm not sure how to proceed at this point, in terms of components to try or how to load a new BIOS image onto the board. Any advice here would be great, especially from those with experience with this particular line of laptops. Thanks!

    Read the article

  • Failed to Upload a File

    - by CrazyNick
    User 'X' is the site-collection owner. He tries to upload a 500 KB file into a document library and gets the error: "The server has aborted your upload. The files selected may exceed the server's upload size limit. If you are transferring a large group of files, try uploading fewer at a time." However, web-application owners are able to upload the file. What would be the issue? Any thoughts?

    Upload size limit for a file: 5 MB
    Site quota template set: 50 MB
    Used site quota: 10 MB

    Read the article
