Search Results

Search found 19525 results on 781 pages for 'say'.

  • Splitting a build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across machines on the network? Use case: we are an average software development company. We own around 50 development workstations (Quad Core 2.66GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any given moment. Obviously all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: whenever we build 5 projects in a row, the last project is ready after around 25 - 50 minutes. Building in parallel does not solve the problem (the build is only a part of the game, then you need to deploy, run tests etc.). YES, the correct solution is to add another build server, but "That involves buying new Expensive hardware, and we already spent a lot!". Yea, right (damn them)! Anyway. What about splitting the build among developer workstations? Let's say whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine that is still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software?
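    For C/C++ code bases, distcc is one existing tool that works roughly this way; a minimal sketch, assuming distcc is installed on the build server and on each volunteer workstation (the hostnames here are made up):

        # on the build server: machines willing to accept compile jobs
        export DISTCC_HOSTS="localhost ws01 ws02 ws03 ws04 ws05"
        # fan the compile steps out across them
        make -j12 CC="distcc gcc"

    For .NET-style projects the best-known equivalent is IncrediBuild (commercial). Either way, only compilation is distributed; the deploy and test steps still run on the build server.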

  • GA 8KNXP Rev1.0: 4GB installed, only 3.5 recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory modules into my GA-8KNXP system, which should sum up to 4 GB. The specs from the manual say: maximum memory support 4 GB; if all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules. And so I did. Anyway: the BIOS counts up to 3.5 GB and finishes there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory mapping issue or a hardware issue. I've tried removing only one of the 512 MB memory modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black although fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5+6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think technically the chipset should be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?

  • Uploading a file > 1 MB in the Django admin gives a 400 Bad Request response

    - by ayaz
    I have a small Django (1.2.x) project deployed on Apache (2.x) via mod_wsgi (2.x). In the admin, if I upload a file < 1 MB, I can get it through; however, for a file of, say, 1.2 MB in size, I get a 400 response from the server with "Error 400" in the body only. I am wondering why this is happening. As far as I can see, there is no LimitRequestBody set in the Apache configuration. I have tried uploading with several browsers, including Firefox, Chrome, and Safari. In the Apache log file there is apparently no entry for the requests that gave the 400 error response, which is strange. I should point out the scenario in which this is happening: the project in question is deployed on two identical Apache servers (completely identical setup) that sit behind a load balancer. On my development setup, of course, the problem does not surface. Any help with this will be very much appreciated.
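    Since the 400 responses never appear in the Apache logs, a reasonable suspicion is that the load balancer, not Apache, is rejecting the larger request bodies. Two hedged checks (the config paths and the upload URL are only examples, adjust for the actual setup):

        # confirm no body-size limit is hiding anywhere in the Apache config
        grep -Ri "LimitRequestBody" /etc/httpd/ /etc/apache2/ 2>/dev/null
        # bypass the balancer and POST a >1 MB file directly at one backend
        curl -v -F "file=@test-1.2mb.bin" http://backend-1.internal/admin/upload/

    If the direct POST succeeds, the limit lives in the balancer's configuration rather than in Apache or Django.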

  • Why can't I renew my dynamic IP address?

    - by qwerty
    So, I'm going to explain this from the start. I've started a project with a friend of mine which includes a web spider that crawls through all the pages on a site and stores them in a DB. Since I've never done this before, I didn't think about the number of requests I was actually sending to the site, and after a day or two I finally got my IP blocked. I need to be able to visit that site as it's very important to me, not only for my project but for other reasons too (and if I'm able to renew my IP, I'm going to set a delay on the crawler so I don't get blocked or DDoS the site). I have a dynamic IP address, at least that's what my router settings say. I've tried ipconfig /flushdns, ipconfig /release, and restarting the computer. No result; I end up with the same IP address. I've also tried renewing it from the router; however, I think it uses the same method, which isn't working. Is it possible that the site has blocked my MAC address? Can a site even access my MAC address?
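    For what it's worth, the usual sequence on Windows is the pair below; /release alone just drops the lease, and on renewal the DHCP server will typically hand the same address straight back unless the lease has expired or the client's MAC has changed in the meantime:

        ipconfig /release
        ipconfig /renew

    And no, a remote website cannot see your MAC address; it is stripped off at the first router hop, so the block is almost certainly on the public IP itself.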

  • Set Postfix to send email but not to receive it

    - by CodeShining
    I'm using Google Apps to handle the personal email addresses for my domain name, and I set up the DNS as Google suggests. All works fine. Now, since I need an SMTP server to send emails from my e-commerce site, I installed Postfix on the server. It works fine when I send emails to any address except ones at the same domain name. Let's say my domain is example.com and I set Postfix up with example.com: if I try to reset a password using [email protected], Postfix doesn't send the mail and instead reports in mail.log:

        Sep 20 01:09:52 ip-10-54-26-162 postfix/pickup[6809]: B09A3415D8: uid=33 from=<www-data>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/cleanup[6854]: B09A3415D8: message-id=<20120920010952.B09A3415D8@ip-10-54-26-162.eu-west-1.compute.internal>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/qmgr[30978]: B09A3415D8: from=<[email protected]>, size=4234, nrcpt=1 (queue active)
        Sep 20 01:09:52 ip-10-54-26-162 postfix/local[6856]: B09A3415D8: to=<[email protected]>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=5.1.1, status=bounced (unknown user: "myaccount")

    Of course it cannot find a local user "myaccount", since that account is on Google Apps... How can I tell Postfix to just send the email and not search for a local user?
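    The "relay=local ... unknown user" bounce means Postfix believes example.com is one of its own local delivery domains, which happens when the domain is listed in mydestination. A minimal sketch of the fix in /etc/postfix/main.cf, assuming the domain's MX records point at Google Apps:

        # keep local delivery for the machine itself only; example.com is NOT listed,
        # so mail for it is routed out via DNS/MX like any other remote domain
        mydestination = localhost.localdomain, localhost

    After editing, run "postfix reload"; password-reset mail addressed to the domain should then leave the box and be delivered by Google Apps.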

  • Which HDD brand do you trust?

    - by Shiki
    Okay, it says it's 'subjective', but I believe it's not. Basically I want to ask the community about your preference; not really 'preference' but actual experience. Like if you never had a problem with Western Digital, write that in an answer, or if there is already an answer with WD, just vote it up. And so on. I've heard so many stories and experiences. I have only had Samsung, Maxtor, WD and Seagate HDDs. The Samsung died with bad blocks and had anomalies. The Maxtor died so fast I couldn't even really try it, and it ran really hot and loud. The Seagate is as loud as a jet plane and moderately hot. My WD (Green) is quiet, really cool and somewhat fast. That's all I have in the way of experience, so I would say Western Digital in an answer (or Hitachi; I've never had one yet, but every expert I know says I should get one, since they have even had problems with WD while Hitachi seems to be OK. My laptop came with a Hitachi HDD, but I don't think that's really relevant). Basically I mean desktop 7200 RPM HDDs here. Notebook HDDs are OK also, but no Raptor/SCSI/server ones. Hope you get what I meant and it won't get closed.

  • How can I find a list of all SSE instructions? What happens if a CPU doesn't support SSE?

    - by Blastcore
    So I've been reading about how processors work. Now I'm on the instruction set (SSE, SSE2, etc.) stuff, which is pretty interesting. I have a lot of questions (I've been reading about this on Wikipedia): I've seen the names of some instructions that were added with SSE, but there's no explanation of any of them (maybe SSE4? They're not even listed on Wikipedia). Where can I read about what they do? How do I know which of these instructions are being used? Let's say I'm doing a comparison (this may be the most stupid question I've ever asked; I don't know much about assembly): is it possible to directly use the instruction in assembly code? (I've been looking at this: http://asm.inightmare.org/opcodelst/index.php?op=CMP.) How does the processor interpret the instructions? What would happen if I had a processor without any of the SSE instructions? (I suppose in the case we want to do a comparison, we wouldn't be able to, right?)
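    On the last point: executing an SSE instruction on a CPU that lacks the extension raises an invalid-opcode fault (#UD) and the OS typically kills the program, which is why portable code tests for support at runtime and falls back to plain x87/scalar code. A small illustrative C sketch using a GCC/Clang builtin (the printed messages are just examples):

        #include <stdio.h>

        int main(void) {
            /* the compiler wraps a CPUID-based feature test behind this builtin */
            if (__builtin_cpu_supports("sse2"))
                puts("SSE2 available: vector instructions such as ADDPD can run");
            else
                puts("No SSE2: the program must use scalar fallback code paths");
            return 0;
        }

    The authoritative list of every instruction, SSE families included, is Intel's Software Developer's Manual, Volume 2 (the instruction set reference), which is freely downloadable from Intel.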

  • NAT vs public IP (and blocked ports)

    - by user1646166
    I have a problem with my ISP. They say that they don't block any ports and that I have a public IP, while I think both statements are false. Before I talk to them again (which is really tough when my understanding of these terms differs from theirs), I would like to make some things clear. It seems like my computer is behind NAT (is it possible to have a public IP and be behind NAT at the same time?). When I check my IP through some external server and type that IP into a browser, I get the home page of some router (not mine). Isn't that proof that my IP isn't public? Also, I have problems making connections via some ports. E.g. when I'm trying to connect through some high port (> 1023) via SSH, it doesn't work. Is it possible that a certain range of outgoing ports from my computer is blocked? Or is it simply that my SSH client (PuTTY) can't receive incoming packets because of blocked incoming ports? To avoid some questions: it's not a problem with my router; I tried connecting my PC directly and it also didn't work, while when connected via 3G using a phone with USB tethering, it does work. Thanks!
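    A quick way to settle the NAT half of the argument is to compare the address your own interface holds with the address the outside world sees; if they differ, something in between is translating (the lookup service below is just one example):

        ipconfig                  # Windows; "ip addr" on Linux - the NIC's own address
        curl http://ifconfig.me   # the address seen from the internet

    A private NIC address (10.x.x.x, 172.16-31.x.x or 192.168.x.x) combined with somebody else's router answering on the "public" address is the classic signature of carrier-grade NAT, i.e. the ISP sharing one public IP among many customers - which would also explain why unsolicited inbound connections fail.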

  • Can I make my PC backup and then sleep on demand with WHS?

    - by Simon
    I really hate the way WHS backs up at a particular time in the morning. First of all, I don't EVER want my computer turning on when I am not there. EVER. I have a Core-i7 laptop which could quite easily burn the house down if it were to turn on in a bag. I also don't ever want my PC to sleep unless I tell it to; I don't have hibernation or sleep enabled, and those are the only ways WHS will power the machine down after a backup is complete. I know Windows 7 can disable wake timers when on battery power, but that doesn't seem to work on my laptop. These are the possibilities (with wake timers left at their defaults):

    - 'Wake this computer for backup' ON: it turns on in my bag if I forget to turn it off, and stays on when the backup is complete.
    - 'Wake this computer for backup' OFF: it backs up in the morning, but I need to leave the machine on all night.
    - I say 'Backup Now' and it backs up immediately. I can turn it off when it's done if I'm still awake, but then that backup appears as 'locked' in the console and not 'automatically managed'.

    What I'd really like to do is:

    - Click 'Backup and Sleep' and then go to bed; it backs up immediately and then sleeps the PC.
    - Have that backup be 'automatically managed' and not appear as a 'locked backup' in my console.
    - See a confirmation that everything was backed up successfully (or not) when I turn it on.

    Is there any way to achieve this?

  • Windows memory logged on vs logged off

    - by Adi
    Let's say I power on my freshly installed Windows 7 x64 machine. After Windows boots up, a bunch of services start in the background and begin allocating memory. Then I enter my user/pass and Windows logs me in. Let's suppose I don't do anything else (I don't explicitly start any application) and I don't have any other apps installed by me; it's a fresh install. My question is: how much memory is needed for all the UI and other stuff? Is it a good indicator to look in Task Manager, check all the processes started under my user name, and sum up the memory consumed by those processes to get the total amount of memory I am consuming just to stay logged on? Basically, that is my question: how much memory is needed just to stay logged on? And if I log off, would all that memory be released back to the system so that the background services can benefit from it? Also, I assume this discussion might differ between Windows flavors(?)
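    One hedged way to approximate the "sum over my processes" idea from the question is PowerShell (Get-Process -IncludeUserName needs PowerShell 4+ and an elevated prompt; note that working sets include shared pages, so the sum overcounts somewhat):

        Get-Process -IncludeUserName |
            Where-Object { $_.UserName -like "*\$env:USERNAME" } |
            Measure-Object WorkingSet -Sum |
            ForEach-Object { "{0:N0} MB" -f ($_.Sum / 1MB) }

    Comparing available memory before and after logging off (e.g. with Performance Monitor) would then show how much of that the system actually gets back.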

  • Configure IIS Web Site for alternate Port and receive Access Permission error

    - by Andrew J. Brehm
    When I configure IIS to run a Web site on port 1414, I get the following error:

        ---------------------------
        Internet Information Services (IIS) Manager
        ---------------------------
        The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)

    However, according to netstat the port is not in use. Completely aside from IIS, I wrote a test program (just to open the port and test it):

        using System;
        using System.Net;
        using System.Net.Sockets;

        class PortTest {
            static void Main() {
                int port = 1414;  // the port IIS claims is busy
                TcpListener tcpListener = new TcpListener(IPAddress.Any, port);
                try {
                    tcpListener.Start();
                    Console.WriteLine("Press \"q\" key to quit.");
                    ConsoleKeyInfo key;
                    do {
                        key = Console.ReadKey();
                    } while (key.KeyChar != 'q');
                } catch (Exception ex) {
                    Console.WriteLine(ex.Message);
                }
                tcpListener.Stop();
            }
        }

    The result was an exception with the following ex.Message:

        An attempt was made to access a socket in a way forbidden by its access permissions

    The port was available, but its "access permissions" are not allowing me access. This persists across several restarts. The port is not reserved or in use as far as I know; while IIS says it is in use, netstat and my test program say it is not, and my test program receives the error that I am not allowed to access the port. The test program ran elevated. The IIS site is running MQSeries, but the MQ listener also cannot start on port 1414 because of this issue. A quick search of my registry found nothing interesting for port 1414. What are socket access permissions, and how can I correct mine to allow access?
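    For what it's worth, Windows can also reserve whole TCP port ranges at the stack level, and a bind into such a range fails with exactly this "forbidden by its access permissions" error while netstat shows nothing listening. On newer Windows versions the exclusions can be listed with the command below; this is a hedged suggestion, since the option is not present on every release:

        netsh int ipv4 show excludedportrange protocol=tcp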

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin, reading them from a directory on my server. phpMyAdmin lists the files fine in the drop down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents(); on the file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration. Some things I've tried after searching Google and StackExchange:

    - chmod 0777 on both the directory and the files
    - chown root, and chown apache (the only two users I can think of that PHP would use)
    - importing SQL files with a total size under both upload_max_filesize and post_max_size
    - PHP open_basedir commented out, or set to "/var/www" (my sites are Apache VirtualHosts within that directory, and all the SQL files are deep within it)
    - PHP safe mode OFF (it was never ON)

    At the moment I have solved this issue for the smaller files by using the FILE UPLOAD method directly in phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
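    One check that isn't on the list above and that does differ between a stock CentOS install and a hardened RedHat EC2 image is SELinux, which can deny Apache/PHP read access no matter what the mode bits say. This is an assumption, not a confirmed diagnosis, but it is cheap to test (the path is a placeholder):

        getenforce                          # "Enforcing" means SELinux is active
        ls -Z /var/www/path/to/sql-dumps    # inspect the files' security context
        # relabel so httpd is permitted to read the files
        chcon -R -t httpd_sys_content_t /var/www/path/to/sql-dumps

    A denial would also show up in /var/log/audit/audit.log at the moment the import fails.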

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running with Drupal and Apache: five web servers behind a Varnish server doing load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

        director balancer round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
            { .backend = web4; }
            { .backend = web5; }
        }

    Now I'm working on a new Django project that will be a new section of this site, running at example.com/new-section. After checking the documentation I found I can do something like this:

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.backend = newbackend;
            } else {
                set req.backend = default;
            }
        }

    That is, using a different backend for a subdirectory under the same domain. My question is: how do I make something like this work with my director and load balancing setup? I'm probably going to run two or more web servers (backends) for the new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and would like to put all of those in their own Varnish director for load balancing. Is it possible to use the above approach to decide which director to use, like this?

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.director = newdirector;
            } else {
                set req.director = balancer;
            }
        }

    All suggestions welcome. Thanks!
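    For what it's worth, in the VCL dialect the question is written in (Varnish 2.x/3.x), a director is itself a legal target for req.backend, so no separate req.director variable is needed. A hedged sketch, with the second director's backend names assumed:

        director django_balancer round-robin {
            { .backend = django1; }
            { .backend = django2; }
        }

        sub vcl_recv {
            if (req.url ~ "^/new-section/") {
                set req.backend = django_balancer;
            } else {
                set req.backend = balancer;
            }
        }

    Each djangoN backend would be declared like the web ones, pointing at the Nginx/Gunicorn machines.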

  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs, so I assumed the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque(?!). The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no reference to variables I rely on, like SGE_TASK_ID etc.; instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. qsub also behaves differently: apparently it is not possible to specify the command-line parameters in bash-script comments etc. There is documentation for the cluster system, but it does not say what the DRM middleware actually is; it refers to the entire DRM system simply as "qsub". I tried:

        $ qsub --version
        qsub: 1.2 2010/8/17

    I am not sure what I am actually running when I invoke qsub on that cluster! My question is: how can I find out whether I am running Grid Engine or Torque (or whatever it is), and which version?
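    A few hedged ways to tell the systems apart from an unprivileged account; each of these companion commands exists in only one of the families:

        qconf -help     # Grid Engine only (cluster configuration tool)
        pbsnodes -a     # Torque/PBS only (lists execution nodes)
        qstat -B        # PBS family: prints the batch server's name and status
        env | grep -E '^(SGE|PBS)_'   # inside a running job: SGE_* vs PBS_*

    The PBS_* variables, the "bulk jobs" wording, and the in-script directive prefix (#PBS rather than Grid Engine's #$) all point toward a PBS-family batch system (Torque or PBS Pro) rather than Grid Engine.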

  • Cannot log into Windows XP Embedded after changing computer name

    - by bignis
    Hi everyone, I purchased a tablet pc running Windows XP Embedded. The tablet was used in a medical clinic on a domain. For illustrative purposes, say the computer name was "COMPLEXCOMPUTERNAME". There was an administrator account, so I changed the password on account "COMPLEXCOMPUTERNAME\Administrator" to a blank password. I logged out and logged in successfully with the blank administrator password when the log-in dialog said "Log in to COMPLEXCOMPUTERNAME (this computer)". Next I renamed the computer from COMPLEXCOMPUTERNAME to SIMPLECOMPUTERNAME, which required a reboot. I did so, and I can't log in anymore. The log in screen still just says "Log in to COMPLEXCOMPUTERNAME (this computer)", but the account "COMPLEXCOMPUTERNAME\Administrator" no longer works. I suspect that this is because the computer has been renamed to SIMPLECOMPUTERNAME and it can no longer find the account. The "Log in to" dropdown can't be typed in, so I can't change the computer name Windows is trying to log into. I fear that I'm stuck. Is there a way I can get Windows to log into the computer name that I chose? Thanks! -Mike
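    One hedged thing to try before anything drastic: the XP logon dialog also accepts a machine or domain prefix typed directly into the user name field, which bypasses the stale "Log on to" dropdown entry. Entering either of

        .\Administrator
        SIMPLECOMPUTERNAME\Administrator

    as the user name (".\" is shorthand for "the local machine, whatever its current name is") should make Windows authenticate against the renamed local account database rather than the old COMPLEXCOMPUTERNAME entry. This is standard XP behaviour; whether the Embedded build keeps it is an assumption.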

  • Using psftp to upload and download files

    - by macha
    Hello, I am trying to upload and download files between my desktop and my server. After some searching I downloaded psftp. I used to use FileZilla, but I cannot install it on my desktop for a few reasons; psftp (like PuTTY) is just a standalone executable for file transfer. After going through this link, http://www.math.tamu.edu/~mpilant/math696/psftp.html, I understood that put and get are the two commands I would use to upload and download files. Now when I log on to the server and say "get filename", it actually throws back the error "local: unable to open filename". I tried that with other files too, and I end up with the same error. The psftp.exe file is on my desktop. The process that I am using is: I double-click the .exe file, then

        open "servername"
        cd /path/where/files/are
        get "filename"

    and I get this error: "local: unable to open filename". Am I making a mistake, or is it a problem with this executable?
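    For context, psftp's "local: unable to open <file>" message refers to the file it is trying to create on the local (Windows) side, not the remote one, so the usual cause is a local working directory that can't be written to. psftp has built-in commands for checking and changing it (the path below is just an example):

        lpwd                          # show the current local directory
        lcd C:\Users\me\Downloads     # switch to somewhere writable
        get "filename"

    Since psftp.exe is launched by double-click, the local directory is whatever Windows picked as the startup directory, which is a likely suspect here.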

  • How does Linux determine the SCSI address of a disk?

    - by Chris Sears
    Greetings. I'm working with RHEL 5.5 guest VMs under VMware ESX 4. When I configure the virtual disks in the VM hardware settings, each disk has a SCSI address in the format "N:M"; for example, "1:3" would mean SCSI host number 1 and SCSI target ID 3. When I look at the disk info from the VM's BIOS or a Windows OS, the detected SCSI address info matches up with the virtual hardware settings. But under Linux, the SCSI address components don't match up, at least not completely or consistently. I've tried the three supported virtual SCSI and SAS drivers and they all seem to be "broken", but in different ways. Here's a list of the virtual hardware addresses vs what was detected under Linux with each of the drivers:

        Driver     vHW Addr   Linux Addr
        --------   --------   ----------
        LSI SAS    0:0        0:0
        LSI SAS    0:3        0:1
        LSI SAS    0:6        0:2
        LSI SCSI   1:1        2:1
        LSI SCSI   1:4        2:4
        LSI SCSI   1:7        2:7
        pvSCSI     2:2        1:2
        pvSCSI     2:5        1:5
        pvSCSI     2:8        1:8

    My main question is: why does this happen under Linux? The next question is: how do I get it fixed, or fix it myself? If I were to guess, I'd say it's an issue with how the kernel hands out the SCSI host number and how the Linux SCSI driver (included with VMware Tools) detects the SCSI target number. Perhaps the order in which the drivers are loaded also has something to do with it. I'm guessing this would not involve udev, but I could be wrong. Any thoughts would be appreciated. Thanks! PS. My environment is VMware, but I don't need an answer for these drivers specifically. I imagine this might be a problem with any SCSI driver under Linux.
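    A couple of hedged commands to see how the kernel actually numbered things (lsscsi may need installing from the distro repos):

        lsscsi                        # host:channel:target:lun for every device
        cat /proc/scsi/scsi           # the same data straight from procfs
        ls /sys/class/scsi_host/      # one hostN directory per registered HBA

    The host number (the "N") is simply the order in which SCSI HBAs register with the kernel, so it follows driver load order rather than the virtual slot number; that alone would produce the LSI SCSI/pvSCSI swap in the table above, consistent with the target IDs (the "M") surviving intact on those two controllers. The LSI SAS case is different again: SAS targets have no fixed per-slot IDs and are numbered in discovery order, which matches the 0:3 -> 0:1, 0:6 -> 0:2 compaction.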

  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates, and I installed Nginx by first adding

        deb http://nginx.org/packages/ubuntu/ lucid nginx
        deb-src http://nginx.org/packages/ubuntu/ lucid nginx

    to the /etc/apt/sources.list file, just as suggested by the official wiki, and then running

        sudo apt-get update
        sudo apt-get install nginx

    which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the optional Nginx modules, like the gzip precompression module or some security-related one. So far, I see two ways of adding an optional module to Nginx: one is compiling and installing from the source code, and the other is described in this article. So, which of the ways should I choose so that automatic updates still run for, and apply to, Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between regular version updates and security-only updates being automatically applied to the standard and optional modules? And finally, is there a possibility to automatically update Nginx's modules on the fly (without any connections being dropped), as the documentation suggests is possible with

        sudo kill -USR2 $( cat /run/nginx.pid )

    P.S. Actually, I'm not certain the unattended-upgrades utility would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.
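    On the unattended-upgrades question: by default it only upgrades packages whose origin matches its allow-list, and a third-party repository like nginx.org is not covered by the stock Ubuntu security pattern, so the nginx package would be left alone regardless of which module route is chosen. A hedged sketch for /etc/apt/apt.conf.d/50unattended-upgrades (the exact origin string must be taken from "apt-cache policy nginx", and the syntax varies between releases):

        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
            "nginx:stable";   // assumed label for the nginx.org repository
        };

    Self-compiled Nginx, on the other hand, is invisible to apt entirely, so the packaged route is the only one unattended-upgrades could ever help with.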

  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151, both with elastic IPs assigned and both running in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine; the container's IP is 172.17.0.2, and its port 8111 is forwarded to port 8111 on the host. If I access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC), it works flawlessly. If I try to access the files from the other machine (10.0.100.151), it hangs. I'm using wget to pull the files. I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine, I see the requests going in but no response coming back. If I ngrep on the container, I see the requests going in and the response going back. I've tried multiple iptables setups (with postrouting enabled, with manually forwarded ports, etc.) but with no success. Help in any way - even debugging directions - would be much appreciated. Thanks!
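    Given that the reply is visible inside the container but never appears on the host's outer interface, a few hedged places to look on the Docker host:

        sudo iptables -t nat -L POSTROUTING -n -v   # is 172.17.0.0/16 being MASQUERADEd?
        sudo iptables -L FORWARD -n -v              # are return packets hitting a DROP?
        sysctl net.ipv4.ip_forward                  # must be 1 for bridged containers
        sysctl net.ipv4.conf.all.rp_filter          # strict rp_filter can eat asymmetric replies

    It is also worth confirming that the security group allows port 8111 from the VPC-internal address 10.0.100.151, not just from external sources, since intra-VPC traffic arrives with the private source address rather than via the elastic IP.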

  • Host Primary Domain from a subfolder

    - by TandemAdam
    I am having a problem making a subdirectory act as the public_html for my main domain, with a solution that also works for that domain's subdirectories. My hosting allows me to host multiple sites, which all work great. I have set up a subfolder under my ~/public_html/ directory called /domains/, in which I create a folder for each separate website, e.g.:

        public_html/
            domains/
                websiteone/
                websitetwo/
                websitethree/
                ...

    This keeps my sites nice and tidy. The only issue was getting my "main domain" to fit into this system. It seems my main domain is somehow tied to my account (or to Apache, or something), so I can't change the "document root" of this domain. I can define the document roots for any other domains ("Addon Domains") that I add in cPanel, no problem, but the main domain is different. I was told to edit the .htaccess file to redirect the main domain to a subdirectory. This seemed to work great, and my site works fine on its home/index page. The problem I'm having is that if I navigate my browser to, say, the images folder (just for example) of my main site, like this:

        www.yourmaindomain.com/images/

    then it seems to ignore the redirect and shows the entire server directory in the URL, like this:

        www.yourmaindomain.com/domains/yourmaindomain/images/

    It still actually shows the correct "Index of /images" page and lists all my images. Here is an example of the .htaccess file that I am using:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteRule ^(/)?$ domains/yourmaindomain/index.html [L]

    Does this .htaccess file look correct? I just need my main domain to behave like an addon domain, with its subdirectories adhering to the redirect rules.
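    The .htaccess itself looks reasonable; what is probably exposing the internal path (an assumption, but it matches the symptom) is Apache's mod_dir. A request for /images without a trailing slash gets internally rewritten to /domains/yourmaindomain/images, and mod_dir then issues an external redirect to add the trailing slash using that rewritten path, which is the URL the browser ends up showing. A hedged fix is an extra rule, placed before the main rewrite, that bounces any leaked internal URL back to its clean form:

        RewriteCond %{THE_REQUEST} \s/domains/yourmaindomain/
        RewriteRule ^domains/yourmaindomain/(.*)$ /$1 [R=301,L]

    %{THE_REQUEST} holds the original request line, so this only fires when the client literally asked for the internal path, not on the internal rewrite itself, which avoids a redirect loop.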

  • NTFS permissions weird inheritance (second take!)

    - by Wil
    A complete rewrite of my previous question, in a different context. Basically, the issue is that when I create a new user within a new group, the new user has various permissions over various folders. I have deleted the group "Users" from this user object, and it is simply a member of the group "test". I have created a folder called c:\foo; when I go to Effective Permissions under the Security tab, I can see that the user "lockdown" has various permissions. As far as I can see, there is nothing that should allow lockdown access. The moment I remove Users from this list, it behaves as I would expect, which makes me believe that, for some strange reason, the Users group behaves like the Everyone group and is controlled by the system. That being said, I cannot understand this, as it is not there under the list; further to this, with the same permissions as in the first picture, Guest does not have access. This has stumped me, and any help is appreciated! (Tested in Windows 2003 and 2008.) Edit: I should also say that if I check Effective Permissions for the group the user is in, no boxes are checked, so it is somehow just the user that is getting the permissions from somewhere.
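    A hedged explanation that fits this behaviour: on stock Windows, the built-in Users group contains NT AUTHORITY\Authenticated Users and NT AUTHORITY\INTERACTIVE as nested members, so any account that logs on interactively picks up Users' permissions whether or not Users appears on the account's "Member Of" tab. Guest logons are not "authenticated users", which would explain why Guest misses out. The nesting is visible with:

        net localgroup Users

    which should list NT AUTHORITY\Authenticated Users among the members; that nesting, rather than anything Everyone-like, would be where lockdown's access is coming from.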

  • What does Apache's "Require all granted" really do?

    - by John Crawford
    I've just updated my Apache server to Apache/2.4.6, running under Ubuntu 13.04. I used to have a vhost file containing the following:

        <Directory "/home/john/development/foobar/web">
            AllowOverride All
        </Directory>

    But when I ran that I got a "Forbidden. You don't have permission to access /" error. After a little bit of googling I found out that to get my site working again I needed to add the line "Require all granted", so that my vhost looked like this:

        <Directory "/home/john/development/foobar/web">
            AllowOverride All
            Require all granted
        </Directory>

    I want to know if this is "safe" and does not introduce any security issues. I read on Apache's page that this "mimics the functionality that was previously provided by the 'Allow from all' and 'Deny from all' directives. This provider can take one of two arguments, which are 'granted' or 'denied'. The following examples will grant or deny access to all requests." But it didn't say whether this is a security issue of some sort, or why you now have to do it when in the past you did not.
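    For comparison, this is the Apache 2.2 configuration the new directive replaces; the access-control result is identical, so 2.4's Require all granted is no more and no less open than the 2.2 equivalent:

        # Apache 2.2
        <Directory "/home/john/development/foobar/web">
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        # Apache 2.4 equivalent
        <Directory "/home/john/development/foobar/web">
            AllowOverride All
            Require all granted
        </Directory>

    The reason it is now mandatory is that Apache 2.4's default configuration denies anything no Require directive has explicitly granted, whereas 2.2 vhosts commonly inherited a permissive default. The directive only opens host-based access; any authentication or authorization rules layered on top still apply.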

  • Permissions nightmare - tried all I know

    - by Ben
    Working on a new client's dev site, which is a WordPress install on a Plesk box. I have SSH root access, and FTP access through a separate account. What I've done so far: initially I couldn't make any changes to any files at all. The permissions on all the template files looked a little screwy (644), so I figured I'd change them to allow group access and add myself to the group:

    - chmod, recursively on the theme folder, to set everything to 664
    - Quickly realised I'd broken it; set the folders back to 755 and kept the files at 664
    - Ownership on all files is a mixture of root:root and 500:500 (there is no user or group with the ID 500 on the server)
    - Added myself to the group 'root' so I could modify the files too

    The problem: this worked OK in terms of being able to edit the existing files, so I began working. However, I can't upload to the directory, even having run chown -R root:root templatefolder/ and being in the root group. I feel like I must be missing something obvious, and it's doing my head in. Questions:

    - Files in the install are owned by 500 with group 500; I've looked in /etc/group and /etc/passwd and there is no user or group with this ID. Is that left over from another developer's setup or from the previous server (they moved recently)?
    - Is being in the 'root' group enough, or do I need to own the theme folder as 'myftpuser' in order to upload and create new files?

    Like I say, I have edit access, so I got myself this far. I'm now questioning what to do next!
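    Two hedged observations. First, creating a file needs write permission on the directory itself, and the folders were set back to 755, which removes group write; that alone would explain why members of the root group can edit existing 664 files but not upload new ones. Second, Plesk normally serves and uploads vhost content as the domain's own system/FTP user with the group psacln, so root:root ownership tends to fight the panel, and orphaned 500:500 files are a classic leftover from a migration where the old UIDs didn't come across. Something along these lines might be worth trying (the paths and user name are assumptions, check an untouched Plesk domain first):

        # see what an untouched file in this vhost is owned by
        ls -ln /var/www/vhosts/clientdomain.tld/httpdocs | head
        # hand the theme back to the FTP user so uploads work
        chown -R myftpuser:psacln /var/www/vhosts/clientdomain.tld/httpdocs/wp-content/themes/theme-name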

  • Build suggestion for mini itx server [on hold]

    - by Spyros
    I am looking to build a home server, and since there does not seem to be a nice ready-made one to purchase, I thought of building a mini-ITX one. I am not familiar with the best hardware/case to pick, and therefore I thought I would ask for your experience. I will most probably install a simple Linux distro, probably Debian, and most likely without a desktop environment. Therefore, I am not looking for the best graphics card, though a decent one is certainly preferred. I would say the most important thing is that the case and hardware play well with each other and that the machine has decent specs (I think something like 8 GB RAM and a 1 TB HDD are my most important requirements). I understand that this question can be closed as subjective, but it would be a great help for me if some of you who have done a similar build could suggest something. I tried to find a Stack Exchange variant for the question, but this one seems to relate most closely to the subject. I will definitely understand if people close it for being subjective in nature; if you can move it somewhere more appropriate, please feel free to do so. Thank you for your time :)

  • Windows: How to make programs think they're not running in a terminal server session?

    - by sinni800
    I am using the program "SoftXPand 2011 Duo" by Miniframe on my Windows 7 PC. It makes two workstations out of one computer, using the terminal services built into Windows to create the additional session. I use two screens, two keyboards and two mice to create this "illusion" of two computers. It works quite well, and I can even play two different 3D games on the two screens attached to this single machine (using a Radeon HD5770 and a Core i5 2500K with 8 GB of RAM). There are a few downsides to this, and I just found one that is hidden at first glance: the sessions you are in (even on the first workstation) identify themselves as terminal server sessions! Some programs will run with limited (graphical) effects, and some won't run at all; this also results in some games refusing to start. They just say "Cannot be run in a terminal server session" and exit. I have already proven that top modern games (DirectX 10, 11) run just as well as on the same machine without SoftXPand, so this is a pretty artificial limitation! So, can I somehow hack my current session so it doesn't look like a terminal server session anymore? I.e., so that

        #include <windows.h>
        #pragma comment(lib, "user32.lib")

        BOOL IsRemoteSession(void)
        {
            return GetSystemMetrics( SM_REMOTESESSION );
        }

    will return FALSE? (Not a programming question! Just an example of how programs detect whether they're in a terminal server session.)
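    One hedged approach, for what it's worth: it only fools programs that obtain the flag through GetSystemMetrics, and a launcher still has to inject the DLL into the game's process. A minimal sketch using Microsoft's Detours library, built as a DLL:

        #include <windows.h>
        #include <detours.h>

        // pointer to the genuine function; DetourAttach redirects it to a trampoline
        static int (WINAPI *Real_GetSystemMetrics)(int) = GetSystemMetrics;

        static int WINAPI Hooked_GetSystemMetrics(int nIndex)
        {
            if (nIndex == SM_REMOTESESSION)
                return 0;  // claim to be a local console session
            return Real_GetSystemMetrics(nIndex);
        }

        BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
        {
            if (reason == DLL_PROCESS_ATTACH) {
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourAttach((PVOID *)&Real_GetSystemMetrics, (PVOID)Hooked_GetSystemMetrics);
                DetourTransactionCommit();
            }
            return TRUE;
        }

    Programs that check by other means (WTSQuerySessionInformation, ProcessIdToSessionId) would need the same treatment, so this is a per-application workaround rather than a general fix, and anti-cheat software may object to the injection.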
