Hi
I run as admin on my machine and want to run an executable as a non-admin user from the command prompt (without logging off).
I'm running on Windows 7, 64-bit.
Is this possible?
Thanks.
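One approach I've come across but haven't verified on my box is the built-in runas with a reduced trust level; the executable path below is just an example:

    rem List the trust levels available on this machine
    runas /showtrustlevels

    rem Launch the program as a basic (non-admin) user
    runas /trustlevel:0x20000 "C:\Tools\myapp.exe"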
Relevant configuration:
location /myapp {
    root /home/me/myapp/www;
    try_files $uri $uri/ /myapp/index.php?url=$uri&$args;

    location ~ \.php {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    }
}
I definitely have a file foo.html in /home/me/myapp/www, but when I browse to /myapp/foo.html it is handled by PHP, the final fallback in the try_files list.
Why is this happening?
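My current understanding, in case it's relevant: with root, nginx appends the whole URI to the root path, so /myapp/foo.html is looked up as /home/me/myapp/www/myapp/foo.html. If the files sit directly under /home/me/myapp/www, the $uri tests can never match and everything falls through to the PHP fallback. A sketch of what I think the alias form would look like (untested, and I've read that alias plus try_files has had quirks in some nginx versions):

    location /myapp/ {
        # alias substitutes the matched prefix instead of appending the URI:
        # /myapp/foo.html -> /home/me/myapp/www/foo.html
        alias /home/me/myapp/www/;
        try_files $uri $uri/ /myapp/index.php?url=$uri&$args;
    }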
How can I enable non-admin users to run a certain application (in my case, a script) with admin permissions on Windows XP?
This would be similar to the setuid bit on *nix.
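The closest built-in mechanism I'm aware of on XP is runas with cached credentials, though caching the admin password this way is itself a security trade-off; a sketch, with an example path:

    rem First run prompts for the admin password; later runs reuse the cached copy
    runas /savecred /user:Administrator "C:\Scripts\myscript.cmd"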
Dear ladies and sirs.
I need to create a self-signed certificate non-interactively. Unfortunately, the only tool that I know of (makecert) is interactive: it uses a GUI dialog to ask for a password.
My OS is Windows (anything from XP to 2008).
The closest thing I managed to find is http://www.codeproject.com/Tips/125982/How-to-run-Makecert-without-password-window.aspx, but it is still not a good fit.
Any ideas?
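For what it's worth, my understanding is that the password dialog only appears when makecert writes the private key to a .pvk file (-sv); writing the key straight into a certificate store seems to avoid it. A sketch (the subject name is an example):

    rem Self-signed cert placed directly into the LocalMachine\My store; no .pvk, no prompt
    makecert -r -pe -n "CN=example.com" -ss my -sr localmachine -sky exchange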
OS: Windows XP
Say I have some files in Chinese and some files in Korean, and in the Windows 'Region and Language Options' the language for non-Unicode programs is set to Chinese. Is there any way I can easily read a Korean text file in a text editor without using Microsoft Word?
I need an environment that can handle multiple Unicode scripts easily: I need to read Chinese, Japanese, and Korean in text editors (UltraEdit, Notepad++) and in terminal clients like SecureCRT. Please advise, thanks.
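One workaround I'm considering, assuming the Korean files are in EUC-KR: convert them to UTF-8 once, since Notepad++ and UltraEdit display UTF-8 fine regardless of the non-Unicode locale. With iconv from Cygwin or GnuWin32, for example:

    # Encoding is an assumption; cp949 would be another candidate for Korean
    iconv -f EUC-KR -t UTF-8 korean.txt > korean-utf8.txt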
I'm looking for a productivity/operations management web service (ideally, free) that is aimed towards non-profits. Some of the features that I'm looking for include:
Indexing and sorting of contacts
Organizing milestones into a tree hierarchy
Tagging of documents for indexing and searching
Any ideas? Thanks!
This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here.
I have a client who purchased a domain at JustHost. We built him a website and host it on our own server space. I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, and Apache on our end handles the domains via name-based virtual hosts.
But for some reason, in setting this up with JustHost, when attempting to go to the domain name I get either a 502 or 503 error or "webpage does not exist". I know the basic functionality of the site must be working, because I can access the index and so on directly through my server's www data (i.e. [server-ip]/website_folder).
I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..."
I've checked my logs and nothing seems to be coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further.
Any help is greatly appreciated, thanks.
I forgot to mention that we have several other sites up and running and completely accessible.
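In case it helps diagnose, this is roughly what I've been checking from a Linux box (substitute the client's real domain for example.com):

    # Where does the A record actually point, and which nameservers are authoritative?
    dig +short example.com A
    dig +short example.com NS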
Postfix serves my virtual domains and works fine.
But for one of my domains:
- it bounces mail addressed to [email protected]
- it rejects mail addressed to [email protected]
The problem is, [email protected] does not exist either.
Here is my postconf:
Why does it bounce [email protected] but reject other non-existent addresses?
Thanks.
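These are the checks I know of so far (the map path is a guess; substitute the real bouncing address):

    # Show the parameters that control how unknown recipients are handled
    postconf virtual_alias_maps virtual_mailbox_maps local_recipient_maps luser_relay

    # Test whether the bouncing address resolves through a virtual map
    postmap -q user@mydomain.example hash:/etc/postfix/virtual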
I am currently attempting to transfer over 1 million files from one server to another. Using wget seems to be extremely slow, probably because it starts a new transfer only after the previous one has completed.
Question: Is there a faster, non-blocking (asynchronous) way to do the transfer? I do not have enough space on the first server to compress the files into a tar.gz and transfer that over. Thanks!
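Two approaches I've been looking at, assuming I can get SSH access between the boxes; both stream everything over a single connection instead of negotiating per file, and neither needs an intermediate archive on disk:

    # rsync: one connection, resumable if interrupted
    rsync -az --partial /path/to/files/ user@newserver:/path/to/files/

    # or tar piped over ssh
    tar cf - -C /path/to/files . | ssh user@newserver 'tar xf - -C /path/to/files'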
Hi,
I am doing a bit of research on virtualization for my class presentation.
I am not clear about the terms "virtualizable instructions" and "non-virtualizable instructions".
Could someone here please explain them?
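For context, the example I keep running into is x86's popf: in user mode it doesn't trap, it just silently ignores changes to the interrupt flag, so a classic trap-and-emulate VMM never gets a chance to intercept it. My own attempt at illustrating that (C with GCC inline assembly, x86 only; not taken from any paper):

    #include <stdio.h>

    int main(void) {
        unsigned long flags;

        /* Read the current EFLAGS value. */
        __asm__ volatile ("pushf\n\tpop %0" : "=r"(flags));

        /* Try to set IF (bit 9, 0x200) from ring 3. A "virtualizable" design
           would trap here; real x86 just drops the change silently. */
        __asm__ volatile ("push %0\n\tpopf" : : "r"(flags | 0x200UL) : "cc");

        printf("popf executed with no trap; original flags were 0x%lx\n", flags);
        return 0;
    }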
Does anyone know if it is possible to get a non-domain server to pick up its updates from a domain-joined WSUS server?
I'm thinking of Hyper-V host servers: in a single-server environment the host clearly cannot be part of the domain, because the domain controller is not available at the time the VM host boots. But is there any way to make this Hyper-V host collect its updates from the WSUS server?
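From what I've read, WSUS targeting is just registry policy, so a non-domain machine should be able to point at the same server by hand (the WSUS URL below is a placeholder):

    rem Point the Windows Update agent at the internal WSUS server
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://wsus.internal.example:8530"
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://wsus.internal.example:8530"
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1

    rem Restart the update agent so it picks up the change
    net stop wuauserv && net start wuauserv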
I have a .bkf backup file, created by the Backup utility that Microsoft provides with Windows XP. Is there a way to read the contents of the file using a non-Microsoft OS, preferably Mac OS X or Linux?
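The only lead I've found so far is mtftar, a small Linux tool that is said to convert the Microsoft Tape Format stream inside a .bkf into a tar stream; if I've understood its interface right, usage is something like:

    # Convert the .bkf (MTF) stream to tar, then unpack it
    mtftar < backup.bkf | tar xvf -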
I'm currently running the Apple SUS on a Mac OS X Server in a small office environment. It works well for Apple updates, but I'm still stuck either manually downloading and installing Adobe/Microsoft updates on each computer or running them through a Squid cache, with blind faith that Squid will keep the files I actually want cached.
What is the best way to cache updates locally for applications like the Adobe Updater or Microsoft AutoUpdate? Ideally cached in such a way that I can tell which files I do or do not have cached. It would also be nice to be able to cache things for other software like Firefox and Sparkle-enabled apps, but these are usually small enough to ignore.
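For reference, this is roughly the Squid direction I've tried so far; the pattern and sizes are guesses rather than tested values:

    # Allow big update payloads into the cache and keep them for up to 30 days
    maximum_object_size 512 MB
    refresh_pattern -i \.(exe|msi|msp|cab|dmg|pkg)$ 4320 80% 43200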
I am dealing with a company that has a fairly recent Microsoft Dynamics NAV (C/Side) setup that comes with a non-SQL storage system called the native database server. I need to connect to this database remotely and perform what would equate to SQL queries, with very modest needs (no joins, no complex filtering).
I am rather ignorant of this technology; does someone know how to make remote queries to this ERP?
In order to use password-protected file sharing in a basic home network, I want to create a number of non-interactive user accounts on a Windows 8 Pro machine, in addition to the existing set of interactive accounts. The users corresponding to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on the welcome screen.
In older versions of Windows Pro (up to Windows 7) I did this by first creating the accounts as members of "Users" group, and then including them into "Deny logon locally" list in Local Security Policy settings. This always had the desired effect. However, my question is whether this is the right/best way to do it.
The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from the "Users" group are still able to see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open the "Sign out/Lock" menu). The list that drops down contains the "Sign out" and "Lock" commands as well as the names of other users (for the switch-user functionality). For some reason that list includes the extra users from the "Deny logon locally" list. It is interesting to note that this happens when the current user belongs to the "Users" group, but not when the current user is from "Administrators".
For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can logon locally), "A" (from "Users", can logon locally), "B" (from "Users", denied logon locally). When "Administrator" is logged in, he can only see user "A" listed in his Metro "Sign out/Lock" menu, i.e. all works as it should. But when user "A" is logged in, he can see both "Administrator" and user "B" in his "Sign out/Lock" menu.
As expected, trying to switch from user "A" to user "B" in the above example by hitting "B" in the menu does not work: Windows jumps to a welcome screen that lists only "Administrator" and "A".
Anyway, on the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering if going through "Deny logon locally" setting is the right way to do it in Windows 8. Is there any other way to create a hidden non-interactive user account?
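One thing I'm considering testing, which hid accounts from the welcome screen in earlier Windows versions (whether it also cleans up the Metro switch-user list in Windows 8, I don't know):

    rem Hide account "B" from the logon UI (value name = account name, 0 = hidden)
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v B /t REG_DWORD /d 0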
I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):
eth0
 \- br0 - 1.1.1.2
     |- [VM 1's eth0]
     |   |- 1.1.1.3
     |   \- 1.1.1.4
     \- [VM 2's eth0]
         \- 1.1.1.5
My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind having the VMs doing their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed).
Thanks!
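This is the shape of what I've been trying, based on the physdev match; very much a sketch (the port and policy choices are examples):

    # Make bridged packets traverse iptables at all (often on by default)
    sysctl -w net.bridge.bridge-nf-call-iptables=1

    # Hand bridged VM traffic through untouched; the guests run their own firewalls
    iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    # Separate rules for traffic addressed to the host itself on br0 (1.1.1.2)
    iptables -A INPUT -i br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -i br0 -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -i br0 -j DROP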
I have an old program that depends on some older dynamic libraries, and those tend to get replaced easily by distro updates. I figured there would be a script, using ldd, that would gather the needed libs and create one bigger, statically linked application that wouldn't break so easily. If I could do this, a lot of older KDE libraries could be removed from my system, which would make my life easier. Thanks!
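From what I gather, an already-built dynamic binary can't be turned into a truly static one without recompiling, so the workaround I'm picturing is bundling the exact library versions next to the program; a sketch (the binary name is an example):

    #!/bin/sh
    # Copy every shared library the binary resolves to into a private directory,
    # then run the program against that directory instead of the system libs.
    mkdir -p ./bundled-libs
    ldd ./oldapp | awk '/=> \// {print $3}' | xargs -I{} cp -v {} ./bundled-libs/
    LD_LIBRARY_PATH=./bundled-libs ./oldapp "$@"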
There's a bucket into which some users may write their data for backup purposes.
They use s3cmd to put new files into their bucket.
I'd like to enforce a non-destruction policy on these buckets: it should be impossible for users to destroy data; they should only be able to add it.
How can I create a bucket policy that only lets a certain user put a file if it doesn't already exist, and doesn't let him do anything else with the bucket?
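This is as far as I've gotten. As far as I can tell, S3 cannot distinguish "create" from "overwrite" within s3:PutObject, so bucket versioning may be needed on top so that overwrites don't actually destroy anything. The account ID, user, and bucket names below are placeholders:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "UploadOnly",
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-writer"},
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::example-backup-bucket/*"
        },
        {
          "Sid": "NoDeletes",
          "Effect": "Deny",
          "Principal": {"AWS": "arn:aws:iam::123456789012:user/backup-writer"},
          "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
          "Resource": "arn:aws:s3:::example-backup-bucket/*"
        }
      ]
    }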
In my nginx configuration, I have the following:
location /admin/ {
    alias /usr/share/php/wtlib_4/apps/admin/;

    location ~* .*\.php$ {
        try_files $uri $uri/ @php_admin;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
        expires 7d;
        access_log off;
    }
}

location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
    alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
}

location ~ ^/admin/modules/([^/]+)(.*)$ {
    try_files $uri @php_admin_modules;
}

location @php_admin {
    if ($fastcgi_script_name ~ /admin(/.*\.php)$) {
        set $valid_fastcgi_script_name $1;
    }
    fastcgi_pass $byr_pass;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name;
    fastcgi_param REDIRECT_STATUS 200;
    include /etc/nginx/fastcgi_params;
}

location @php_admin_modules {
    if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) {
        set $byr_module $1;
        set $byr_rest $2;
    }
    fastcgi_pass $byr_pass;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest;
    fastcgi_param REDIRECT_STATUS 200;
    include /etc/nginx/fastcgi_params;
}
The following requested URL ends up with a 404:
http://www.{domainname}.com/admin/modules/cms/styles/cms.css
Following is the error log:
[error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com"
The following URLs work fine:
http://www.{domainname}.com/admin/modules/store/?a=manage
http://www.{domainname}.com/admin/modules/cms/?a=cms.load
Can anyone see what the problem could be? Thanks.
PS. I am trying to migrate existing sites from Apache to nginx.
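My current theory, for what it's worth: once the /admin/ prefix location matches, its nested regex locations are tried first, so the .css request is served by the nested static-file block with the apps/admin alias (which matches the path in the error log) and never reaches the outer ^/admin/modules/ regex. One workaround I'm considering (an untested sketch): give the modules path its own ^~ prefix block so it wins the location match outright:

    location ^~ /admin/modules/ {
        location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
            alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
        }
        location ~ ^/admin/modules/([^/]+)(.*)$ {
            try_files $uri @php_admin_modules;
        }
    }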
I'm trying to configure my Windows 2008 servers so that my developers can view their status without needing to log on to the box or be an admin. Unfortunately, the permissions set in Windows 2008 for remote non-admin users don't include the ability to enumerate or otherwise query services. This causes anything that contacts the SCM on the far end to fail (Win32_Service, sc.exe, services.msc, etc.).
How do I set up permissions so that they can at least list the services and see if they are running?
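The direction I've been exploring, not yet confirmed, is loosening the security descriptor on the Service Control Manager itself via sc, which reads and writes SDDL strings. The idea would be to splice an ACE for Authenticated Users into whatever sdshow prints, rather than pasting my example verbatim:

    rem Dump the current SCM security descriptor first and keep a copy
    sc sdshow scmanager

    rem Example only: an added (A;;CCLCRPRC;;;AU) ACE grants Authenticated Users
    rem connect/enumerate/read access; merge it into the exact string sdshow returned
    sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"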
Hi,
I'm working with some partners in the UK; my office is in Vietnam. We are having a network problem: my partner can access an internal website using the domain name, e.g. http://website.name, but I can only access that website using the direct IP, e.g. 192.168.1.85. I cannot ping that web server using "ping website.name", but it works if I use "ping 192.168.1.85".
I want to use the domain name. Please help.
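What I've tried so far on my side, plus the stop-gap I'm considering while the DNS question gets sorted out:

    # Does my configured DNS server know the name at all?
    nslookup website.name

    # Stop-gap: pin the name locally in the hosts file
    # (Windows: C:\Windows\System32\drivers\etc\hosts, Linux: /etc/hosts)
    192.168.1.85    website.name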
I frequently work in different locations, and need to have a virtualbox version of Ubuntu server running locally.
While I was at home getting it set up, I was able to ssh into the server using the locally allocated IP address. However, now that I'm elsewhere, ifconfig is still showing the old 10.0.x.x IP address, but my laptop's IP now starts with 192.168.x.x instead of being in the 10.0.x.x space.
With that in mind, is there a straightforward way to set up the VirtualBox Ubuntu server such that I can just connect using "ssh servername" regardless of its IP address?
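The direction I'm leaning, in case it's sensible: keep the VM on NAT, forward a fixed host port to the guest's sshd, and give the forward a name in ~/.ssh/config, so the laptop's surrounding network no longer matters. The VM name and port are examples:

    # One-time setup (VM powered off): host port 2222 -> guest port 22
    VBoxManage modifyvm "UbuntuServer" --natpf1 "ssh,tcp,127.0.0.1,2222,,22"

    # ~/.ssh/config on the laptop:
    Host servername
        HostName 127.0.0.1
        Port 2222

After that, "ssh servername" should work from any location.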
Good time of day, SF people. I have created a manual DHCP binding entry on a Cisco router so that a client would always be leased the same address. The client wants to get the same address on both of his dual-boot Linux systems. He tries to get an IP address leased, and succeeds on one of the dual-boot operating systems; when he reboots into the other one, he gets a lease for a completely different address.
I don't get it. The MAC addresses are the same (we checked with ifconfig), so what could be happening here? Why is the router confused? Or is it something else?
Also, how can I check which DHCP server my lease came from (on Linux)?
Configuration on Cisco:
ip dhcp pool MANUAL_BINDING0001
 host 192.168.0.64 255.255.255.0
 hardware-address dead.beef.1337
 dns-server 192.168.8.11
 default-router 192.168.0.254
 domain-name verynicedomainigothere.cn
PS. Is it mandatory to use the client-name configuration line?
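One thing I've since read but not verified: when a client sends a DHCP client-identifier option, IOS matches manual bindings on that rather than on hardware-address, and two Linux installs can send different client IDs even with identical MACs. So a binding keyed on the client identifier (01 followed by the MAC, for Ethernet) might behave more predictably; a sketch of the changed pool:

    ip dhcp pool MANUAL_BINDING0001
     host 192.168.0.64 255.255.255.0
     client-identifier 01de.adbe.ef13.37
     dns-server 192.168.8.11
     default-router 192.168.0.254
     domain-name verynicedomainigothere.cn

As for finding which DHCP server answered on Linux, the dhclient lease file records it (the path varies by distro):

    grep dhcp-server-identifier /var/lib/dhcp/dhclient*.leases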