Search Results

Search found 6069 results on 243 pages for 'tvlife admin'.


  • How to eliminate the domain suffix from my user profile folder when migrating to a new domain?

    - by Jerry Dodge
    We have just upgraded a decade-old SBS 2003 server to a brand new SBS 2011 machine. During the process, over 30 other client/server machines on that domain also needed to be disjoined from the old domain and rejoined to the new one. The two domains have different names, and nothing was migrated between them; the new domain was built from scratch. Since each client machine had very unique user profiles under the old domain, we needed to make sure these were all backed up and carried over to the new domain.

    For the most part, profiles were migrated with no hassle, just by renaming the user profile folder names. In one case, however, when I log in to my domain account, Windows creates a profile folder with the new domain name appended as a suffix. I have replaced all the files in the profile's root which begin with "ntuser" with the files from the new profile. The only problem is that half the applications can't find their data, because the folder name is different. How can I change this folder name and maintain this profile on the new domain?

    I have deleted every user account (except admin), deleted their profiles/folders, removed them from the registry, and made sure every trace of the account was gone. The computer was basically a dummy with only an admin account. Then, when I log into the machine under my new domain user account (same username as on the old domain), it creates a profile folder named after my username plus a suffix of the new domain name. The client machine is Windows 7 Ultimate, the old server was SBS 2003, and the new server is SBS 2011.
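
    For what it's worth, the usual way to rename a profile folder by hand is to rename it on disk and then repoint the ProfileImagePath value for that user's SID; a sketch (the username, SID, and paths here are placeholders, and this is untested against SBS policies):

        :: run from an elevated prompt while the affected user is logged off
        ren "C:\Users\jdodge.NEWDOMAIN" "C:\Users\jdodge"
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxx-yyyy-zzzz-1001" ^
            /v ProfileImagePath /t REG_EXPAND_SZ /d "C:\Users\jdodge" /f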

    Read the article

  • 'The rpc server is unavailable' or 'access is denied' error when using Remote Desktop Services Manager on Windows 7 (but mstsc.exe works!)

    - by tbone
    I am trying to connect to a Windows XP workstation from a Windows 7 Ultimate workstation using Remote Desktop Services Manager. I am able to open a Remote Desktop (mstsc.exe) session from the Win7 machine to the WinXP machine with no problem at all. When running the Remote Desktops Admin (tsmmc.msc) tool on a Windows XP box, I can also connect with no problem. However, when I use the new Remote Desktop Services Manager on Windows 7 and try to connect, I get the error: "The rpc server is unavailable". What could cause this? Has there been some fundamental change in Remote Desktop Services Manager; does it connect in a different way somehow?

    Update #1: I turned off the firewall on the Windows XP box and the "The rpc server is unavailable" error went away, so RDSM seems to be using an entirely different port/connection/service compared to mstsc.exe or the old Remote Desktops Admin tool. Now, after disabling the firewall, I get a new error: "Access is Denied". After doing some googling, I found some articles discussing this; basically, the error is very misleading. The actual problem is that if either side of the connection has dual monitors, and they are not both Win7 Ultimate, then you cannot connect using Remote Desktop Services Manager. The reason is that by default it uses the /multimon switch, and this switch requires a certain level of Windows license; there seems to be no way of changing this default (if anyone knows of a way to change it, please post an answer or comment!). Nice going Microsoft.

    http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2rds/thread/4d06278f-e0f4-4f8e-a8e1-3697ee967ef4
    http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Windows/Windows_7/Q_26225743.html

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line breaks inserted for readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is
        closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request:
        "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call in Varien_Image_Adapter_Abstract::getMimeType() never returns:

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL pointing back to the same server that is executing the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    Once the above line executes, any subsequent request to the NGINX server gets no response. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
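
    One pattern that avoids this class of hang (offered here only as an untested sketch; the host and document root are assumptions) is to translate same-host URLs into filesystem paths before calling getimagesize(), so PHP never issues an HTTP request back to the very worker pool that is busy serving it:

        <?php
        // If the image lives on this host, read it from disk instead of looping
        // back over HTTP, which can starve a small FastCGI worker pool.
        $fileName = 'http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif';
        $local = preg_replace('#^https?://test\.local/#', '/var/www/magento/', $fileName);
        if (is_file($local)) {
            list($width, $height, $type) = getimagesize($local);
        } else {
            // Missing static file: fail fast instead of waiting on a dead request.
            $width = $height = $type = null;
        }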

    Read the article

  • Connecting to SVN from a Remote Server

    - by Ashish
    I have hosted my repository on Assembla and it works fine. Now I want to write a script that can automate the build process: 1. take the code from the Assembla repository; 2. make a dump and copy it onto my web server. What I have researched on the net suggests using commands like:

        svn co svn+ssh://[email protected]/home/svn/test

    I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same from PHP using exec(); the admin has disabled that too. (I am using shared hosting and want to do an automated deployment using these simple steps. I don't want to bring my local system into this process.) Also, I'm not sure that even if I get shell access to my server, commands like svn will work there, as I don't have SVN installed on my server (it's installed on Assembla). Kindly let me know if any more explanation is required, or if I'm going down the wrong track. I'm a newbie, so please be descriptive in answering :) Thanks in advance, Ace
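
    For reference, a typical scripted pull from a hosted repository looks like the sketch below; note it must run on a machine that has an svn client installed, which is the crux of the question (the repository URL is a made-up placeholder):

        # export a clean copy (no .svn folders), then pack it for transfer
        svn export --force https://subversion.assembla.com/svn/yourspace/trunk ./build
        tar czf build.tar.gz -C build .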

    Read the article

  • Windows 7 - SBS - Why does copying a directory not include all subdirectories and files?

    - by indeed005
    Using Windows 7, fully updated... I have had some strange behaviour copying a whole user directory (e.g. "c:\users\bob" to "c:\backups\bob"). I understand now that I should have used Easy Transfer or at least robocopy, but at the time all I wanted to do was back up the user's data before using the "Delete account" button. Unfortunately, I didn't check that my copy-paste had actually worked; all it had done was copy the appdata subdirectory of the user account. At the time of making this backup I was logged in as the same user, bob (a local admin in this example).

    When I discovered the missing files, I tried again using the domain admin account -- same story. Only appdata copied. No documents folders, nothing else. Then my boss tried, and it worked fine -- it copied all the files. Ctrl+C, Ctrl+V. I tried again... same profile... and it copied all the files. Same profile, same destination, same rights and permissions, same ownership, but different behaviour. Has anyone encountered this before and come up with a solution? BTW this was not using roaming profiles, and the accounts are stored locally.
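
    Since robocopy already came up above, here is the sort of invocation commonly used for this kind of profile backup (a sketch; double-check the switches against your retention needs):

        :: copy data, attributes and timestamps, include empty subdirectories,
        :: skip junction points (which loop inside profiles), don't retry forever
        robocopy "C:\Users\bob" "C:\Backups\bob" /E /COPY:DAT /XJ /R:1 /W:1 /LOG:C:\Backups\bob-copy.log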

    Read the article

  • How to install WordPress without a web browser

    - by bvandrunen
    What I am trying to do is automate WordPress website creation for the company I work for. We have lots of information in our database for our customers, and we want to create a WordPress website for each customer. The process works great and we have no trouble with the creation of websites or the transfer of data. The problem we do have is that when we buy a new domain (http://www.newdomain.com), our process breaks if the domain takes more than 15 minutes to resolve (we call a stored procedure which installs all the data after the URL is called to install WordPress). We have tried looping, where the process checks to see if the domain resolves and keeps trying, but eventually it fails.

    So what we are looking for is a way to run the install for a URL without the domain actually resolving yet. I have seen possibilities where you can change the wp-config file, but this doesn't work for us, since we have more than one domain and it changes the source URL for all the domains. What we really need is just a way to manually start the install script through a call, either through a database or some other way, that doesn't check whether the domain is resolved or pointing at the server. Thanks for any suggestions.

    EDIT: All we do to install WordPress is call this URL: http://"newdomain".com/wp-admin/install.php?step=2 - if you change settings in the backend, calling this URL will install WordPress without having to go through the wp-admin/install.php form.
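
    One way to call that URL before DNS is in place (sketched with placeholder values; the form field names vary by WordPress version, so check the install.php of the release you deploy) is to send the request straight to the web server's IP with a forced Host header:

        # talk to the web server by IP, pretending the domain already resolves
        curl -d 'weblog_title=Example&user_name=admin&admin_password=secret&admin_password2=secret&admin_email=admin@example.com' \
             -H 'Host: www.newdomain.com' \
             'http://203.0.113.10/wp-admin/install.php?step=2'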

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine for him-/herself, so none of the machines is shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Correct me please if I'm wrong, but I think the idea behind it is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain, while the local user (usually just one local administrator per machine) is the one who cares about the configuration of the machine. But in my case the login into the domain is just for being able to access directories/servers in the domain (I do not really know the details; all I know is that logging into the domain user account is necessary).

    So overall I've got a local admin account and a domain account used on my machine, and while working I'm logged in to my domain user account. But it annoys me that I always need to enter the credentials of my local user account when I'm about to update/install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in my control panel and putting it into the Administrators group. The only thing I wanted to know about this: is there something REALLY bad about doing this? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user?

    PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)
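
    For the record, adding a domain account to the local Administrators group is exactly what the GUI step described above does; the command-line equivalent (domain and username are placeholders) would be:

        :: run from an elevated prompt
        net localgroup Administrators MYDOMAIN\ebeeb /add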

    Read the article

  • What is the maximum number of connections via Remote Desktop for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30GHz, 4 cores, 8 logical processors, 8 GB RAM. The main HD is a Samsung 840, and the big storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure.

    My question is: given this hardware, approximately how many users can the system support via "Remote Desktop Connection"? Assume there are no licensing limits. These are not admin users (I know there is a two-admin limit). This boils down to: what resources does one remote connection require? RAM? A percentage of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know this, you know how many connections you could fit on the machine before something is maxed out. In reality, users would be mostly in Excel (multi-MB spreadsheets). I know the approximate resources currently required by each copy of Excel.
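
    As a purely illustrative back-of-envelope (the per-session figures below are assumptions, not measurements): if the OS and services reserve roughly 2 GB, and an idle session plus one multi-MB Excel workbook costs roughly 300-400 MB, then (8 GB - 2 GB) / 0.35 GB is about 17 sessions before RAM becomes the ceiling. CPU and disk I/O usually bite earlier with heavy spreadsheets, so measuring a pilot group with Performance Monitor is the reliable route.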

    Read the article

  • How do I enable Ubuntu Gnome system tools

    - by RussellW
    I am running Ubuntu 10 with Gnome 2.30.2. This is a VMware Workstation image provided by another company, and I do not have support from them in this regard. I am trying to access the graphical tools for configuring the network, users, and services, but the System > Administration menu does not have these options listed. The main issue I am trying to solve is to correct the problems with the Gnome menu options and the network settings.

    I have the gnome-system-tools package installed, but I am unable to run command-line versions of the tools; for example, nm-applet gives no GUI if I run it, although the process is running in the background. I realize that I can perform many tasks on the command line, but I would like to use the GUI for administrative functions, as I am not overly proficient with all the commands for restarting services and setting a static IP with a specific gateway. Further, I can run gnome-nettool, but I cannot change the IP; I can only see my network card. nm-connection-editor does not show any network cards that I can configure to change the IP. Currently I am getting an address over DHCP through my NAT in VMware, but I want to set a specific IP address.

    Screenshots referenced (note the missing options in the menus):
    1- Preferences menu: http://i.imgur.com/kl8pP.png
    2- Admin menu: http://i.imgur.com/K3Cjz.png
    3- Network Tools (I can view but not change the IP address): Iq7Xb.png
    4- Network Settings (unable to change the IP address): 7wheV.png
    5- Network Connections (no connections listed, not even my existing Ethernet NAT connection through VMware): J2ad8.png
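
    Until the GUI tools are sorted out, a static address on that era of Ubuntu is usually set in /etc/network/interfaces; a sketch with placeholder addresses (NetworkManager may need to be stopped or told to ignore the interface):

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.168.116.50
            netmask 255.255.255.0
            gateway 192.168.116.2
        # then: sudo /etc/init.d/networking restart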

    Read the article

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server. I have a virtual private server that I'm looking to host multiple websites on, and I want to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop.

    The problem: retaining control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and other administrative functions that don't deal with web software. If I make him an admin, he can "sudo su -", become root, and remove root control from me, for example.

    I need him not to be able to:
    - take away other admins' permissions
    - change the root password
    - have control over other security/administrative functions

    I would like him to still be able to:
    - install software (through apt-get)
    - restart apache
    - access mysql
    - configure mysql/apache
    - reboot
    - edit web-development configuration type files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know whether that setup is possible. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
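
    A minimal sketch of such a sudoers entry (edit with visudo; the paths are Debian/Ubuntu-style assumptions, and note that unrestricted apt-get or free editing in /etc can still be escalated to root by a determined user):

        # /etc/sudoers fragment
        Cmnd_Alias WEBDEV = /usr/bin/apt-get, \
                            /usr/sbin/service apache2 *, \
                            /etc/init.d/apache2 *, \
                            /sbin/reboot
        developer ALL = (root) WEBDEV

    MySQL access and config edits are often handled outside sudo entirely: a MySQL account for the developer, plus group write permission on the specific files in /etc he needs.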

    Read the article

  • Load balancing with nginx and Tomcat

    - by London
    Hello, this should be fairly easy to answer for any system admin. The problem is that I'm not a server admin, but I have to complete this task; I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access them by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, he is transferred to either machine1:8080/appName or machine2:9090/appName? Thank you
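
    One common fix (an untested sketch, keeping the other proxy_set_header lines as they are) is to give proxy_pass a URI, so the matched location prefix is replaced with /appName/ on the way to the upstream:

        location / {
            # "/foo" becomes "/appName/foo" before the request reaches Tomcat
            proxy_pass http://backend/appName/;
        }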

    Read the article

  • Windows 7 Taskbar resets on every login

    - by Arne Mertz
    I like to reorder my taskbar a bit differently from the Windows 7 default. I use two "rows"; the lower one is for quick launch and other toolbars. This works perfectly, as long as I don't log off from the computer. Every time I log in, Windows 7 has messed up/reset the toolbar positions, so I have to drag them into position again and again, every morning. Locking the taskbar positions doesn't help. I tried to google for the problem, but it does not seem to be very common. Does anyone recognize this problem and have a solution?

    Update: This is not the AutoLogon bug; AutoLogon is off. We have Novell installed at our company, and it does not matter whether I log directly onto the Novell network or only onto the computer first and into Novell later.

    Update 2: I get the same issue when I log on without Novell, i.e. when I log on only to the computer. When I boot in safe mode, the taskbar looks essentially the same.

    Update 3: KB979155 says it is "not applicable to my system". Creating a new user is not an option, since I don't have the admin privileges to do that - I have almost all other local admin privileges, though.

    Read the article

  • OpenLDAP 2.4.23 - Debian 6.0 - Import schema - Insufficient access (50)

    - by Yosifov
    Good day to everybody. I'm trying to add a new schema to OpenLDAP, but I get an error: ldap_add: Insufficient access (50)

        root@ldap:/# ldapadd -c -x -D cn=admin,dc=domain,dc=com -W -f /tmp/test.d/cn\=config/cn\=schema/cn\=\{5\}microsoft.ldif

    The LDIF file (continuation lines rejoined here for readability):

        dn: cn=microsoft,cn=schema,cn=config
        objectClass: olcSchemaConfig
        cn: microsoft
        olcAttributeTypes: {0}( 1.2.840.113556.1.4.302 NAME 'sAMAccountType' DESC 'Fssssully qualified name of distinguished Java class or interface' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
        olcAttributeTypes: {1}( 1.2.840.113556.1.4.146 NAME 'objectSid' DESC 'Fssssully qualified name of distinguished Java class or interfaced' SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE )
        olcAttributeTypes: {2}( 1.2.840.113556.1.4.221 NAME 'sAMAccountName' DESC 'Fdssssully qualified name of distinguished Java class or interfaced' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
        olcAttributeTypes: {3}( 1.2.840.113556.1.4.1412 NAME 'primaryGroupToken' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
        olcAttributeTypes: {4}( 1.2.840.113556.1.2.102 NAME 'memberOf' SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 SINGLE-VALUE )
        olcAttributeTypes: {5}( 1.2.840.113556.1.4.98 NAME 'primaryGroupID' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
        olcObjectClasses: {0}( 1.2.840.113556.1.5.6 NAME 'securityPrincipal' DESC 'Csontainer for a Java object' SUP top AUXILIARY MUST ( objectSid $ sAMAccountName ) MAY ( primaryGroupToken $ memberOf $ primaryGroupID ) )

    I also tried to add the schema via phpLDAPadmin, but I get the same error. I'm using the admin user which is created by default at the beginning of the slapd installation. How may I add permissions to this user? Best wishes
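
    On a stock Debian install, cn=config is typically not writable by cn=admin,dc=domain,dc=com at all; it is owned by the local root identity via SASL EXTERNAL. If that default applies here, loading the schema as root should work without touching any ACLs:

        ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/test.d/cn\=config/cn\=schema/cn\=\{5\}microsoft.ldif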

    Read the article

  • The Web Hosting Conundrum for "not quite" developers

    - by saltcod
    Hey all, apologies if this post feels like it's been covered elsewhere, but I don't think it has. I've been down a winding web hosting road. To date, I've tried: Joyent, Media Temple, Bluehost, Hostgator, and finally Linode. The reason for switching is likely obvious to everyone: speed. With the exception of the lightning-fast Linode, all of the shared hosts are absolutely sloooow.

    What to do when you're not really a "developer": while I've grown addicted to the speed of Linode, I really don't feel like it's where I should be. I have this nagging feeling in the back of my mind that one of these days (likely soon), I'm going to run into something that I won't be able to figure out, and I'll have days' worth of downtime. Just the other day, for example, I realized that one of my domains wasn't sending emails. After 4(!) hours looking into the problem, I still can't get sendmail or postfix to work. Four hours!!

    I want to be a Drupal expert, not an Ubuntu expert. That's really the heart of my problem: I spend way too much time learning Ubuntu's ins and outs, and not nearly enough time working on Drupal. So here goes: is there a web host out there anywhere that offers the speed of Linode, but will let me focus on Drupal instead of sys-admin-ing? Thanks!

    [I know, I know. There are going to be lots of people who read this saying "just learn Ubuntu like a real developer". And I get that. I do. But when I work full-time and try to develop some of these sites in my evenings and weekends, I really feel like the sys-admin stuff gets in the way.]

    Read the article

  • How to create location blocks in nginx for a single file but have it follow the rules of another location block in addition to its own?

    - by Ryan Detzel
    I have a location block for / that does all of my fastcgi stuff, and it has a normal timeout of 10s. I want to be able to have different timeouts for certain locations (/admin, /sitemap.xml). Is there an easy way to do this without copying the entire location block for each location?

        location /admin {
            fastcgi_read_timeout 5m;
            # also use the location info below.
        }

        location /sitemap.xml {
            fastcgi_read_timeout 5m;
            # also use the location info below.
        }

        location / {
            fastcgi_pass 127.0.0.1:8014;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_param REMOTE_PORT $remote_port;
            fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;
        }
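
    The usual answer (sketched here, untested; the include filename is an arbitrary choice) is to move the shared directives into their own file and include it in every location that needs them:

        # conf/fastcgi_app.conf -- the shared directives from "location /"
        fastcgi_pass 127.0.0.1:8014;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        # ... the rest of the fastcgi_param lines ...

        location /admin {
            fastcgi_read_timeout 5m;
            include fastcgi_app.conf;
        }
        location / {
            include fastcgi_app.conf;
        }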

    Read the article

  • How to set up bindings for a development IIS 7.5 with lots of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host lots of sites on a newly installed Windows Server 2008 R2 machine. These sites are test "clones" of the sites on the "real" web server, and they should be accessible only on the local network (domain). Developers add new sites for our new customers; project managers use this server to check progress and test new sites and new features; QA people have access to the sites and test them before we copy them to the "real" web server. Developers have access to the IIS console; in fact they can RDP to the test server with their developer domain credentials and permissions, and they are also local admins on that machine (tester).

    On our previous server I used a different port number for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option, because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried to add a wildcard DNS record for subdomains of the test server (*.tester), and that seemed to work for some time, but the change caused some bad problems in our domain network, and the admin forbade me to mess with DNS. He said that I have to add a DNS record for every subdomain manually and cannot use wildcards, and there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative; he is outsourced from a different organization and I can't do anything about that.

    Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester domain server? So please, I need someone to give me some advice. What should I do? Are different port numbers the only option left? Thanks!
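
    If the domain admin ever allows delegated updates, record creation can be scripted, which keeps developers out of the DNS console entirely; a sketch with placeholder names (dnscmd ships with the Windows DNS tools and needs rights on the zone):

        dnscmd dns01.corp.local /RecordAdd corp.local newsite.tester A 10.1.2.30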

    Read the article

  • Drive security settings in Windows 8 Pro

    - by Donotalo
    My PC's OS is Windows 8 Pro x64. Windows 8 seems confusing. The D:\ drive is supposed to be used solely by a single user, who is in the Users group of the PC. The requirements are:

    - that user will have full control of the D drive
    - admins will have full control of the D drive
    - all other users can only list drive contents; no file can be opened

    My account is an admin account. From the D drive's property Security tab, I've set the following:

    - Allow "List folder contents" for the Authenticated Users group
    - Allow "Full control" for SYSTEM
    - Allow "Full control" for the specific user who is supposed to use the drive
    - Allow "Full control" for the Administrators group of the computer
    - Allow "List folder contents" for the Users group

    After setting this up, the specific user has full control of the D drive, and no other user can open any file on it. But though my account is an admin account, no file on the D drive can be opened from my account! Why is this happening, and how can files be opened from my account?

    Note: All accounts on this PC are local accounts.
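
    A likely explanation on Windows 8 is UAC: an admin's processes normally run with a filtered, standard-user token unless elevated, so the Administrators ACE does not apply to ordinary applications. For reference, the equivalent grants from an elevated prompt would look roughly like this (a sketch; the rights letters only approximate the GUI presets, and the user name is a placeholder):

        icacls D:\ /grant "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F" "PC\designated-user:(OI)(CI)F"
        icacls D:\ /grant "Users:(CI)(RX)"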

    Read the article

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing-slash rules are in place in my .htaccess file. But locally, this isn't working (well, only partially). I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, is a better way to go if your server allows it. The virtual host is working, and the URL I use for my site is like this: devserver:9090

    Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]

                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]

                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>

            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory (that has an index.php), devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
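
    One thing worth knowing (a hedged hint rather than a verified fix): inside a <VirtualHost> context, mod_rewrite runs before URL-to-file mapping, so %{REQUEST_FILENAME} checks like !-d don't yet see the real filesystem path; testing against %{DOCUMENT_ROOT}%{REQUEST_URI} usually behaves as intended, e.g.:

        # leave real directories alone so DirectoryIndex can serve their index.php
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule (.*)/$ /$1.php [L]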

    Read the article

  • customer won't provide SSH access - FTP only

    - by Max
    Eh, here is my problem: I am working at a web development agency (that's a problem, but not the real problem; read on). Most of the time I choose the live server myself when creating a new website project. But now the customer already has a "server" (10 GB on a cheapo host!) and the "admin" refuses to give me SSH access to it. But I need to access the server via a shell, because many files will be transferred (I need to be able to upload and extract a tar) and I need to insert or create MySQL dumps via the command line. He argues FTP and phpMyAdmin should be enough... As far as I know, the webspace was ordered just to host the website, so no security-critical apps are running there.

    How can I either convince the admin to give me the SSH login, or tell management that we need our own server? Anyone with similar experiences? This is really annoying, as this is a very small project that should be done fast, and now one has to fight just to get the work done...
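
    As a fallback that stays within FTP-only constraints, the tar extraction at least can be done by a throwaway PHP script uploaded next to the archive; a sketch (assumes PHP 5.3+ with the Phar extension enabled, and it should be deleted immediately after use):

        <?php
        // extract.php - one-shot helper: extract site.tar in place, then remove itself.
        $tar = new PharData(__DIR__ . '/site.tar');
        $tar->extractTo(__DIR__, null, true);   // third argument: overwrite existing files
        unlink(__FILE__);
        echo "done\n";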

    Read the article

  • Better URLs for this internal web server?

    - by sprugman
    I've got a server that I have admin access to, but don't fully manage. (I think it's a virtual machine, but I'm not 100% sure. It's running Apache on Windows Server 2003.) I share the IP with another user, so my sites all have to use the :8080 port. This is kind of ugly. Also, AFAIK, the only access I have is through an IP address. (I'm inside a corporate firewall and don't think I have access to a DNS server or anything.) I've adjusted my hosts file so I don't have to use the IP address on my local machine, but that's not a very generic solution.

    Are there any options to 1) get rid of the port requirement and 2) be able to use a name (maybe a machine name) instead of the IP address in a generic way? (I'm not really a network admin; I'm a developer managing this machine. The IT folks who really manage it are a few people away from me and tough to get to do anything, so I'm looking for a lightweight solution if possible.)

    Read the article

  • How do I completely delete Ask.com from my computer?

    - by celyn
    I have used Final Uninstaller (unregistered version) to remove it. It removed the toolbar and the things in its folder from C:\Program Files\Ask.com, except for one thing: what remains is the "Ask.com" folder > "Updater" folder > "Updater.exe". I have not checked my registry yet, but if there is something there, I want it gone too!

    As to why I can't delete that updater: my laptop asks me for permission (says I need to be an admin) whenever I try to delete anything from the Ask.com folder, or the folder itself. I have googled, and followed the instructions from "Scott McClenning" in this post. It does not really work; when I say "not really", I mean this error message pops up every time I click "try again": "An error occurred applying attributes to the file: C:/Program Files/Ask.com Access is denied."

    How can I gain access? I AM the admin for this computer. And... don't ask me to download too many things for my computer; it adds to my frustration. Just in case you are wondering, I got this from FormatFactory when I updated it to 2.70. I should not have done so.

    Update: Now, after I restarted my computer, the "EVERYONE" group is in and it has Full Control, with every box ticked except for the last one (Special). When I try to delete that folder and the .exe file, the same message keeps popping up as I click "try again"; it only goes away when I click "cancel".
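
    If the blocker is ownership rather than a running process, the classic sequence from an elevated command prompt is the sketch below (worth booting to safe mode first in case Updater.exe is running):

        takeown /f "C:\Program Files\Ask.com" /r /d y
        icacls "C:\Program Files\Ask.com" /grant Administrators:F /t
        rd /s /q "C:\Program Files\Ask.com"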

    Read the article

  • Windows 7 Account Settings Vanished

    - by Roy Tang
    So I came home and stumbled upon a bit of a mystery. When I got home, my brother was using my desktop PC running Windows 7 (I have accounts for my 2 brothers and my mom on the machine, but mine is the only admin account). After he finished his game he logged out, and I logged in to my account, but found only strangeness. My Windows account seems to have been somewhat "reset", meaning:

    - my quick launch shortcuts were gone
    - my Dropbox account did not automatically log in
    - my Pidgin accounts were no longer there
    - I had to re-log in to Steam
    - iTunes could not launch (I had hooked up my iDevice before logging in)
    - the Documents/Pictures/Music shortcuts in the start menu no longer work

    However, despite that:

    - my desktop wallpaper was still correct
    - my documents folder was still there in c:\Users\my account name\My Documents, as expected
    - Google Chrome settings seem to have been retained
    - other accounts on the same machine seem to be fine

    I asked my brother if he had installed anything strange during the day; he only installed Yahoo Messenger. I last used the machine around 24 hours ago and it was fine then. I'm not sure what else has been affected. I'm inclined to just create a new admin user for me to use, but I'd like to have some idea of what actually happened.

    Read the article

  • Unwanted blank lines when checking out from SVN

    - by Alon_A
    I'm using CentOS Linux 5.8 as a web server and TortoiseSVN for synchronizing versions of our code. We write the code on Windows 7 Professional 64-bit with NetBeans and Notepad++. I'm checking the code files (.php) out on the server from the Linux shell with this command:

        svn co svnFolder serverFolder --username **** --password ****

    The problem is that after checking out the files, when I open them directly from the server (for debugging) with Notepad++ (doing View/Edit from FileZilla), I see extra blank lines. Code that looks like this on localhost (in Notepad++):

        private $producer;
        private $account;
        private $admin;
        private $producerEvents;
        private $accountProducers;
        private $adminAccounts;

    looks like this after checking out to the server (again, in Notepad++):

        private $producer;

        private $account;

        private $admin;

        private $producerEvents;

        private $accountProducers;

        private $adminAccounts;

    If I upload the files by FTP, no blank lines are added. How can I solve it? Thanks.
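
    This pattern (fine over FTP, doubled lines after checkout) usually points at CRLF/LF line-ending mismatches between the Windows working copies and the Linux checkout. The standard remedy is letting Subversion normalize endings per platform; a sketch, run from a working copy on the committing side (assumes GNU findutils):

        # mark all PHP files so svn stores LF and checks out native line endings
        find . -name '*.php' -exec svn propset svn:eol-style native {} +
        svn commit -m "normalize line endings"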

    Read the article

  • Software disables itself when the PC is accessed via RDP

    - by blckgrffn
    We have a large specialty printer with vendor-specific software that enables its use beyond showing up simply as a printer in the Windows Control Panel. This software recognizes when we RDP into the machine and "disconnects" the PC from the printer within its proprietary control panel. All is well when an application like TeamViewer is used to access the machine. Ostensibly, the application is helping us be safe by "enforcing" that the machine used for the printer is a walk-up workstation, or so the support folks informed me.

    If TeamViewer, etc., fixes the issue, then what is the problem? We have many headless workstations in our warehouse attached to a variety of specialty machines, all used via RDP. We want/need to keep access to the machines the same, for the sanity of our production staff.

    The meat of the question: how, specifically, might a machine know that it is being accessed via RDP (terminal services management???), and how might this be defeated without altering an application or driver? Of note, the system being used is a Windows 7 Pro machine hooked to the printer via USB. Thanks! Nat

    Edit: Is there any combination of /admin switches, etc. that could possibly fix this? Simply using /admin did not.
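
    For background on the detection half of the question: one documented way for an application to notice an RDP session is the SM_REMOTESESSION system metric, along the lines of this minimal C sketch (which only detects; defeating the check without touching the vendor software is another matter):

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            /* Nonzero when the calling process runs inside a Remote Desktop session. */
            if (GetSystemMetrics(SM_REMOTESESSION))
                printf("running in an RDP session\n");
            else
                printf("running at the local console\n");
            return 0;
        }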

    Read the article
