Search Results

  • Hiding my location from websites with region-specific languages/content

    - by Tudor
    I just went to download Microsoft Security Essentials, and it annoyed me that it redirected me to a site in my home language rather than the default English. If I go to America, I don't want them to speak Swahili. It reminded me of all the other websites that try to do the same - I don't want my content in Greek when I'm on vacation! I simply can't work on a computer unless the language is English (unless there's a VERY good reason to change it). Location-aware content is only good for download mirrors, or if you can't speak anything but your own language, and even then I would rather pick from a list of countries myself. I know websites get your location from your IP and ISP, but is there any way to inhibit this behaviour at the browser level? Is there a Chrome/Firefox extension for it? Do I really have no choice but to hide my IP? There are all sorts of services that claim to hide your IP for free so that people can't log and trace your steps through the internet, but they're probably logging it themselves and making money off it - why else would they be free? I've found that Firefox has an option that says "Choose your preferred language when displaying pages"; I haven't found anything for Chrome.
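
    Sites typically combine IP geolocation with the Accept-Language header the browser sends; the Firefox option above edits exactly that header, and Chrome has an equivalent setting under its Languages options. As a hedged illustration (the URL is just an example, and not every site honors the header over geo-IP), you can watch what a site does with an explicit header:

        # request US English explicitly and see where the site redirects
        curl -sI -H "Accept-Language: en-US,en;q=0.8" https://www.microsoft.com/ | grep -i '^location'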

  • Join Domain from VM

    - by Adis
    I have two VMs running on VMware Player, both using NAT adapter settings. The host machine for the VMs is on a corporate network. The first VM runs a domain controller, and I can log in on that machine using domain credentials. I named the domain wm.local. ipconfig on this machine shows:

        IP:          192.168.87.132
        Def Gateway: 192.168.87.2
        DNS server:  192.168.87.2
        DHCP server: 192.168.87.254

    The second VM cannot join the domain. When I try it with the domain name WM, I'm prompted for credentials; I enter the Administrator credentials, it waits for some time, and I get the response: "The specified domain either does not exist or could not be contacted". If I type wm.local as the domain when trying to join, it does not prompt me to log in but immediately shows "An Active Directory Domain Controller (AD DC) for the domain wm.local could not be contacted." ipconfig on this machine shows:

        IP:          192.168.87.134
        Def Gateway: 192.168.87.2
        DNS server:  192.168.87.2
        DHCP server: 192.168.87.254

    I can ping the second VM from the first one, and I disabled the firewalls on both machines. Any ideas? Is there a manual for this?
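
    One detail stands out in the ipconfig output: both VMs use the VMware NAT device (192.168.87.2) for DNS, but a domain join has to resolve the AD SRV records, which only the DC's own DNS service (192.168.87.132) can answer. A hedged check from the second VM (the adapter name is an example):

        rem can the DC's DNS resolve the domain and its SRV records?
        nslookup wm.local 192.168.87.132
        nslookup -type=SRV _ldap._tcp.dc._msdcs.wm.local 192.168.87.132
        rem if so, point this VM's DNS at the DC and retry the join
        netsh interface ip set dns "Local Area Connection" static 192.168.87.132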

  • Does Xenapp require Windows Terminal Services (Remote Desktop) licenses?

    - by John Virgolino
    We have a XenApp 5.x server that has been running for over a year now. It does not have any purchased Terminal Services (Remote Desktop) licenses installed, and it runs on a Windows Server 2008 box. I am aware that Terminal Services runs fine for about 3 months and then supposedly stops issuing licenses. On occasion, XenApp stops working and we see lots of licensing errors in the event log, though not necessarily every time; in most cases a reboot or two resolves the problem. We figured it was because of the lack of TS licenses. I spoke with Citrix and they said we had to have the licenses, but that raises the question: if we have to have the licenses, how does it work the majority of the time without them?! I have not received a straight answer yet, and before I tell my client to shell out more money, I need to understand the technical reasoning for how this is actually working if we are breaking the rules here. We will buy the licenses if necessary, but there has to be an explanation for this. I am hoping the community can help where Citrix apparently cannot. Thanks much!

  • Can't install anything anymore with apt-get

    - by Aymane Shuichi
    Welcome. This is the log I get when trying to install anything (php5-fpm here, after removing it):

        apt-get install php5-fpm
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-fpm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up php5-fpm (5.4.4-14+deb7u10) ...
        insserv: warning: script 'S55IptabLes' missing LSB tags and overrides
        insserv: warning: script 'S55IptabLex' missing LSB tags and overrides
        insserv: There is a loop between service IptabLes and mountnfs if started
        insserv: loop involving service mountnfs at depth 8
        insserv: loop involving service networking at depth 7
        insserv: loop involving service mountnfs-bootclean at depth 10
        insserv: There is a loop between service rc.local and mountall if started
        insserv: loop involving service mountall at depth 6
        insserv: loop involving service checkfs at depth 5
        insserv: loop involving service kbd at depth 11
        insserv: There is a loop between service rc.local and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 7
        insserv: loop involving service urandom at depth 9
        insserv: There is a loop between service IptabLes and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rc.local if started
        insserv: There is a loop at service IptabLes if started
        insserv: Starting IptabLes depends on rc.local and therefore on system facility `$all' which can not be true!
        (the line above is repeated 99 times)
        insserv: Max recursions depth 99 reached
        insserv: loop involving service postfix at depth 2
        insserv: There is a loop between service IptabLes and udev if started
        insserv: loop involving service mountkernfs at depth 1
        insserv: loop involving service IptabLes at depth 1

    And here is the error I get:

        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing php5-fpm (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         php5-fpm
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The biggest operation I performed before this was upgrading nginx from 1.2 to 1.6, following this guide: How to upgrade nginx from 1.2 to 1.6 on Debian 7. Please help!
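
    The insserv warnings single out the init scripts S55IptabLes and S55IptabLex, which carry no LSB dependency header, so insserv cannot order them and aborts, which is what ultimately fails the php5-fpm postinst. Purely as an illustration of what insserv expects at the top of an init script (the Provides/Required values here are placeholders, not the right ones for those scripts):

        ### BEGIN INIT INFO
        # Provides:          my-service
        # Required-Start:    $local_fs $network
        # Required-Stop:     $local_fs $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: example LSB header block
        ### END INIT INFO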

  • Sub-application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub-application, cms. In a rather convoluted way, we have content in our CMS system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice...), and this works just fine. Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the URL http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error: "There is a problem with the resource you are looking for, and it cannot be displayed." While you do have to log into the CMS system to access any of its admin pages, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also it's a 500 error rather than a 403. I'm sure I must be missing something obvious here: will the virtual directory use the permissions defined in the parent application, or those of the sub-application to which it is pointing? Or are there some other permissions I may have missed? Sorry, that was a bit long; thanks for reading all the way down here! (I also must point out that I'm pretty new to server management.) Edit: also, we have

        <location path="." inheritInChildApplications="false">

    specified in the web.config of the parent app, so it's hopefully not the issue described in this config file hierarchy article.
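
    To see what the 500 actually is, it helps to make IIS show detailed errors for that site and to check where the virtual directory resolves. A hedged appcmd sketch (site name as in the question, paths per a default IIS7 install):

        rem where does the virtual directory point, and under which application?
        %windir%\system32\inetsrv\appcmd.exe list vdir /app.name:"exampledomain.com/"
        rem show detailed errors instead of the generic 500 page
        %windir%\system32\inetsrv\appcmd.exe set config "exampledomain.com" -section:system.webServer/httpErrors -errorMode:Detailed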

  • How can one restrict network activity to only the VPN on a Mac and prevent unsecured internet activity?

    - by John
    I'm using Mac OS and connect to a VPN to hide my location and IP (I have the 'Send all traffic over VPN connection' box checked in the Network system preference); I wish to remain anonymous and do not want to reveal my actual IP, hence the VPN. I have a pref pane called pearportVPN that automatically connects me to my VPN when I get online. The problem is, when I connect to the internet using AirPort (or other means), I have a few seconds of unsecured internet connection before my Mac logs onto the VPN. It's therefore only a matter of time before I inadvertently expose my real IP address in the seconds between connecting to the internet and logging onto the VPN. Is there any way I can block all traffic to and from my Mac that does not go through the VPN, so that nothing can connect unless I'm logged onto it? I suspect I would need a third-party app that blocks all traffic except to the VPN server address, perhaps Intego VirusBarrier X6 or Little Snitch, but I'm afraid I'm not sure which is right or how to configure them. Any help would be much appreciated. Thanks!
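
    Recent OS X releases ship the pf packet filter, which can express exactly this "nothing leaves except via the VPN" policy. A minimal sketch, assuming the tunnel comes up as utun0 and the VPN server is 203.0.113.10 on UDP 1194 (both placeholders for your provider's real values):

        # /etc/pf.conf sketch: drop everything by default
        block drop all
        # allow loopback
        pass quick on lo0 all
        # allow only the tunnel setup traffic to the VPN server
        pass out proto udp from any to 203.0.113.10 port 1194 keep state
        # allow anything that rides inside the tunnel
        pass on utun0 all
        # load with: sudo pfctl -e -f /etc/pf.conf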

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page on localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible on localhost/, though, so I assume Apache is working fine. I know this problem has been discussed several times, also on Super User; I implemented and checked all I could find, but I still couldn't solve it and would be glad if someone had a suggestion for this particular case. sudo apachectl -t returns Syntax OK. I have a username.conf file in /etc/apache2/users/:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>

    as proposed here [SuperUser] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be. The httpd.conf is unchanged and therefore contains the line:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    That file in turn contains:

        UserDir Sites
        Include /private/etc/apache2/users/*.conf
        <IfModule bonjour_module>
            RegisterUserSite customized-users
        </IfModule>

    So the httpd*.conf files should be OK. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html. In the error log I simply get:

        [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied

    And yes, after each change I did sudo apachectl restart. Any help on how to solve the problem or how to further analyze it would be highly appreciated!
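
    The kernel-level (13)Permission denied usually means the Apache worker (user _www on OS X) cannot traverse one of the parent directories; the home directory itself is the usual suspect, since its permissions are not listed above. A hedged way to check and fix:

        # inspect the whole path, ACLs included
        ls -led /Users /Users/username /Users/username/Sites
        # reproduce the access as the Apache user
        sudo -u _www ls /Users/username/Sites
        # if the home directory lacks world execute, this is the commonly suggested fix
        chmod o+x /Users/username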

  • Hibernate & Sleep broken after IE 9 RTM installation in Windows 7 x64

    - by AKa
    I have a question about hibernation. I installed Internet Explorer 9 RTM x64 on my Windows 7 x64 SP1 desktop machine. Since then, the computer doesn't enter hibernation or (hybrid) sleep properly. After I hibernate the computer, the monitor goes blank and the keyboard and mouse become inactive, but the machine is still running, and there is no way to switch it off other than with the power button. That is then recognized on the next start as an improper shutdown, with a log entry saying "The previous system shutdown at xx:xx:xx on xx.xx.xx was unexpected" and the safe-mode boot menu. I'm not sure whether it has anything to do with the Internet Explorer installation, but I can definitely say I never had problems with hibernation or (hybrid) sleep before it. There is nothing suspect in the Windows logs. I switched hibernation off and on, installed new drivers for the mainboard, graphics and network card, and checked the hard disk; nothing helped. This is really annoying, because I don't like to switch the computer completely off, as it takes longer to boot. Any suggestions?
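
    Windows 7 ships diagnostics that often narrow down broken sleep states; a hedged starting point from an elevated prompt:

        rem which sleep states does the machine currently support?
        powercfg /a
        rem trace for 60 seconds and write energy-report.html with driver/firmware findings
        powercfg -energy
        rem what caused the last wake?
        powercfg -lastwake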

  • MySQL query that is returning a blatantly incorrect result

    - by user866190
    I am building a VPS node running Ubuntu 10.10, Apache2, MySQL 5.1 and PHP5. I could not log in to my website admin through the browser, even though I am using the correct login details, so I logged in from the command line to check. When I run this query I get the expected result:

        mysql> select * from users;
        +----+------------+-----------------------+------------+
        | id | username   | email                 | password   |
        +----+------------+-----------------------+------------+
        |  1 | myUserName | [email protected] | myPassword |
        +----+------------+-----------------------+------------+

    And the same goes for this query:

        mysql> select * from users where id = 1;
        +----+------------+-----------------------+------------+
        | id | username   | email                 | password   |
        +----+------------+-----------------------+------------+
        |  1 | myUserName | [email protected] | myPassword |
        +----+------------+-----------------------+------------+
        1 row in set (0.00 sec)

    But when I run this query I get this unexpected response:

        mysql> select * from users where username = 'myUserName' and password = 'myPassword';
        Empty set (0.00 sec)

    I am not sure why this is happening; any help would be greatly appreciated. BTW, I will be encrypting the user details, but for now I just want to get it set up. Thanks
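
    When an equality match on a string silently fails like this, invisible characters in the stored value (trailing whitespace, a newline pasted into the INSERT, a BOM) are a common cause, and comparing lengths and hex dumps makes them visible. A diagnostic sketch:

        SELECT username,
               LENGTH(username)      AS byte_len,
               CHAR_LENGTH(username) AS char_len,
               HEX(username)         AS hex_value
        FROM users
        WHERE id = 1;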

  • Anyone have a script to delete a specific local Windows profile?

    - by Jordan Weinstein
    I'm looking for a PowerShell (preferred), .CMD or .VBS script to delete a specific user profile on a workstation (Windows XP) or terminal server (2000, '03 or '08). I know all about the delprof utility; that only allows you to delete based on a period of inactivity. I want a script to prompt the admin for a username and then delete that username's entire profile, registry hive included, not just the folder structure within Documents and Settings - the same way it would if you went to My Computer > Properties > Advanced tab > User Profiles > Settings and deleted the profile from there. Any ideas? All I can think of is doing an AD lookup to get the SID of the specified user, then using that to delete the correct registry hive too; something simpler would be nice, though. Background: my HelpDesk used to be local administrators on our Citrix servers, and a common fix for various issues was to delete a user's profile on the Citrix server(s) and have that user log back in - voila, whatever issue they had was resolved. Going forward, in the new Citrix environment, they will no longer be local admins on those boxes, but they still need to be able to delete profiles (deleting the entire profile, folder and registry hive, is key). Thanks.
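
    On Vista/2008 and later, the Win32_UserProfile WMI class exposes the same Delete() operation the GUI dialog uses, removing both the folder and the ProfileList registry entry. A hedged PowerShell sketch - note this class does not exist on XP/2000/2003, where delprof or the GUI remain the options:

        # prompt for the account, find its (unloaded) profile, delete folder + registry hive
        $name = Read-Host "Username whose local profile should be deleted"
        $prof = Get-WmiObject Win32_UserProfile |
            Where-Object { $_.LocalPath -match "\\$name$" -and -not $_.Loaded }
        if ($prof) { $prof.Delete() } else { Write-Host "No unloaded profile found for $name" }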

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3. I have health monitoring set up on both farms, and the monitoring tool reports all servers as healthy. However, after a while all the servers are marked as "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column), and they never return to the available state. Viewing the event log on the web farm controller or on any of the farm servers doesn't reveal anything interesting: there are no warnings or errors in the period when the servers became unavailable. There are a couple of informational events about the worker process being shut down due to inactivity, but I hope this is not the cause, since that would mean the farms die during the night when load is low. Am I missing something? EDIT: By the way, I find it very odd that the application pool shuts down on the servers, since the health monitoring system polls an .aspx page on each server. Shouldn't that keep them alive? EDIT2: I have now also experienced this problem with the RTW version of Web Farm Framework 2.
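
    On the worker-process angle: IIS application pools idle out after 20 minutes by default, and a health poll only proves the pool can start on demand, not that it stays resident. To rule that out, a hedged appcmd sketch (the pool name is an example):

        rem disable the idle timeout for the farm's application pool
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyFarmPool" /processModel.idleTimeout:00:00:00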

  • XFS: No space left on device

    - by beketa
    I am using XFS on a small HDD (/dev/sdb1, less than 1 TB), storing many small files (up to 32 KB). df -h and df -i show that space is available:

        # df -hv
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3             127G   19G  102G  16% /
        tmpfs                  16G     0   16G   0% /lib/init/rw
        udev                   16G  168K   16G   1% /dev
        tmpfs                  16G     0   16G   0% /dev/shm
        /dev/sda1              99M   20M   75M  21% /boot
        /dev/sdb1             136G  123G   14G  91% /mnt/sdb1

        # df -iv
        Filesystem            Inodes    IUsed    IFree IUse% Mounted on
        /dev/sda3            8421376    36199  8385177    1% /
        tmpfs                4126158        5  4126153    1% /lib/init/rw
        udev                 4124934      671  4124263    1% /dev
        tmpfs                4126158        1  4126157    1% /dev/shm
        /dev/sda1              26112      222    25890    1% /boot
        /dev/sdb1           24905120 11076608 13828512   45% /mnt/sdb1

    However, I get a "No space left on device" error:

        # touch /mnt/sdb1/test
        touch: cannot touch `/mnt/sdb1/test': No space left on device

    I think the inode64 issue is unrelated here, because the drive is smaller than 1 TB and df -i shows free inodes; I unmounted and remounted with -o inode64 anyway and got the same error. xfs_repair does not report any problem. xfs_info shows the following:

        # xfs_info /dev/sdb1
        meta-data=/dev/sdb1            isize=1024   agcount=16, agsize=2227764 blks
                 =                     sectsz=512   attr=2
        data     =                     bsize=4096   blocks=35644210, imaxpct=25
                 =                     sunit=0      swidth=0 blks
        naming   =version 2            bsize=4096   ascii-ci=0
        log      =internal             bsize=4096   blocks=17404, version=2
                 =                     sectsz=512   sunit=0 blks, lazy-count=1
        realtime =none                 extsz=4096   blocks=0, rtextents=0

    Any ideas? Thanks!
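
    With the filesystem 91% full but plenty of free inodes, free-space fragmentation inside the allocation groups is a plausible suspect: XFS can have free blocks overall yet no extent of the size it needs where it needs it. xfs_db can print the free-extent histogram; a read-only diagnostic sketch:

        # summary histogram of free extents per size class (read-only thanks to -r)
        xfs_db -r -c "freesp -s" /dev/sdb1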

  • Where does a Redirect permanent rule need to be added?

    - by eli1128
    I want to redirect my web site's HTTP requests to HTTPS. My site is https://test, my Apache is version 2.4, and the SSL configuration (ssl.conf) lives in a file separate from httpd.conf. I am not using a .htaccess file, so where should I put:

        Redirect permanent / https://test

    I have tried it in both files, but it didn't work. Should it go in httpd.conf or in ssl.conf, or did I miss something else? I prefer to use Redirect over Rewrite. Rewrite.log:

        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (2) init rewrite engine with requested uri /error/HTTP_BAD_REQUEST.html.var
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (3) applying pattern '^(.*)$' to uri '/error/HTTP_BAD_REQUEST.html.var'
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (4) RewriteCond: input='off' pattern='!=on' => matched
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (2) rewrite /error/HTTP_BAD_REQUEST.html.var -> https://test/error/HTTP_BAD_REQUEST.html.var[QSA,R=301,L]
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (2) implicitly forcing redirect (rc=302) with https://test/error/HTTP_BAD_REQUEST.html.var[QSA,R=301,L]
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (1) escaping https://test/error/HTTP_BAD_REQUEST.html.var[QSA,R=301,L] for redirect
        10.10.86.1 - - [05/Apr/2012:15:10:19 --0700] [test/sid#7ce00][rid#277448/initial/redir#1] (1) redirect to https://test/error/HTTP_BAD_REQUEST.html.var%5bQSA,R=301,L%5d [REDIRECT/302]
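
    With Apache 2.4 the usual place is the plain-HTTP VirtualHost (in httpd.conf or a file it includes), not ssl.conf: the redirect has to fire on port 80, before TLS is involved. A minimal sketch, assuming the hostname test:

        <VirtualHost *:80>
            ServerName test
            Redirect permanent / https://test/
        </VirtualHost>

    Incidentally, the rewrite log above shows the target URL and the flags fused together (...html.var%5bQSA,R=301,L%5d), which is typically what happens when a RewriteRule is written without a space between the substitution and the [QSA,R=301,L] flags.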

  • Apache Proxy Error AH00920 and AH01206

    - by Aptos
    I ran into this problem when trying to enable the proxy function on Apache httpd. My httpd.conf:

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_connect_module modules/mod_proxy_connect.so
        LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
        LoadModule proxy_http_module modules/mod_proxy_http.so
        LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
        LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

        <VirtualHost localhost:57173>
            ServerName localhost
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order Deny,Allow
                Deny from all
                Allow from localhost
            </Location>
            <Proxy balancer://mycluster>
                BalancerMember http://localhost:9999
                BalancerMember http://localhost:9998 status=+H
            </Proxy>
            <Proxy *>
                Order Allow,Deny
                Allow From All
            </Proxy>
            ProxyPreserveHost On
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/
            ProxyPassReverse / http://localhost:9999/
            ProxyPassReverse / http://localhost:9998/
        </VirtualHost>

    When I check the log:

        [Sat Oct 27 00:16:24.506220 2012] [proxy:crit] [pid 3153] (13)Permission denied: AH00920: Failed to reopen mutex balancer://mycluster in child
        [Sat Oct 27 00:16:24.506407 2012] [proxy_balancer:crit] [pid 3152] (13)Permission denied: AH01206: Failed to init balancer balancer://mycluster in child

    Can someone please help me? I tried googling it, but there was no answer for this problem. Thank you very much.
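
    A (13)Permission denied while a child process reopens the balancer mutex usually points at the mutex file location or at a security layer such as SELinux, rather than at the proxy directives themselves. Two hedged things to try (the mutex path is an example):

        # in httpd.conf: put Apache's mutexes somewhere the children can reopen them
        Mutex file:/var/run/httpd default

        # on SELinux systems, check whether enforcement is the blocker
        getenforce
        ausearch -m avc -ts recent | grep httpd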

  • Silent install of Japanese Language Pack in Win7

    - by Doltknuckle
    Every year, due to re-imaging, I am forced to find a way to install the Japanese language pack on a collection of 30 computers. Each year I look for a way to automate this process, and each year I end up doing it manually. Maybe this year will be different. Has anyone had any luck installing and configuring Far East language support for Windows 7 without user interaction? I have already downloaded KB972813 and have a way to get it out to the computers. What I normally do is this: run the EXE with the default settings; open the language settings and create the Japanese keyboard; configure the language bar settings; copy the settings to the default user; delete the local user cache; and sign the different user accounts in to make sure the default settings are correct. The whole process takes about 10 minutes; multiplied by 30 machines, that is a 5-hour job. If I can log into all of the computers at once, I can normally cut it down to about an hour. Any ideas would be appreciated. Thanks in advance
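
    The language pack itself can be installed unattended with lpksetup, which ships with Windows 7, and the locale/keyboard half can be scripted with an intl.cpl answer file; between them that covers most of the manual steps above. A hedged sketch (paths and the XML file name are examples):

        rem silent install of the ja-JP .cab extracted from KB972813 to C:\langpacks
        lpksetup.exe /i ja-JP /r /s /p C:\langpacks
        rem apply keyboard/locale defaults from an answer file, including the default user
        control.exe intl.cpl,, /f:"C:\langpacks\ja-settings.xml"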

  • Static IP addressing issue in Ubuntu on BeagleBoneBlack Rev C

    - by Stringfellow
    I have my BBB configured to use a static IP address, with the following in /etc/network/interfaces:

        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
            network 192.168.0.0

    This works fine on boot, but when the ethernet cable is unplugged and then plugged back in, I lose the IP address. Any ideas what's going on here? Another weird symptom: if I boot the BBB with the network cable plugged in but the switch it's connected to powered off, I get my static IP; when I then turn the switch on, I get a DHCP-assigned address, even though the interface is configured static. One last thing: if I ifdown eth0, the interface disappears from ifconfig; if I wait a few seconds and re-run ifconfig, it reappears, without an IP address. (Before I disabled IPv6, I used to get an IPv4 DHCP address in this case... weird.) When that happens, I get messages like this in /var/log/messages:

        Apr 23 20:32:06 beaglebone kernel: [  737.170172] libphy: 4a101000.mdio:00 - Link is Up - 100/Full
        Apr 23 20:32:06 beaglebone kernel: [  737.170304] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

    Here's my uname -a:

        root@beaglebone:/etc# uname -a
        Linux beaglebone 3.8.13-bone47 #1 SMP Fri Apr 11 01:36:09 UTC 2014 armv7l GNU/Linux

    Any ideas what's going on here?
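
    These symptoms (static address at boot, DHCP address after link-up) match a second network manager grabbing eth0 on hotplug events; BeagleBone Debian images of that era commonly ship connman alongside ifupdown. A hedged check:

        # is another manager running a DHCP client on eth0?
        ps aux | grep -E 'connman|dhclient|udhcpc' | grep -v grep
        # if connman is present, a common test (assuming a systemd image) is:
        systemctl stop connman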

  • What's the safest way to kick off a root-level process via cgi on an Apache server?

    - by MartyMacGyver
    The problem: I have a script that runs periodically via a cron job as root, but I want to give people a way to kick it off asynchronously too, via a webpage (the script will be written so it doesn't run overlapping instances and such). I don't need the users to log in or have an account; they simply click a button, and if the script is ready to be run, it runs. The users may select arguments for the script (heavily filtered as inputs), but for simplicity let's say they just press the button. As a simple test, I created a Python script in cgi-bin. chown-ing it to root:root and then applying "chmod ug+" to it didn't have the desired result: it still runs with the effective group of the web server account, and from what I can tell this isn't allowed. I read that wrapping it in a compiled CGI program would do the job, so I created a C wrapper that calls my script (its permissions restored to normal) and gave the executable root ownership and the setuid bit. That worked: the script ran as if root ran it. My main question is: is this normal (needing the binary wrapper to get the job done), and is it the secure way to do this? It's not world-facing, but still, I'd like to learn best practices. More broadly, I often wonder why a compiled binary is more "trusted" than a script in practice. I'd think you'd trust a human-readable file over a cryptic binary; if an attacker can edit a file, you're already in trouble, more so if it's one you can't easily examine. In short, I'd expect it to be the other way around on that basis. Your thoughts?
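
    For context: Linux ignores the setuid bit on interpreted (#!) scripts, because elevating through the interpreter is considered unsafe, so needing the compiled wrapper is expected rather than an anomaly. A minimal sketch of such a wrapper - the script path is hypothetical, and a hardened version should also scrub the environment before the exec:

        /* setuid-root wrapper: compile, then chown root:root and chmod 4755 */
        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            /* make the real uid root too, not just the effective uid */
            if (setuid(0) != 0) {
                perror("setuid");
                return 1;
            }
            /* fixed argv: no user-controlled input is passed through */
            execl("/usr/local/sbin/periodic-task.py", "periodic-task.py", (char *)NULL);
            perror("execl");  /* reached only if the exec itself fails */
            return 1;
        }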

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL, and Solr; Solr runs in a servlet container, in my case Jetty, which is a Java application. I just noticed that something was terribly wrong on my website: I opened a terminal, ran the "top" command, and saw that Java was eating all the CPU and memory. I thought "OK, maybe I need more memory and CPU", so I increased them, but along with the increase the Java app started eating even more. This has never happened before, and it is either a bug or a hack of some kind. Anyway, I need to troubleshoot this now, so I wonder how to do it. Can I somehow pinpoint from some error log exactly when the memory usage started to go up? How does one troubleshoot this, and how do I prevent it? Is it possible to throttle requests somehow if too many arrive within a given window? Thanks
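
    Before guessing at limits, the JDK's own tools can show where the memory and CPU are going; a hedged starting point, with <pid> standing in for the Jetty process id. It is also worth checking which -Xmx the container was started with, since an uncapped heap will happily grow into any new RAM you add:

        jps -lv                       # find the Jetty/Solr pid and the JVM flags it runs with
        jstat -gcutil <pid> 5000      # GC activity and heap occupancy every 5 seconds
        jstack <pid> > threads.txt    # thread dump: which threads are burning CPU
        jmap -heap <pid>              # heap configuration and usage summary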

  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, with mixing different models of servers from the same vendor in one SQL Server 2005 cluster. Suppose I have one more powerful box with more RAM and one less powerful box with less memory, bound together in a 2-node cluster; these would be HP DL380 and DL580 machines (not that it should matter). I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically I am thinking a CLR proc will monitor the instances and self-regulate the memory caps on each instance, so that they won't page or step on one another. I accept that the instances might be slower or under memory pressure if they share the "lesser" node, and that's OK; the business can deal with a slower instance in a server-problem scenario. Reasonable? Any gotchas to watch out for? More info 10/28: doing some experiments with a test cluster, I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM and simultaneously lower the memory allocation below what is actively being used, it's possible to run the instance out of memory and have it halt and restart itself (an unhappy situation), with many ugly out-of-memory messages in the error log. It's an extreme case, but good to know. It seems, then, that it is only really safe to set this at startup of the instance, as in a startup script that says "I am on node 1, so my RAM settings are X; I am on node 2, so they are Y", like this: http://sqlblog.com/blogs/aaron_bertrand... Update: I am testing a SQL Agent + PowerShell solution described in more detail here.
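
    The startup-time reconfiguration described in the update can be done in plain T-SQL from a job that runs when the instance (or Agent) starts; a hedged sketch with placeholder node names and memory values:

        -- pick memory caps for this instance based on the node it is currently on
        DECLARE @node sysname;
        SET @node = CAST(SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS sysname);

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        IF @node = 'NODE1'  -- the larger box (example name)
            EXEC sp_configure 'max server memory (MB)', 49152;
        ELSE                -- the smaller box (example name)
            EXEC sp_configure 'max server memory (MB)', 16384;
        RECONFIGURE;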

  • Group Policy - Published software not upgrading

    - by VokinLoksar
    I'm testing this with Mercurial MSIs, but it's the same for other packages. I've created a new group policy and added an old version of Mercurial to User software installation as a Published package. On a Windows 7 client I install the package through Programs and Features; the installation works fine. Now I would like to publish an updated version of Mercurial, so I create a new Published package. Under 'Upgrades' I configure it to replace the old version (plain upgrade also doesn't work) and mark the upgrade as 'Required'; the old package is not removed. The Windows 7 client is then restarted. When I log back in, I see a status message along the lines of 'Removing managed software Mercurial...', but no message about installation of the upgrade. If I look in Programs and Features, I can see the new version of Mercurial listed; however, the actual Mercurial directory under Program Files is missing. It's as though the installation recorded information about the MSI but didn't actually install anything after removing the old version. As I mentioned, this isn't specific to Mercurial; I've tried other apps and have yet to find one that can be upgraded via a Published package. Using Assigned packages under Computer Configuration works without problems, but I would like this software to be optional rather than required. Ideas?

  • Redhat Linux password fail on ssh

    - by Stephopolis
    I am trying to ssh into my Linux machine from my Mac. If I am physically at the machine I can log in with my password just fine, but over ssh it refuses me with:

        Permission denied (publickey,keyboard-interactive)

    I thought it might be caused by some changes I recently made to system-auth, but I restored everything to what I believe was the original format:

        #%PAM-1.0
        # This file is auto-generated.
        # User changes will be destroyed the next time authconfig is run.
        auth        required      pam_env.so
        auth        sufficient    pam_fprintd.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        required      pam_deny.so

        account     required      pam_unix.so
        account     sufficient    pam_localuser.so
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     required      pam_permit.so

        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    required      pam_deny.so

        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so

    But I still could not ssh in. I tried removing my password altogether, and that didn't seem to help either: it still asks, and even entering an empty string fails. Any advice?
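
    Note that the methods offered in that message (publickey,keyboard-interactive) do not include password, so sshd itself may be refusing password authentication before PAM is ever consulted. Hedged checks on the server:

        # does sshd allow password / PAM-backed logins at all?
        grep -Ei '^(PasswordAuthentication|ChallengeResponseAuthentication|UsePAM)' /etc/ssh/sshd_config
        # watch the server-side reason while attempting a login from the Mac
        tail -f /var/log/secure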

  • Revamping an old and unstable IT-solution for a customer?

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices sluggish behavior or has problems with Flash games. To say the least, this isn't working for them. Now, I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 on the new server and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the domain controller for the future AD also running a VPN server, because of stability issues when something breaks in either of them; there will be no redundancy, though. However, I'm not sure whether there is something to gain by installing the VPN solution on the Windows server itself when it comes to accessing file shares over the VPN. I don't know how to let users logging in via the VPN access the remote files, since they will be reaching the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged into the Windows domain, but rather into their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules to their IT solution; I'd rather leave that untouched and go on my merry way to the next assignment.

  • Issue with a secure login - Why am I being redirected to the insecure login?

    - by mstrmrvls
    I'm having some issues getting a website working at my place of work. The issue was raised when a "double login" occurred from the secure login site: the second login prompt was actually coming from the HTTP domain, not HTTPS. In essence the situation is like this: the user navigates to https://mysite.com/something; a login prompt pops up; the user enters username and password; the user is then presented with ANOTHER login prompt (IE says it is insecure, and the address bar reflects that). If the user puts their password into the insecure prompt, they are logged in to the insecure site; if they hit cancel, they get a 401 page. Navigating back to https://mysite.com/something then bypasses the login prompt and logs them in to the secure site automatically (a cookie, maybe). I'm a bit confused as to why the user isn't logged in properly the first time (redirected to non-SSL) while every subsequent login works fine. I've been trying to use Fiddler to see what happens after the user submits the password the first time, and to get Fiddler to log in to the site automatically (with no luck). I believe the website in question is using Basic or Digest authentication. Thanks for any help.
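
    The redirect and authentication chain is easiest to see from outside the browser; a hedged sketch with curl (the URL is the one from the question, and -k tolerates an internal certificate):

        curl -kIv https://mysite.com/something 2>&1 | grep -Ei 'HTTP/|location|www-authenticate'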

  • Setting up test and live environments - how?

    - by Sean
    I am a bit new to servers and such, so I have a question. My development team is working on my website. They are in different countries, and currently they put all their work live on the test site, but the test site is open to anyone who knows the URL. It sits behind a directory, and this affects my QA process because I cannot use accurate URL structures while still keeping the general public out. So what I want to do is: have my site live on the net but visible only to me and my team, like an internal network, and also be able to mirror it to the live site when I put it live. I guess this amounts to setting up a staging and a live environment. So how do I do it? Are both environments on the same physical server, or do I need to buy two servers? If I set up a staging environment, how will my team and I access it, since we are all spread out - do we need to log into something? What about the URL: do I need a different URL for the test site, or can I use the same live URL? I plan to get a dedicated server + CDN for my site.
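
    A common pattern is to run the staging copy as its own vhost on the same server (a second server is not required) and gate it by IP address or a password, so the team sees real URL structures while the public cannot get in. A hedged Apache 2.2-style sketch, with example hostnames and addresses:

        <VirtualHost *:80>
            ServerName staging.example.com
            DocumentRoot /var/www/staging
            # allow only the team's addresses (placeholders)
            <Location />
                Order deny,allow
                Deny from all
                Allow from 203.0.113.5 198.51.100.23
            </Location>
        </VirtualHost>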

  • Snort not detecting outgoing traffic

    - by Reacen
    I'm using Snort 2.9 on Windows Server 2008 R2 x64 with a very simple configuration:

        # entire content of Snort.conf:
        alert tcp any any -> any any (sid:5000000; content:"_secret_"; msg:"TRIGGERED";)

        # command line:
        snort.exe -c etc/Snort.conf -l etc/log -A console

    Using my browser, I send the string "_secret_" in the URL to my server (where Snort is located), for example http://myserver.com/index.php?_secret_. Snort receives it and raises the alert: it works, no problem! But then I try something like this:

        <?php // (index.php)
        header('XTest: _secret_'); // header
        echo '_secret_';           // data
        ?>

    If I just request http://myserver.com/index.php, Snort does not detect anything in the outgoing traffic, even though the PHP file sends the same string both in a header and in the body, with no compression or encoding whatsoever (I checked with Wireshark). This looks to me like a Snort problem: no matter what I do, it only detects incoming packets. Has anyone ever faced this sort of problem with Snort? Any idea how to fix it?
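
    One classic cause on the sending host is TCP checksum offloading: outgoing packets are captured before the NIC fills in their checksums, and Snort quietly ignores packets with bad checksums, so only incoming traffic alerts. A hedged test is to disable checksum verification:

        snort.exe -c etc/Snort.conf -l etc/log -A console -k none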
