Search Results

Search found 5932 results on 238 pages for 'conditional comments'.


  • Ubuntu 12.04 - PPTP VPN is the only Internet Access

    - by user212553
    I know this has been covered. I've read dozens of posts but still have questions. I have a work server whose traffic should never leave my house without encryption. The VPN is PPTP. Currently I have a cron job that checks the status of the ppp0 adapter each minute. If the connection drops, which it does fairly often, it shuts key components down. It's fairly easy to restart PPTP with "nmcli con up id 'myVPNServer'", but there's no assurance it will reconnect, and I need a better way to stop traffic (other than killing apps) when ppp0 is down. The two options I've seen discussed are the firewall (UFW, Firestarter, iptables) or the route tables. I could easily be swayed to consider the firewall option, but I focused on the route tables since no new service needs to be started. My questions involve the way the route tables change, and then specifics on rules. When I start the PPTP VPN, the route tables change. That suggests that if the VPN drops, the table will change back, defeating my stated intent of preventing external traffic. How can I make "sticky" changes to the route table that persist even if the VPN connection drops? Perhaps the check boxes "Ignore automatically obtained routes" or "Use this connection only for resources on its network" (which are part of the VPN configuration options)? It would seem that, if I can force the active VPN route table to stay in effect even when the VPN drops, this will effectively kill any external traffic should the VPN drop. That would give me the latitude to run a routine that restarts the VPN from the command line (assuming the route table rules don't prevent me re-establishing the connection). My route table with the VPN active is below (any comments on what 10.10.1.1 is?):

        $ ip route list
        default dev ppp0  proto static
        10.10.1.1 dev ppp0  proto kernel  scope link  src 10.10.1.11
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  proto static
        VPN_Server_IP_Address via 192.168.1.1 dev eth0  src 192.168.1.60
        169.254.0.0/16 dev eth0  scope link  metric 1000
        192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.60  metric 1
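
    For what it's worth, a firewall-based kill switch may be simpler than trying to pin the route table, since iptables rules survive regardless of what NetworkManager does to the routes when ppp0 disappears. A minimal sketch, assuming eth0 is the LAN interface and reusing the VPN_Server_IP_Address placeholder from the route table above:

        # allow loopback and LAN traffic
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 192.168.1.0/24 -j ACCEPT
        # allow only the traffic needed to (re)establish the PPTP tunnel
        iptables -A OUTPUT -o eth0 -d VPN_Server_IP_Address -p tcp --dport 1723 -j ACCEPT
        iptables -A OUTPUT -o eth0 -d VPN_Server_IP_Address -p gre -j ACCEPT
        # anything leaving through the tunnel itself is fine
        iptables -A OUTPUT -o ppp0 -j ACCEPT
        # everything else is dropped, so traffic stops the moment ppp0 goes away
        iptables -A OUTPUT -j DROP

    Note that if the VPN server is configured by hostname rather than IP, DNS to the local resolver would also need an ACCEPT rule before the final DROP; the restart routine can still bring the tunnel back up because the PPTP control traffic is never blocked.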

    Read the article

  • How do you feel about being asked to code during an interview?

    - by Mystere Man
    I have seen a lot of comments about good interview questions and puzzles that potential developers are required to solve during the interview process. I have personally had several interviews in which the interviewer has asked me to write some piece of code or solve a problem during the interview, and I have always performed very poorly in these "tests". The reason is simple: as a developer who spends my days talking to computers, I find I have to prepare myself and "switch gears" to be in "interview mode". I prepare myself to make a good impression. When I'm programming, I'm very focused and am totally different from when I'm being "interpersonal". I just can't get into "the zone" when I'm also having to be a charming and witty potential employee.

    I feel that by asking a developer to prove his skills during an interview, all you're doing is finding out whether they can code under pressure and at the drop of a hat. It has almost no ability to determine how they would perform in a "real life" development situation. Maybe, if you're looking for someone who can code and chat at the same time, I can see how that would be beneficial. But I think you overlook potential candidates who simply do not perform well in such an artificial environment. While I appreciate that a potential employer wants to see what I can do, I don't think an interview is the place for such a test. I mean, suppose a job for an over-the-road trucker required that you drive while being interviewed. How does that really end well?

    So I'm curious as to what others think about such situations. Have you failed interviews because you were not in the right frame of mind? Have you failed to make a good interpersonal impression because you were too distracted trying to solve the problem? If you're a hiring manager, or someone who gives interviews, do you even think about such things? Is it really important that someone perform well in an interview?

    EDIT: To clarify, I'm not against testing applicants. My concern is about testing during an interview. See also: "What are the pros and cons for the employer of code questions during an interview?", which looks at this from the interviewer's point of view.

    Read the article

  • Getting PATH right for python after MacPorts install

    - by BenjaminGolder
    I can't import some python libraries (PIL, psycopg2) that I just installed with MacPorts. I looked through these forums and tried to adjust my PATH variable in $HOME/.bash_profile in order to fix this, but it did not work. I added the location of PIL and psycopg2 to PATH. I know that Terminal is running a version of python in /usr/local/bin, rather than the one installed by MacPorts at /opt/local/bin. Do I need to use the MacPorts version of Python in order to ensure that PIL and psycopg2 are on sys.path when I use python in Terminal? Should I switch to the MacPorts version of Python, or will that cause more problems? In case it is helpful, here are more facts. PIL and psycopg2 are installed in:

        /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages

    which python returns /usr/bin/python. echo $PATH returns (I separated each path for easy reading):

        /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/
        /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages
        /opt/local/bin
        /opt/local/sbin
        /usr/local/git/bin
        /usr/bin
        /bin
        /usr/sbin
        /sbin
        /usr/local/bin
        /usr/local/git/bin
        /usr/X11/bin
        /opt/local/bin

    In python, sys.path returns:

        /Library/Frameworks/SQLite3.framework/Versions/3/Python
        /Library/Python/2.6/site-packages/numpy-override
        /Library/Frameworks/GDAL.framework/Versions/1.7/Python/site-packages
        /Library/Frameworks/cairo.framework/Versions/1/Python
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages
        /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload
        /Library/Python/2.6/site-packages
        /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/PyObjC
        /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/wx-2.8-mac-unicode

    I welcome any criticism and comments if any of the above looks foolish or poorly conceived. I'm new to all of this. Thanks! Running OS X 10.6.5 on a MacBook Pro, invoking python 2.6.1 from Terminal.
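
    One point worth checking, offered as a sketch rather than a definitive fix: PATH only controls which python binary the shell launches; it has no effect on sys.path, so adding site-packages directories to PATH won't make them importable. Since MacPorts installed PIL and psycopg2 into its own Python, invoking that interpreter directly should make the imports work:

        # in ~/.bash_profile: let the MacPorts binaries win
        export PATH=/opt/local/bin:/opt/local/sbin:$PATH

        # then, in a new Terminal session:
        which python2.6     # the MacPorts interpreter should appear under /opt/local/bin
        python2.6 -c "import PIL, psycopg2; print 'ok'"

    MacPorts of that era also shipped a python_select utility to make the plain "python" command point at the MacPorts interpreter, which may be tidier than editing PATH by hand.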

    Read the article

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one html and one aspx). I thought I had to add the IUSR and NetworkService (for application pools) accounts to C:\test and grant them appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all, as I can view any files in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and the appropriate rights granted.

    I added the following to answer the comments: I am referencing the file using a public IP (http://xxx.xxx.xxx.xxx/test/one.html), and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all, as I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to C:\test on the server (Creator Owner, System, Administrators, Users), and the app pool is running under the default NetworkService account. I basically installed Win2008, added the IIS role with ASP.NET, then opened IIS7, added a virtual directory and copied two files to the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?
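
    A hypothesis worth verifying rather than taking as fact: the folder isn't actually permission-free; C:\test inherited an ACL from the drive root, and the Users entry you listed is what lets IIS in, since on a default install both the NetworkService worker process and the IUSR anonymous account end up covered by the built-in Users group. Comparing the inherited entries would confirm it:

        icacls C:\test
        rem compare with the ACL it inherited from the drive root
        icacls C:\

    If the Users entry is removed from C:\test, the requests would be expected to start failing with 401 errors, which would demonstrate that those permissions were doing the work all along.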

    Read the article

  • mount.nfs: access denied by server while mounting (null), can't find any log information

    - by Mark0978
    Two Ubuntu servers: 10.0.8.2 is the client, 192.168.20.58 is the server. Between the two machines, ping works and ssh works (in both directions). From 10.0.8.2:

        $ showmount -e 192.168.20.58
        Export list for 192.168.20.58:
        /imr/nfsshares/foobar 10.0.8.2

        $ mount.nfs 192.168.20.58:/imr/nfsshares/foobar /var/data/foobar -v
        mount.nfs: access denied by server while mounting (null)

    I found several things online, tried them all, and still can't find any log information anywhere. On the server:

        [email protected]:/var/log# cat /etc/hosts.allow
        sendmail: all
        ALL: 10.0.8.2

    /etc/hosts.deny is all comments. How can I get a trail of log statements to figure this out? What does it take to get some logging so I have some idea of WHY it won't mount? On the server:

        [email protected]# nmap -sR RPC 192.168.20.58

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 21:16 CDT
        Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges
        Nmap scan report for 192.168.20.58
        Host is up (0.0000060s latency).
        Not shown: 988 closed ports
        PORT     STATE SERVICE VERSION
        22/tcp   open  unknown
        80/tcp   open  unknown
        111/tcp  open  unknown
        139/tcp  open  unknown
        445/tcp  open  unknown
        902/tcp  open  unknown
        2049/tcp open  unknown
        3000/tcp open  unknown
        5666/tcp open  unknown
        8009/tcp open  unknown
        8222/tcp open  unknown
        8333/tcp open  unknown

        Nmap done: 1 IP address (1 host up) scanned in 3.81 seconds

    From the client:

        [email protected]:~$ nmap -sR RPC 192.168.20.58

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 22:14 EDT
        Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges
        Nmap scan report for 192.168.20.58
        Host is up (0.73s latency).
        Not shown: 988 closed ports
        PORT     STATE SERVICE VERSION
        22/tcp   open  unknown
        80/tcp   open  unknown
        111/tcp  open  rpcbind (rpcbind V2) 2 (rpc #100000)
        139/tcp  open  unknown
        445/tcp  open  unknown
        902/tcp  open  unknown
        2049/tcp open  nfs (nfs V2-4) 2-4 (rpc #100003)
        3000/tcp open  unknown
        5666/tcp open  unknown
        8009/tcp open  unknown
        8222/tcp open  unknown
        8333/tcp open  unknown

        Nmap done: 1 IP address (1 host up) scanned in 191.56 seconds
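
    Two checks that often explain this error, offered as suggestions rather than a definitive diagnosis: the export line may not match the client the way you expect (the "(null)" in the message is a common symptom of an export mismatch), and NFS server denials can be made visible by turning on rpc debugging. A sketch of both, run on the server:

        # confirm the export line matches the client exactly (no typo, sensible options)
        cat /etc/exports
        # e.g.  /imr/nfsshares/foobar 10.0.8.2(rw,sync,no_subtree_check)

        # re-export and show what the kernel is actually serving
        exportfs -ra
        exportfs -v

        # turn on NFS server debugging, then watch syslog while retrying the mount
        rpcdebug -m nfsd -s all
        tail -f /var/log/syslog

        # turn debugging back off when done
        rpcdebug -m nfsd -c all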

    Read the article

  • phpMyAdmin: The additional features for working with linked tables have been deactivated.

    - by The Disintegrator
    I'm getting this error on the main page of phpMyAdmin (version 3.2.1deb1):

        The additional features for working with linked tables have been deactivated. To find out why click here.

    When I click the link I get this report:

        $cfg['Servers'][$i]['pmadb'] ... OK
        $cfg['Servers'][$i]['relation'] ... not OK [ Documentation ]
        General relation features: Disabled
        $cfg['Servers'][$i]['table_info'] ... not OK [ Documentation ]
        Display Features: Disabled
        $cfg['Servers'][$i]['table_coords'] ... not OK [ Documentation ]
        $cfg['Servers'][$i]['pdf_pages'] ... not OK [ Documentation ]
        Creation of PDFs: Disabled
        $cfg['Servers'][$i]['column_info'] ... not OK [ Documentation ]
        Displaying Column Comments: Disabled
        Bookmarked SQL query: Disabled
        Browser transformation: Disabled
        $cfg['Servers'][$i]['history'] ... not OK [ Documentation ]
        SQL history: Disabled
        $cfg['Servers'][$i]['designer_coords'] ... not OK [ Documentation ]
        Designer: Disabled

    I already used the script to create the tables. I assigned the permissions to the pma user. And everything is set in /etc/phpmyadmin/conf.inc.php. But it's still not working... The tables are empty; I assume they should have something in them. I'm interested in the relation and history features. Obviously I have read the documentation. Maybe something else is unsetting those values? Any thoughts?
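
    The report suggests pmadb itself is found but the per-feature table names are not configured. A sketch of the relevant config block, assuming the default table names from phpMyAdmin's create_tables.sql (adjust to whatever the setup script actually created):

        $cfg['Servers'][$i]['pmadb']           = 'phpmyadmin';
        $cfg['Servers'][$i]['controluser']     = 'pma';
        $cfg['Servers'][$i]['controlpass']     = 'your_pma_password';
        $cfg['Servers'][$i]['relation']        = 'pma_relation';
        $cfg['Servers'][$i]['table_info']      = 'pma_table_info';
        $cfg['Servers'][$i]['table_coords']    = 'pma_table_coords';
        $cfg['Servers'][$i]['pdf_pages']       = 'pma_pdf_pages';
        $cfg['Servers'][$i]['column_info']     = 'pma_column_info';
        $cfg['Servers'][$i]['history']         = 'pma_history';
        $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';

    A stale session can also keep showing the warning after the config is fixed, so it may be worth logging out of phpMyAdmin and back in after editing the file.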

    Read the article

  • The boot selection failed because a required device is inaccessible 0xc000000e

    - by bbodenmiller
    A family member of mine recently went on vacation and turned off their computer, something they normally do not do. Upon returning home, it would not start and now returns the error message below. Generally friends and family come to me for help with computers and I have no problem, however this time I am a bit stumped. Any suggestions would be greatly appreciated. As you can see, the error message is:

        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    Before reaching this error it briefly flashes the Windows loading screen. I have been able to confirm, through the Windows RE command line and the dir command, that the C: drive is accessible and the machine is likely just suffering a boot-up issue. I have tried:

        - Launching the repair process discussed in the error message three times; each time it requires a restart and then returns to the same error message.
        - Changing the boot order to put the hard drive first.
        - Getting into safe mode; F8 just results in the same error message before I can get to the menu to select safe mode.
        - Checking that the BCD (bcdedit, Boot Configuration Data) is still intact, as per https://www.symantec.com/business/support/index?page=content&id=TECH160475

    I plan to try (but would like additional comments on):

        - sfc /scannow; it requires a restart and thus will likely result in the error message again
        - A memory scan
        - Bootrec, as per http://support.microsoft.com/kb/927392#method1
        - Swapping IDE cables/ports
        - Resetting the BIOS

    I noticed others with similar issues around the web are dual-booting; however, this machine is not set up in a dual-boot environment. Additionally, at one point this error message supposedly showed up before I started working on the computer: "The instruction at 0xfbe2584d referenced memory at 0x00000008. The memory could not be read." As previously stated, any additional suggestions or words of advice would be greatly appreciated.
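
    For reference, the bootrec sequence from that KB article, run from the Windows RE command prompt, is short enough to sketch here (standard commands, but note that rebuilding the BCD rewrites the boot configuration, so it is worth backing up the BCD store first as the article describes):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /scanos
        bootrec /rebuildbcd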

    Read the article

  • Squid configuration for proxy server

    - by Ian Rob
    I have a server with 10 IPs that I want to give some friends access to via authentication, but I'm stuck on squid's config file. Let's say I have these IPs available on my server:

        212.77.23.10
        212.77.1.10
        68.44.82.112

    And I want to allocate each one of them to a different user, like so:

        212.77.23.10 goes to user manilodisan using password 123456
        212.77.1.10 goes to user manilodisan1 using password 123456
        68.44.82.112 goes to user manilodisan2 using password 123456

    I managed to add the passwords and authentication works OK, but how do I restrict one user to one of the available IPs? I have a basic setup from different bits I found over the internet, but nothing seems to work. Here's my squid.conf (all comments are removed to make it lighter):

        acl ip1 myip 212.77.23.10
        acl ip2 myip 212.77.1.10
        tcp_outgoing_address 212.77.23.10 ip1
        tcp_outgoing_address 212.77.1.10 ip2
        http_port 8888
        visible_hostname weezie
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid-passwd
        acl ncsa_users proxy_auth REQUIRED
        http_access allow ncsa_users
        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443         # https
        acl SSL_ports port 563         # snews
        acl SSL_ports port 873         # rsync
        acl Safe_ports port 80         # http
        acl Safe_ports port 21         # ftp
        acl Safe_ports port 443        # https
        acl Safe_ports port 70         # gopher
        acl Safe_ports port 210        # wais
        acl Safe_ports port 1025-65535 # unregistered ports
        acl Safe_ports port 280        # http-mgmt
        acl Safe_ports port 488        # gss-http
        acl Safe_ports port 591        # filemaker
        acl Safe_ports port 777        # multiling http
        acl Safe_ports port 631        # cups
        acl Safe_ports port 873        # rsync
        acl Safe_ports port 901        # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access deny all
        icp_access allow all
        hierarchy_stoplist cgi-bin ?
        access_log /var/log/squid/access.log squid
        acl QUERY urlpath_regex cgi-bin \?
        cache deny QUERY
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern . 0 20% 4320
        acl apache rep_header Server ^Apache
        broken_vary_encoding allow apache
        extension_methods REPORT MERGE MKACTIVITY CHECKOUT
        hosts_file /etc/hosts
        forwarded_for off
        coredump_dir /var/spool/squid
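
    One way to tie each login to an outgoing address, sketched here rather than guaranteed for your setup: the myip ACL matches the address the client connected to, not who authenticated, so defining a proxy_auth ACL per user and hanging tcp_outgoing_address off those instead may be what you want:

        # one ACL per authenticated user
        acl user1 proxy_auth manilodisan
        acl user2 proxy_auth manilodisan1
        acl user3 proxy_auth manilodisan2

        # bind each user's outbound traffic to "their" address
        tcp_outgoing_address 212.77.23.10 user1
        tcp_outgoing_address 212.77.1.10  user2
        tcp_outgoing_address 68.44.82.112 user3

        # authentication must still be enforced before the catch-all deny
        http_access allow user1
        http_access allow user2
        http_access allow user3
        http_access deny all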

    Read the article

  • How to construct SELinux rules for a Glassfish server

    - by tronda
    I'm running Glassfish 3.1 on a CentOS 6 installation, and by default SELinux is enabled. I have installed Sun's JDK version 1.6.0_29 on the server and extracted Glassfish 3.1.1 to /opt/glassfish-3.1.1, with a link /opt/glassfish pointing to the latest Glassfish version. I've also created a system user named glassfish with a home directory /home/glassfish. When running with SELinux enabled I get all sorts of errors; for instance, I'm not able to create the domain. I rather like the concept of SELinux and would prefer to keep it enabled. I have the following requirements for the Glassfish server:

        - Listening on ports 8080 and 8081
        - Other ports: 7676 (JMS), 8686 (JMX monitoring), 4848 (admin console)
        - Forwarding from Apache to Glassfish through mod_jk and port 8009
        - Starting OpenMQ as a separate process listening on 7676 and its JMX monitoring port 7776
        - Able to read and write files in a specified area (different from the home directory)
        - Able to use /tmp for temporary files

    I am aware of the audit2allow tool when running in permissive mode, but I struggle with understanding the rules it generates, and thought that setting these rules up manually the first time would help me understand SELinux rules better than the simplistic examples I've seen so far. Can someone with SELinux experience help me form these SELinux rules, with comments describing each part of the rules?
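
    Not a complete policy, but a sketch of the usual starting points (the labels and port types here are the stock CentOS 6 ones; treat the exact choices as illustrative and adjust to what your policy actually defines): label the install tree, register the ports, then iterate with audit2allow in permissive mode:

        # label the install tree so a daemon-started JVM can read and execute it
        semanage fcontext -a -t usr_t "/opt/glassfish-3.1.1(/.*)?"
        restorecon -Rv /opt/glassfish-3.1.1

        # register the listening ports; if a port is already defined in the policy
        # (8080 often is), use -m (modify) instead of -a (add)
        semanage port -a -t http_port_t -p tcp 8081
        semanage port -m -t http_port_t -p tcp 8080
        semanage port -a -t http_port_t -p tcp 4848
        # repeat as needed for 7676, 7776 and 8686

        # collect denials in permissive mode, then build a local module from them
        setenforce 0
        # ... create the domain, start OpenMQ, exercise the apps ...
        grep java /var/log/audit/audit.log | audit2allow -M glassfish_local
        semodule -i glassfish_local.pp
        setenforce 1

    Reading the generated glassfish_local.te before installing it is a reasonable way to learn the rule syntax on rules that are known to matter for your workload.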

    Read the article

  • SAN Replication for Fault tolerance using EVA4400

    - by Sergei
    Hi everyone, I hope that someone can point me in the correct direction; it looks like I don't have enough knowledge of the subject, and the timeframes are too tight for me to explore different scenarios in depth. We have two datacenters a few miles away from each other, connected by a 100 Mbps link. Each datacenter will have 5 BL490 blades with ESX Standard hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication set up. VC is going to be in the first datacenter, and both datacenters are networked. SAN replication is block level, so it seems I cannot replicate just the changes; all writes would have to be replicated. This should not be a problem, as the link can sustain about 1.8 TB a day and data can be buffered. I am having trouble, however, envisioning how recovery would work in this case. We don't need instant recovery; I would say a 4-hour recovery time is acceptable, so a fancy automatic SRM-like DR scenario would not be easily accepted, for financial reasons, though any comments are welcome. The current idea is the following: replicate LUNs from the primary site to the secondary. When disaster strikes, IT personnel switch on the ESX hosts on the remote side, connect the replicated LUNs to them, then register the VMs and change IP addresses. I understand that this seems like a horribly manual process, and I am almost sure I have missed some obvious pitfalls here. Could someone let me know what direction I should go in? Any articles regarding the subject? This is a brand new setup, and we would rather build a basic recovery process and scale it later; I just need the right direction to allow for such scalability. Thank you very much in advance!

    Read the article

  • signed applet automatically running as insecure

    - by Terje Dahl
    My application is deployed as a self-signed applet to several thousand users at more than 50 schools across the country (in Norway). The user is presented with the standard Java security warning asking if they will accept the signature. When they do, the applet runs perfectly. However, about half a year ago a group of 7 schools, all under a common IT department, stopped getting the security warning. Instead, the applet loads and starts running in untrusted mode, without first giving the user the option to accept or reject the signature. The problem is on Windows machines, and only when the machine is connected to the school's network. If they take the same machine home with them, the program functions as it should, with security warnings and everything. I know little about Windows systems in general, but I would think it would be some sort of policy file or something that is loaded when a machine hooks up to/through the school's network. Furthermore, the problem only started occurring in these 7 schools after changes made following a security breach they had a while back. The IT department is stumped. I am stumped. Any thoughts, comments, suggestions?

    Read the article

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows:

        #! /bin/sh
        set -e
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot

    (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.) mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases:

        From: root@<hostname> (Cron Daemon)
        To: root@<hostname>
        Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <id@hostname>
        Date: Fri, 7 May 2010 13:17:01 +0930 (CST)

        /etc/cron.hourly/mdadm:
        mdadm: Monitor using email address "<root_alias@domain>" from config file

    I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files:

        # /etc/crontab: system-wide crontab
        # <snip comments>
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        <snip anacron jobs>

    Should I simply remove the --report, or is there something else in my cron config somewhere causing this?
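
    Context for why this happens: cron mails whatever a job prints, and run-parts --report forwards the script's output, so the informational "Monitor using email address ..." banner on stdout is enough to trigger a message. One low-risk option, sketched under the assumption that mdadm's real alerts still go out via its own mail configuration, is to silence stdout while leaving stderr intact:

        #! /bin/sh
        set -e
        # discard the informational banner on stdout; real errors on stderr still reach cron
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot > /dev/null

    This keeps run-parts --report in place, so genuine failures from other hourly jobs are still mailed.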

    Read the article

  • Recommended motherboard with hardware raid for Linux

    - by luison
    Hi. We want to set up an internal office server for testing jobs (LAMP), email and Samba, for only about 5-10 users. We are also considering starting to virtualize, initially with a base Ubuntu Server running Xen or VMware's free Server product. Our current system runs on Linux software RAID, which has worked great, but it has always been complicated to recover the boot sector when one of the drives fails, so I would now prefer to use a hardware RAID instead, ideally with some kind of software monitoring. For this reason, and considering we don't want to spend a fortune, I would appreciate any comments on the following options:

        - A motherboard with RAID with Linux support: which could you recommend?
        - A motherboard plus a hardware RAID card: Adaptec does not seem to have great Linux support. 3ware seems to have a soft controller, which we've used at a hosting company, but it is hard to find here in Spain.
        - An HP ProLiant type basic server: which one?
        - Dell small servers: any good for Linux?

    Thanks in advance for any feedback.

    Read the article

  • Cannot connect to local network shares when connected to VPN. Error: "the user name could not be found"

    - by Nick G
    I keep finding that on our small company LAN (7 users, 3 servers) some servers keep becoming "not accessible" for the purposes of file sharing. They display the message "\\SERVER is not accessible. You might not have permission to use this network resource. The user name could not be found". But I don't know why "the user name could not be found", as all the machines are on the same domain and the PDC and BDC seem to be behaving OK.

    EDIT: VPN seems to be the cause. It turns out I can see the server if I use the IP address (\\1.2.3.4\ etc.) or the fully qualified Active Directory name (e.g. \\server.domainname.local), but not if I use the server name on its own or a mapped network drive originally created from the "short" name. Oddly, though, my machine has no issue resolving the server's DNS name, as I can ping the machine name OK and it immediately comes back with the IP; however, nslookup seems to fail. It seems to be a problem with how Windows looks up machine names when connected to VPNs: when I'm connected to a VPN, Windows seems to use the DNS associated with the VPN and not the one on the domain controller. This behaviour seems incorrect to me, as surely it would mean that connecting to any VPN would break the ability to look up local machine names for servers, printers and so on.

    So I guess the real question now is: how can I make my machine still search the local Active Directory DNS (the PDC) even when connected to a VPN? More info in my comments below.

    Read the article

  • Can not join additional domain controllers

    - by Hosm
    Hi all, I had a dead PDC and another not-so-synced domain controller for my domain. Using the comments here (link), the so-called secondary domain controller has now seized the domain roles, and I can verify from dsa.msc that it is a domain controller. I set up another server (win2003SRV) and am about to promote it to a domain controller for my domain. When I try to join the new domain controller to the domain, I face a DNS problem. Here is some more detail:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain DOMNAME.A.B:
        The query was for the SRV record for _ldap._tcp.dc._msdcs.DOMNAME.A.B
        The following domain controllers were identified by the query:
        update.DOMNAME.A.B
        Common causes of this error include:
        - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.
        For information about correcting this problem, click Help.

    It is worth noting that update.DOMNAME.A.B is the current domain controller, to which I'd like to add another controller named pdc.DOMNAME.A.B. The IP address of update.DOMNAME.A.B is 192.168.200.1 and of pdc.DOMNAME.A.B is 192.168.200.100. Querying DNS on both machines returns correct results. Any ideas?
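
    A common culprit in this situation, offered as a suggestion to verify rather than a diagnosis: the new box should use the existing DC as its DNS server (not an ISP resolver) so the SRV and A records resolve consistently, and the surviving DC's records may need re-registering after the seizure. From the new server, something like:

        rem confirm the NIC's preferred DNS points at the existing DC (192.168.200.1)
        ipconfig /all

        nslookup update.DOMNAME.A.B
        nslookup -type=SRV _ldap._tcp.dc._msdcs.DOMNAME.A.B

        rem on update.DOMNAME.A.B, re-register its A and SRV records if any are missing
        ipconfig /registerdns
        net stop netlogon && net start netlogon

    Restarting Netlogon makes the DC re-register its SRV records in DNS, which is often enough to let dcpromo on the new server find it.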

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them, which works, does not change anything). What else can I do to take ownership of these files?

    Edit: This question comes from Stack Overflow because I originally thought it was a git problem. It's definitely not (just) git. Anyway, this is to help put some of the comments in context.
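
    Since the error is specifically about extended attributes, it may be worth inspecting and then sidestepping them; a sketch using tools that ship with 10.6 (the paths are illustrative):

        # show ACLs (-e) and extended attributes (-@) on a problem file
        ls -le@ .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca
        xattr -l .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca

        # remove any ACL entries that chmod 777 would not have touched
        sudo chmod -R -N .git

        # copy without extended attributes or resource forks
        sudo cp -X .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca /eraseme/blah/

    chmod's mode bits and ACLs are separate on OS X, so a file can be 777 and still deny access through an ACL entry, which would be consistent with the symptoms described.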

    Read the article

  • What happens to Google contacts in the People app in Windows 8

    - by Klas Mellbourn
    In the People tile in Windows 8, you can connect to your different accounts, e.g. LinkedIn, Facebook, Google contacts. I have a lot of contact information in Google Contacts that I have carefully curated. I also have Facebook and LinkedIn contacts. I have already connected Facebook and LinkedIn contacts to the People app, and it seems to work ok. If I connect my Google Contacts to the People app too, what exactly will happen to the Google contacts? Will my Google contacts be modified in any way by the People app? Merged? Synced? (I understand that they will look merged in the People app, but I am wondering what will happen to the actual Google contacts, which I often use outside the People app) For instance: If a contact is in Facebook but is missing from Google Contacts, will it be created in Google contacts? If there is a picture for a person in both Facebook and Google Contacts, will the Google Contacts picture be overwritten? If I add a field, such as "Comments" to a contact in the People app, will that comment be written to the comment field for that contact in Google Contacts?

    Read the article

  • How do I grant a database role execute permissions on a schema? What am I doing wrong?

    - by Lewray
    I am using SQL Server 2008 Express Edition. I have created a login, user, role and schema. I have mapped the user to the login and assigned the role to the user. The schema contains a number of tables and stored procedures. I would like the role to have execute permissions on the entire schema. I have tried granting execute permission through Management Studio and by entering the command in a query window:

        GRANT EXEC ON SCHEMA::schema_name TO role_name

    But when I connect to the database using SQL Management Studio (as the login I have created), firstly I cannot see the stored procedures, but more importantly I get a permission-denied error when attempting to run them. The stored procedure in question does nothing except select data from a table within the same schema. I have tried creating the stored procedure with and without the line:

        WITH EXECUTE AS OWNER

    This doesn't make any difference. I suspect that I have made an error when creating my schema, or there is an ownership issue somewhere, but I am really struggling to get something working. The only way I have successfully managed to execute the stored procedures is by granting control permissions to the role as well as execute, but I don't believe this is the correct, secure way to proceed. Any suggestions/comments would be really appreciated. Thanks.
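
    A sketch of a setup that usually behaves, with placeholder names: the VIEW DEFINITION grant is what makes the procedures visible in Object Explorer, and EXECUTE alone should suffice to run them when the schema and its tables share an owner, because ownership chaining then skips the SELECT check on the underlying table:

        -- assumes the login, user and role already exist as described
        GRANT EXECUTE ON SCHEMA::schema_name TO role_name;
        GRANT VIEW DEFINITION ON SCHEMA::schema_name TO role_name;

        -- sanity check 1: who owns the schema?
        SELECT s.name AS schema_name, dp.name AS owner_name
        FROM sys.schemas s
        JOIN sys.database_principals dp ON s.principal_id = dp.principal_id
        WHERE s.name = 'schema_name';

        -- sanity check 2: what did the role actually receive?
        SELECT pr.name, pe.permission_name, pe.state_desc
        FROM sys.database_permissions pe
        JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
        WHERE pr.name = 'role_name';

    If the schema's owner differs from the owner of the tables inside it, the ownership chain is broken and EXECUTE alone will produce exactly the permission-denied behaviour described, which would explain why CONTROL appeared to "fix" it.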

    Read the article

  • Leopard Network Shares and browsing are unreliable

    - by EvilChookie
    I have two Macs running Leopard 10.5.8. One is a 13" MBP connected via WiFi, and the other is a 24" 2008 iMac connected via ethernet. There are at least another 6-10 machines (Windows and Mac) awake on the network (with shares) at any given time, yet there are plenty of times when I cannot see any devices/shares in the "Shared" section in Finder, nor any computers in "Network" in Finder. Restarting doesn't help. I've restarted all the networking gear in the house to no avail. Our network is a series of gigabit switches connected to a D-Link gaming router. I believe we use OpenDNS, and our provider is Cox. I hate having to use "Go - Connect to Server" to browse to commonly used file shares (by IP). I'd like to know why my shares do not always and consistently appear in Leopard.

    Edit: I ran OnyX this morning and performed the cleaning and maintenance operations (including disk permissions) on both my Macs, and at least one of them has started showing network devices again (the other is still going). No idea how long this will last. Any ideas as to what is causing this issue, and how to prevent it?

    Edit 2: Aaaand there the shares go again. So running OnyX is not a permanent or reliable fix for this issue.

    Edit 3: After a clean reinstall and update, network shares are still unreliable. The smbclient command mentioned in comments shows me the information it's supposed to show, but the shares do not appear in the Shared section. They'll also vanish at random and reappear at random throughout the day.

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote up a Python script to ping the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure. It does that about every minute. What I have found is that when our internet goes out, our Juniper router stops responding to pings on the external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type; they all look normal. The logs complain about VIP servers being unavailable, but otherwise contain nothing indicative of network issues. My questions are these: Does this exonerate our ISP? Or, conversely, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem?

    UPDATE: It turned out that one of the switches between my monitoring box and the router was a router itself, and was occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. Not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.

    Read the article

  • Ati X1600 driver problem on Mac

    - by Mulot
    Hi all, I currently own an '06 MacBook Pro 1,1, and for some months I have had recurrent display bugs and artifacts. I searched around and quickly found that a lot of other Mac users (iMac or MacBook Pro) have the same problem, due to a defect in the X1600 video card. Apparently it's caused by overheating; in my case, even without the machine getting particularly hot, I get very bad display bugs such as lines of colourful pixels, glitches, freezes and crashes, all of this on Tiger, Leopard and Snow Leopard. I found an interesting article here talking about this problem and trying to gather people so that Apple takes the serious GPU problem into consideration. In one of the comments, a user said he removed all bundles named with "radeon", and then had no more problems under Leopard; it also seems to work fine on Snow Leopard. I did the same thing: I removed the driver bundles, restarted, and had no more problems, but also no more 3D acceleration, which is not an acceptable solution. For those interested, here is the list of files to delete to stop having this problem:

        /System/Library/Extensions/ATIRadeonX1000.kext
        /System/Library/Extensions/ATIRadeonX1000GA.plugin
        /System/Library/Extensions/ATIRadeonX1000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX1000VADriver.bundle
        /System/Library/Extensions/ATIRadeonX2000.kext
        /System/Library/Extensions/ATIRadeonX2000GA.plugin
        /System/Library/Extensions/ATIRadeonX2000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX2000VADriver.bundle

    I would like to know if there is a way to fix this using other drivers, if that's possible, or by creating a group to force Apple to put a replacement program in place.

    Edit: to locate those files on your hard drive if you are not under Snow Leopard:

        sudo find / -iname "*radeon*"

    Read the article

  • Problem running mysql client, cannot connect to mysql server

    - by ehsanul
    Edit 3: Thanks for the help, everyone. Sorry for wasting anybody's time, but it seems like a simple reboot solved it. I should've known better, but I just assumed that the "restart" solution is mostly valid only for MS Windows (no offense). I'll keep this in mind before I ask a question here again.

    I installed the mysql-client-5.0 and mysql-server-5.0 packages on Ubuntu 8.04, using sudo apt-get install. When I try to run the "mysql" command, I get the following error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    To verify that the mysql server is running, I tried this, and it does seem to be running, with the correct socket too:

        $ ps aux | grep mysql
        root     13388  0.0  0.0   1772   528 ?      S   06:24  0:00 /bin/sh /usr/bin/mysqld_safe
        mysql    13553  0.0  1.4 127012 15332 ?      Sl  06:25  0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
        root     13555  0.0  0.0   3008   696 ?      S   06:25  0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
        ehsanul  16910  0.0  0.0   3092   772 pts/4  R+  07:17  0:00 grep mysql

    So I don't understand why I'm getting an error trying to connect to the mysql server. Note that I'm completely new to mysql.

    Edit: As requested in the comments, the exact command that is returning the error is simply "sudo mysql". And when I check netstat for active network services, I do see an entry for port 3306, with Protocol: tcp, IP Source: 127.0.0.1, State: LISTEN.

    Edit 2: It appears that the /var/run/mysqld/mysqld.sock socket doesn't exist (if I'm interpreting the following output correctly):

        $ ls -al /var/run/mysqld/
        total 0
        drwxr-xr-x  2 mysql root  40 2009-08-06 06:36 .
        drwxr-xr-x 20 root  root 860 2009-08-06 06:25 ..
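
    For anyone hitting the same symptom when a reboot isn't an option, a sketch of the usual workaround while the socket file is missing: connect over TCP instead, which bypasses the unix socket entirely, and compare the socket path the client expects with the one the server is configured to create:

        # force a TCP connection instead of the unix socket
        mysql -h 127.0.0.1 -P 3306 -u root -p

        # check which socket paths the client and server sections are configured for
        grep -n "socket" /etc/mysql/my.cnf

    If the two socket entries in my.cnf disagree, the server creates its socket in one place while the client looks in another, which produces exactly this ERROR 2002.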

    Read the article

  • 4.00gb (3.25gb usable) in Windows 7 x64

    - by dotnetdev
    Hi, I have set up Windows 7 Ultimate x64 on my PC. I have 4 GB RAM and my BIOS states the correct amount (4096 MB), but Windows (System Manager) says I have 4.00 GB (3.25 GB usable). This seems to be a common issue, and I have looked for an integrated video card (integrated with my chipset) to disable, but haven't found anything. What else can be preventing me from seeing all 4 GB? When I had Vista 32-bit, it would say 3.25 GB RAM, not 4.00 GB (3.25 GB usable). I have an x64 CPU, and when I bought my RAM I used a compatibility tool from Crucial (the memory vendor) to test how much memory my PC can support, and 4 GB was the answer (this was a Windows app, I think). The chipset is the Intel(R) G33/G31/P35/P31 Express Chipset PCI Express. In the BIOS, I looked for an onboard (integrated) video card, but there was no such thing, only a couple of other onboard devices. There are also no "Resource Mappings" settings.

    FURTHER DETAILS:

        Chipset North Bridge: Intel Bearlake G33
        South Bridge: Intel 82801IR ICH9R
        Maximum Memory Amount: 8 GB
        Graphics Controller Type: Intel GMA 3100 (Enabled)

    I guess the first thing is: how do I disable the graphics controller?

    EDIT: This thread (http://forums.legitreviews.com/about23417.html) indicates the issue is with memory-mapped devices, but someone on that thread says that does not apply to x64. The rest of the comments point to a motherboard issue for the guy who started that thread. Thanks

    Read the article

  • Cannot access certain URL on my wireless

    - by dehmann
    Problem: On my wireless network at home, there is one URL that I just cannot access with my browser: http://research.microsoft.com/ I have no problems with the Internet connection otherwise. But on that address I just get the following from Firefox:

        The connection was reset
        The connection to the server was reset while the page was loading.

    I am using a DSL modem (Westell) and a Linksys wireless router (using DHCP). When I use my neighbor's wireless connection, I can access the microsoft site without a problem.

    Additional technical details: With my connection, here is what I get from nslookup. It is weird: it first cannot find the address, but after I look up another address it can find it:

        $ nslookup research.microsoft.com
        ;; connection timed out; no servers could be reached

        $ nslookup google.com
        Non-authoritative answer:
        Name:   google.com
        Address: 72.14.204.104
        Name:   google.com
        Address: 72.14.204.147
        Name:   google.com
        Address: 72.14.204.99
        Name:   google.com
        Address: 72.14.204.103

        $ nslookup research.microsoft.com
        Non-authoritative answer:
        Name:   research.microsoft.com
        Address: 131.107.65.14

    But even after nslookup finds it, Firefox still cannot access it. Here is what traceroute says:

        $ traceroute http://research.microsoft.com/
        traceroute: Warning: http://research.microsoft.com/ has multiple addresses; using 8.15.7.117
        traceroute to http://research.microsoft.com/ (8.15.7.117), 64 hops max, 40 byte packets
         1  dslrouter.westell.com (1XX.XXX.X.X)  4.515 ms  2.760 ms  3.072 ms
         2  * * *

    Traceroute just to the IP:

        $ traceroute 131.107.65.14
        traceroute to 131.107.65.14 (131.107.65.14), 64 hops max, 40 byte packets
         1  dslrouter.westell.com (1XX.XXX.X.X)  11.912 ms  2.684 ms  2.808 ms
         2  * * *

    Comparison: traceroute to a google.com IP:

        $ traceroute 72.14.204.99
        traceroute to 72.14.204.99 (72.14.204.99), 64 hops max, 40 byte packets
         1  dslrouter.westell.com (1XX.XXX.X.X)  6.428 ms  6.981 ms  117.099 ms
         2  * * *

    Any comments / help?

    Read the article

  • Is one server on a vlan unnecessary?

    - by moomoochoo
    DETAILS: I've been researching web hosting solutions in Japan. Based on this question, one of the services available seems to be a VLAN. I've read about the advantages of such a system for a large organization, but there doesn't seem to be much information regarding smaller setups. I take that to mean that for one server it is likely to be unnecessary? My concern is that I don't know how many other servers are on the WAN, so regardless of how many servers I use, a VLAN might still be a good idea.

    SERVER INFO: One dedicated server would be used. It would not be virtualized.

    My research so far: Based on comments here, a VLAN would be useful for mitigating these problems:

        - A user on another server could, either mistakenly or maliciously, assign one of your IP addresses to their server, resulting in a "duplicate IP" situation that would cause connectivity issues.
        - A user on another server could poison the ARP cache and potentially redirect traffic to snoop on communication intended to/from your server. (Later in the discussion this point was said to be unrealistic.)

    QUESTION: Is it worthwhile getting a VLAN for one dedicated server? Will it be easier/the same/harder to manage?

    Read the article
