Search Results

Search found 34141 results on 1366 pages for 'even mien'.

  • Can't create a file even though rights allow it and I've re-logged in

    - by stiv
    I'm trying to create a file in a folder that has group write access; user tomcat7 is in the group. Why isn't it working?

        skr@konrad~/data/asu$ sudo -u tomcat7 sh
        $ whoami
        tomcat7
        $ echo > /home/skr/data/asu/g.gz.index
        sh: 2: cannot create /home/skr/data/asu/g.gz.index: Permission denied
        $ ls -la /home/skr/data/asu/
        total 18708
        drwxrwxr-x  2 skr skr 4096 Sep 29 08:38 .
        drwxrwxr-x 85 skr skr 4096 Jul 30 00:42 ..
        $ grep ^skr /etc/group
        skr:x:1002:tomcat7:mail

    I tried logging out and back in, but it doesn't help. Any ideas?
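
    Supplementary group membership is evaluated when a process starts, so it is worth confirming what groups the sudo'd shell actually has rather than what /etc/group says. A hedged diagnostic sketch (the touch target is just an example file):

        # what groups does a fresh tomcat7 process really get?
        sudo -u tomcat7 id
        # run a single command with skr forced as the effective group
        sudo -u tomcat7 sg skr -c 'touch /home/skr/data/asu/test.tmp'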

  • Windows cmd does not recognize xx.exe even though xx.exe is in the system PATH

    - by July
    I installed 7-Zip and added its directory, C:\Program Files\7-Zip, to the system PATH. If I open the Start menu, type cmd, and press Enter, a Windows command prompt starts; when I type 7z.exe and press Enter, it just runs. However, when I start the command prompt this way:

        cmd.exe /c start {projectDir} /D{driveLetter} cmd /K

    and then type 7z.exe, it gives me an error because it cannot find 7z.exe. Why, and how do I fix it? P.S. 1. I'm on Win7. 2. For some other apps the above way does work, which is why I'm so confused about how this works.

  • Cannot access the new cloned server even after new IP address assignment

    - by tough
    I was able to clone an Ubuntu 10.04 server residing in the cloud. It appeared that the new VM was not getting an IP, so I followed some of these steps:

        # cd /etc/udev/rules.d
        # cp 70-persistent-net.rules /root/
        # rm 70-persistent-net.rules
        # reboot

    I didn't follow the later commands, as I was unable to see the two eth MACs mentioned in the referenced site. After this I can see an IP for the VM, and it is different from the original IP; I have added the new IP to the DNS server. But when I try to access the clone through its newly assigned domain, I am directed to the old server. I can see both VMs running with different IPs. Where might I have gone wrong? I am new to this admin thing.
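
    A quick way to check whether stale DNS is what sends you to the old server (a hedged sketch; clone.example.com and the addresses are placeholders) is to compare answers and then bypass name resolution entirely:

        # what does your DNS server return for the new name?
        dig +short clone.example.com @your-dns-server
        # what does the resolver you are actually using return?
        dig +short clone.example.com
        # bypass DNS: if SSH to the new IP reaches the clone, the VM itself is fine
        ssh user@NEW.IP.ADDRESS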

  • Shared Files stuck locked even after closing all sessions

    - by Chris S
    We run a business app from a shared network drive (it has to be this way). When I go to do updates, it complains that files are locked. Generally there are open sessions from people who left their computers on, but with no locks on files; and there aren't necessarily any sessions open when it complains about locked files. If I close these sessions they disappear. I say "disappear" because I suspect they're actually hanging open. If I try to restart the Server service, it hangs on stopping. Restarting the whole server (it's a VM) unlocks the files. The server is a Windows 2008 R2 Enterprise VM running on Hyper-V; the share is accessed through DFS. Offline Files and caching are disabled (on the share and via GPO). All clients are Win7. Nothing has SP1 yet. Any ideas on what causes the file locks to hang? Any ideas for a solution other than rebooting the server every time?

  • Emails not sending from Outlook / OWA - Not even hitting the mail queue in Exchange

    - by webnoob
    We are having an issue this morning where we can receive external emails but cannot send internal or external ones from Outlook or OWA. If I use:

        Send-MailMessage -From <[email protected]> -To <[email protected]> -Subject "Test #01" -Body "Just a test message." -SMTPServer <Server-Name> -Credential <domain\user>

    the email is sent correctly, which makes me think there is a connection issue with OWA and Outlook. However, Outlook reports itself as connected to Exchange. I have checked message tracking in the Exchange tools, and emails sent via Outlook and OWA do not appear. Nothing changed on the server over the weekend, so I don't really know where to start debugging this issue. We are using Windows SBS 2011. We only have one send connector, which isn't using smart hosts and is set to use DNS MX records. "Use external DNS" is not checked, and I can ping google.com etc., so it doesn't appear to be a DNS issue (plus the email sends from the console anyway). EDIT: It appears that users on IMAP can send emails correctly; it's only the accounts that rely on the normal Exchange connection type that don't work. EDIT: Emails from IMAP are hitting the email queues, whereas emails from the normal Exchange accounts aren't. EDIT: It seems that some of the emails we tried to send yesterday went out at about 1am, but now it won't work again.

  • Track kids' browsing history even when they know how to clear it manually

    - by Darren Newton
    I have a colleague with two teenage boys (yes, cue clichés about "I have this friend, see..."). He's currently having issues with them browsing pr0n and wants to do a little spying on their browsing (I'm staying clear of the philosophies/ethics of this). The kids are savvy enough to clear their browsing history when they're done. As I'm his go-to for IT, he has asked me if there is a way to keep hold of the browsing history. The family uses Macs, and the kids surf with Safari. I know that browsing history is kept in ~/Library/Safari/History.plist. I figure there should be a way to write either an AppleScript or another script (Python/Ruby/Bash) that can back this file up to a different location (/opt/local/history, etc.). Since the kids know to clear their history when they're done, should the file be periodically backed up with something like a cron job or a tool like Hazel? While that could work, it seems like it would create a ton of little incremental backups. Or is it possible to 'watch' ~/Library/Safari/History.plist and incrementally add changes to a backup file (saving a diff, so to speak) without losing any data? Any ideas/solutions appreciated. UPDATE/EDIT: Got word from the concerned dad that the oldest uses Firefox on a different PC, so the OpenDNS solution (preferably at the router level) is the best answer so far, as it would capture usage for the whole house.
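
    For what it's worth, a minimal shell sketch of the backup-on-change idea (assuming the kid's account is named "kid" and /opt/local/history as the destination; both are placeholders). Run it every few minutes from cron or launchd; it only keeps a new copy when the file actually changed, which avoids the pile of identical snapshots:

        #!/bin/sh
        # copy Safari's history only when it differs from the last saved copy
        SRC="/Users/kid/Library/Safari/History.plist"
        DST="/opt/local/history"
        LATEST="$DST/latest.plist"
        mkdir -p "$DST"
        if ! cmp -s "$SRC" "$LATEST" 2>/dev/null; then
            cp "$SRC" "$DST/history-$(date +%Y%m%d-%H%M%S).plist"
            cp "$SRC" "$LATEST"
        fi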

  • First request too slow even though I have a load balancer behind it

    - by adrian7
    I have Apache 2 on CentOS + BIND, with a WordPress website on it (e.g. example.com). On another server in a different country I have also set up a load balancer (varnish:80 + nginx on 127.0.0.1:8080) for it, whose task is to serve all static content under /wp-content/. Using Simple DNS editor I added an A record for cdn.example.com pointing to that server's IP, so no extra work from a second DNS server. Then, using .htaccess, I redirect all requests for jpg|gif|css|js files to cdn.example.com. That works: all the files end up on the "cdn" server and are served right away. My problem is that the first time I visit example.com (e.g. after restarting the computer or closing the browser) the load time is 1 to 3 seconds, while any subsequent page load takes only 300 to 600 milliseconds. I know it might be a DNS issue, but I have done a cache check on several websites and cdn.example.com resolves to the right IP. Do you have any ideas where I should dig to solve this first-time slowness?
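
    A quick way to see where the first hit spends its time (a hedged sketch; run it from a client whose DNS cache is cold, and the CSS path is a placeholder) is curl's timing breakdown, which separates name resolution from connection setup and transfer:

        # how much of the first-load delay is DNS vs. connect vs. transfer?
        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' http://example.com/
        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' http://cdn.example.com/wp-content/some-file.css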

  • CentOS PHP sessions not working even though the PHP info page says they are

    - by Blake
    I have PHP installed properly from the Remi repo on CentOS 6 (64-bit). The PHP info page shows sessions as installed and working, yet I get this error:

        Fatal error: Call to undefined function session_create() in /var/www/lighttpd/index.php on line 1

    I've tried multiple reinstalls and different PHP RPMs, yet nothing will get sessions going. How can I get PHP sessions working?
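
    As a sanity check (a hedged sketch), it may help to confirm which session functions the interpreter actually defines: the stock session extension provides session_start() rather than session_create(), so the output below can separate a typo in the calling code from a genuinely missing module:

        # is the session module loaded, and which functions exist?
        php -m | grep -i session
        php -r 'var_dump(function_exists("session_start"), function_exists("session_create"));'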

  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) against my new Apache server to test performance. I noticed that with a command like the following:

        ab -n 100000 -c 1000 http://www.mysite.com/

    the CPU is used 100% by the apache2 processes during the test. The test usually concludes with the following error just before the last requests are made:

        apr_poll: The timeout specified has expired (70007)
        Total of 99960 requests completed

    Afterwards the CPU usage remains at 100%, and it's all being consumed by Apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens, or what can be done to stop it, would be appreciated.
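
    To narrow down what is still spinning once ab has exited (a hedged diagnostic sketch; the second command requires mod_status to be enabled), per-thread CPU usage is more revealing than the overall number:

        # per-thread CPU for the running apache2 processes
        top -H -p "$(pgrep -d',' apache2)"
        # how many workers does Apache itself think are busy? (needs mod_status)
        apachectl status | head -n 20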

  • SSRS Errors "Use Local", even though I am

    - by Corey Coogan
    I am at a loss. I posted this on SO, but I think this is probably a better place. I have searched high and low and don't know what to do. I am running SQL Server Web Edition on Server 2008, which only supports local databases. I am trying to connect to localhost, but when I test my connection, I get this error:

        The feature: "The edition of Reporting Services that you are using requires that
        you use local SQL Server relational databases for report data sources and the
        report server database." is not supported in this edition of Reporting Services.

    The DB was upgraded from SQL Express, and when I SELECT @@VERSION it says it's Web Edition. I've tried rebooting, and that seemed to fix it, but only for a little while.
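
    For what it's worth, a hedged sketch of how to double-check the edition the engine itself reports; SERVERPROPERTY is more specific than @@VERSION and is what edition checks typically consult:

        # ask the local instance which edition it believes it is
        sqlcmd -S localhost -Q "SELECT SERVERPROPERTY('Edition'), SERVERPROPERTY('ProductVersion');"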

  • Windows: disable remote access of local drive, even by domain admin

    - by Matt
    We have a network of Windows 7 PCs that are managed as part of a domain. What we want is for the domain admin to be unable to view the PC's local drive (C:) unless he is physically at the PC. In other words, no remote desktop and no ability to use UNC. In other words, the domain admin should not be allowed to put \\user_pc\c$ in Windows Explorer and see all the files on that computer, unless he is physically present at the PC itself. Edit: to clarify some of the questions/comments that have come up. Yes, I am an admin---but a complete Windows novice. And yes, for the sake of this and my similar questions, it is fair to assume that I am working for someone who is paranoid. I understand the arguments about this being a "social problem versus a technical problem", and "you should be able to trust your admins", etc. But this is the situation in which I find myself. I'm basically new to Windows system administration, but am tasked with creating an environment that is secure by the company owner's definition---and this definition is clearly very different from what most people expect. In short, I understand that this is an unusual request. But I'm hoping there is enough expertise in the ServerFault community to point me in the right direction.

  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256MB VPS running Ubuntu 11.04 server, and when I run free -m the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20MB. MySQL is taking up 30MB. To my knowledge, and according to top, I have no other memory hogs operating. Settings that may be relevant:

        PHP memory_limit = 32M
        MySQL key_buffer = 16M
        Prefork MPM MaxClients = 10

    When I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as being 100% used, my website loads much, much slower, despite not getting any traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the KeepAliveTimeout window, which I've set to 2 seconds. With my initial configuration of MaxClients 10, my page load times are around .3ms, so a single process should handle that no problem, correct? So next I went to the extreme level of MaxClients 1. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please!

    Edit:

                     total       used       free     shared    buffers     cached
        Mem:           256        256          0          0          0          0
        -/+ buffers/cache:        256          0
        Swap:            0          0          0
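
    A hedged sketch for finding where the memory actually goes: on container-style VPSes (OpenVZ/Virtuozzo), free can report the container's allocation limit rather than real RAM pressure, so per-process resident sizes are usually more informative:

        # the ten largest resident processes
        ps aux --sort=-rss | head -n 10
        # rough total resident memory across all processes, in MB
        ps -eo rss= | awk '{sum+=$1} END {print sum/1024 " MB"}'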

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of the quirks with APC, but something is still not quite right: no matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256MB. 32MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following:

        echo '2147483648' > /proc/sys/kernel/shmmax

    to increase my system's shared memory to 2G (half of my 4G box). Then I ran ipcs -lm, which returns:

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf and ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode ignores apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version                3.1.9
        APC Debugging          Disabled
        MMAP Support           Enabled
        MMAP File Mask         /tmp/apc.bPS7rB
        Locking type           pthread mutex Locks
        Serialization Support  php
        Revision               $Revision: 308812 $
        Build Date             Oct 11 2011 22:55:02

        Directive                    Local Value
        apc.cache_by_default         On
        apc.canonicalize             On
        apc.coredump_unmap           Off
        apc.enable_cli               Off
        apc.enabled                  On
        apc.file_md5                 Off
        apc.file_update_protection   2
        apc.filters                  no value
        apc.gc_ttl                   3600
        apc.include_once_override    Off
        apc.lazy_classes             Off
        apc.lazy_functions           Off
        apc.max_file_size            10M
        apc.mmap_file_mask           /tmp/apc.bPS7rB
        apc.num_files_hint           1000
        apc.preload_path             no value
        apc.report_autofilter        Off
        apc.rfc1867                  Off
        apc.rfc1867_freq             0
        apc.rfc1867_name             APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix           upload_
        apc.rfc1867_ttl              3600
        apc.serializer               default
        apc.shm_segments             1
        apc.shm_size                 256M
        apc.slam_defense             On
        apc.stat                     Off
        apc.stat_ctime               Off
        apc.ttl                      7200
        apc.use_request_time         On
        apc.user_entries_hint        4096
        apc.user_ttl                 0
        apc.write_lock               On

    No matter how long the server runs, the pie chart in apc.php fluctuates and hovers at just under 32MB (see http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed by refreshing the apc.php page, which shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32MB for its cache size? Note that identical behavior occurs with eAccelerator, XCache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
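
    A hedged sanity check worth running: confirm which ini files the PHP that serves the site actually parses, since an extra ini (common with per-package configs in /etc/php.d, or a per-vhost override under suEXEC) that sets apc.shm_size after yours would hold the cache at the 32M default:

        # which configuration files does PHP parse, and which shm_size wins?
        php -i 2>/dev/null | grep -iE 'loaded configuration file|additional \.ini|apc\.shm_size'
        # the CLI can differ from the web SAPI (apc.enable_cli is Off here), so
        # also check a phpinfo() page served through the web server itself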

  • Error in New-MailboxExportRequest - "Couldn't find the Enterprise Organizational Container" even though permissions seem right

    - by tacos_tacos_tacos
    When disabling users, I am typically asked to retain a copy of their mailbox. I accomplish this by literally recreating their mailbox in Outlook and then exporting it to PST. Is there some way around having to do this just to save a mailbox?

    Edit: I've tried New-MailboxExportRequest, but I keep getting the following after providing an alias:

        Supply values for the following parameters:
        FilePath: \\localhost\EXPORT_PST\myuser.pst
        Mailbox: myuser
        Couldn't find the Enterprise Organization container.   <-- the error

    I've also tried supplying [email protected] as the mailbox. Edit 2: I had already seen the post at http://www.mikepfeiffer.net/2010/10/error-couldnt-find-the-enterprise-organization-container-when-creating-a-new-mailbox-export-request/ and set the NTFS and sharing permissions as it describes (screenshots omitted), but I am still getting that error.

    Final solution: in Exchange SP2 it does not warn you that you have not set role assignments; it just fails. Be sure to create a management role for "Mailbox Import Export", add your user to the group, and restart PowerShell for this to take effect.

  • I can't delete a file - even when using unlocker

    - by chipperyman573
    I have a 64-bit Windows 7 Ultimate computer. On my desktop I have a .iso file that should NOT be in use by anything. However, when I use LockHunter (Unlocker doesn't work for me), it says it can't unlock or delete the file. When I check what it's being used by, it says the system. (Screenshots of the Delete attempt, the Unlock attempt, and "Other > Close locking process" omitted.) I've had this file on my computer for a LONG time now and haven't been able to delete it. I've also tried Eraser's erase-on-startup option, but that doesn't work either. How can I remove this file?

  • Outlook 2010 reminders keep popping up even if closed

    - by LapplandsCohan
    Today when I got to work and started my computer, I got an event reminder for my last events from last Friday. I closed the reminder, but perhaps 10 minutes later it popped up again, and another 10 minutes later it popped up yet again. As the day passes, all my reminders for the day are added to the list of reminders that keep popping up 10 minutes after being closed. I've tried both the repair function in Windows Control Panel -> E-mail and the scanpst.exe program, but neither solved the problem. I've also tried outlook.exe /cleansniff and outlook.exe /cleanreminders, but that didn't help either. From what I read on the net, there are reports of this happening with recurring calendar events, but my affected events are one-offs. How can I get the closed reminders to stay closed? Update: I noticed that I have 282 drafts of the meetings I keep getting reminders for.

  • Unable to use autossh in background even with absolute path

    - by Zagorax
    I would love to have autossh run at boot by adding it to /etc/rc.local. This command works:

        autossh -i /root/.ssh/id_rsa -R 2522:localhost:22 user@address

    But if I add the -f option:

        autossh -f -i /root/.ssh/id_rsa -R 2522:localhost:22 user@address

    the SSH session is not started. As you can see, I'm using an absolute path for my identity file, so this seems to be a different problem from the one described in "autossh in background does not work". From /var/log/syslog:

        Oct 18 11:08:39 raspberrypi autossh[2417]: starting ssh (count 1)
        Oct 18 11:08:39 raspberrypi autossh[2417]: ssh child pid is 2418
        Oct 18 11:08:39 raspberrypi autossh[2417]: ssh exited with status 0; autossh exiting

    I'm using it with Debian Wheezy on a Raspberry Pi, autossh version 1.4c. Could it be that it's passing the -f option to ssh instead?
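
    One thing that may be worth testing (a hedged sketch based on autossh's documented AUTOSSH_GATETIME environment variable): autossh treats an ssh that exits quickly as a deliberate exit and gives up, which matches the "ssh exited with status 0; autossh exiting" log line; a gatetime of 0 tells it to keep retrying, and -N stops ssh from expecting a remote command:

        # gatetime 0: don't treat a fast first exit as "intentional, stop retrying"
        AUTOSSH_GATETIME=0 autossh -f -N -i /root/.ssh/id_rsa -R 2522:localhost:22 user@address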

  • HP 4530s: Fan is Always On even When Laptop/CPU is Idle

    - by tolitius
    Just bought an HP 4530s from Newegg. The laptop is great, but the fan is always on. I did some googling and found it is a known problem, but with no easily googlable solution. I have tried to:

        - Update the BIOS to the latest version
        - Disable the "CPU fan always on when plugged in" BIOS setting
        - Install Windows 7 Home (it came with it), live-CD Ubuntu, and Windows XP
        - Spend 2 hours with horrible HP support
        - Some other things that I can't recall anymore = spent too much time on it

    The laptop is not refundable (learned that the hard way, after the fact, by looking at Newegg's clever policy hidden in "details"). I would really appreciate a workable solution/workaround/hack. The laptop is for my friend, who will most likely be running Windows XP/7.

  • Wireshark Not Displaying Packets From Other Network Devices, Even in Promisc Mode

    - by eb80
    System setup:

        1. MacBook running Mountain Lion.
        2. Wireshark installed and capturing packets (I have "capture all in promiscuous mode" checked).
        3. I filter out all packets with my own source and destination IP using the filter: ip.dst != 192.168.1.104 && ip.src != 192.168.1.104
        4. On the same network as the MacBook, I use an Android device (connected via WiFi) to make HTTP requests.

    Expected result: Wireshark running on the MacBook sees the HTTP requests from the Android device. Actual result: I only see SSDP broadcasts from 192.168.1.1. Question: what do I need to do so that Wireshark, like Firesheep, can see and use the packets (particularly HTTP) from other network devices on the same network?
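
    On WiFi, promiscuous mode alone is usually not enough: the adapter must be in monitor mode to receive frames addressed to other stations, and on an encrypted network the capture must also be decrypted (for WPA-PSK, Wireshark can do this given the passphrase and the device's association handshake). A hedged sketch of a monitor-mode capture on OS X, assuming en0 is the WiFi interface:

        # -I puts the interface in monitor mode; capture raw 802.11 frames to a file
        sudo tcpdump -I -i en0 -w wifi-capture.pcap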

  • Blue screen while using Windows even in safe mode

    - by FATE
    I have this problem with Windows 7 after I detected and cleaned up some viruses with NOD32.

        Problem signature:
        Problem Event Name:   BlueScreen
        OS Version:           6.1.7601.2.1.0.256.48
        Locale ID:            1065

        Additional information about the problem:
        BCCode:   50
        BCP1:     FFFFF900CA578010
        BCP2:     0000000000000000
        BCP3:     FFFFF9600019619B
        BCP4:     0000000000000002
        OS Version:      6_1_7601
        Service Pack:    1_0
        Product:         256_1

        Files that help describe the problem:
        C:\Windows\Minidump\060312-34195-01.dmp
        C:\Users\Fatemeh\AppData\Local\Temp\WER-51215-0.sysdata.xml

    How can I prevent these crashes?

  • Windows: make browsers do a DNS lookup even when the computer is offline

    - by leosok
    I use a local DNS server (MicroDNS), which I set via netsh to redirect every query to my own page: a little web server running inside my software that answers with something like "this page is not whitelisted". It works when the machine is connected to the Internet but does not work when offline; the browsers stop doing DNS lookups altogether. How can I make browsers go to my page, whatever I enter in the address line, WHEN OFFLINE?

  • Can't access internet even though everything is working

    - by entity64
    A friend recently upgraded to a new cable internet connection. The modem connects to the router, and various PCs and smartphones belonging to her roommates connect to the router; they have no problem accessing the internet. She has Windows 8 and can't access any website (via WiFi or ethernet). DNS (UDP) is working, DHCP sets everything up correctly, WiFi is working, and traceroutes and pings (ICMP) go through with no problem at all. But neither Dropbox nor Skype nor Spotify nor any browser (all TCP) can reach any website. The thing is, though, she can connect fine through the university WiFi and via a neighbor's WiFi. It's just her home connection. No firewalls are running and the computer is clean - no malware. How can it be that only her home connection fails while all the others work?

  • User receives group membership error to terminal server even though he has rights

    - by BlueToast
    (Screenshot: http://www.hlrse.net/Qwerty/TSLoginMembership.png)

        To log on to this remote computer, you must be granted the Allow log on through
        Terminal Services right. By default, members of the Remote Desktop Users group
        have this right. If you are not a member of the Remote Desktop Users group or
        another group that has this right, or if the Remote Desktop User group does not
        have this right, you must be granted this right manually.

    Only as of today did a particular user begin receiving this message for a second terminal server he uses; he had never had any problem authenticating to this server before. We have no restrictions on simultaneous or multiple logins. On each terminal server we have a group and security group like "_Users" in the local Builtin\Remote Desktop Users group. For this particular user, on this particular terminal server, we have locally given him Administrator, Remote Desktop Users, and Users membership; in AD we have given him DOMAIN\Administrator, Builtin\Remote Desktop Users, and DOMAIN\_Users. It still gives us that error message. We gave him membership to another (random) terminal server simply by making him a member of that server's DOMAIN\_Users group, and he was then able to log in to it. So, from scratch, we created an AD account 'dummy' with only Domain Users membership and tried to log in to this particular server: no success. After adding 'dummy' to the DOMAIN\_Users group, the login succeeded. Other users from this user's department are able to log in to this particular server just fine. We checked the Security logs on this particular server, and while everything else is logged, the one thing that is not logged is the failed login attempts from this particular user. We have tried rebooting the server, and the user still receives the error message.

  • Sendmail yields "user unknown" errors even after (wrongly) setting up a catch-all account

    - by user59240
    I was trying to follow the instructions found here to set up a catch-all account, but I still get the following message for mail sent to non-existent users:

        The error that the other server returned was:
        550 550 5.1.1 [email protected]... User unknown (state 14).

    Everything else works, though. /etc/mail/local-host-names and /etc/mail/virtusertable were set up as instructed. Any advice? Thanks!
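
    One step that is easy to miss with sendmail (a hedged sketch; the paths are the usual defaults, and the restart command varies by distro): virtusertable is consulted as a hashed database, not as the plain text file, so it must be rebuilt and sendmail reloaded after every edit:

        # rebuild the hashed map from the text file, then restart sendmail
        makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
        service sendmail restart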
