Search Results

Search found 40282 results on 1612 pages for 'status access denied'.


  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen, I've been struggling with a trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely the File System Archiving option, as I'm working to reduce the amount of live data on my servers. Backups work A-OK, and I have followed the instructions in the Admin Manual to make sure I had fulfilled all requirements. The account BE runs under is a member of the local Administrators group, as required, and has been added to the test share that I'm using to evaluate the archiving function. Testing the credentials in the job setup window always works fine, and I am able to add both regular and Admin$ shares to my archive selection. However, every time I run the archive job, I get the following message: https://dl.dropbox.com/u/59540229/BEXec.png I've already tried to troubleshoot DNS resolution issues as suggested in the Symantec KB, to no avail. The only thing I can think of at this point is that a trial license doesn't allow me to use the archiving function, although that would seem silly on their part. I appreciate any assistance or information. Thanks.

    Read the article

  • System Centre Essentials 2007 Clients Losing Status

    - by David Collie
    Clients that have been reporting their state fine suddenly start reporting "Windows 0.0" for their OS and "Not yet contacted" for their Last Contacted date, even though these values were previously present and correct. Any idea why this might be happening? I've spent a long time playing with wuauclt etc., and just when I think I've fixed the problem, it comes back again! Thanks.

    Read the article

  • Limited access to Amazon S3 buckets

    - by Tomas Markauskas
    Is it possible to somehow limit access to an Amazon S3 account? I don't really like the idea of distributing my secret access key to all of my applications that want to access just a single bucket on my account. If someone gained access to one of the applications, I could lose all my data stored on S3. One way I was thinking of doing it would be to create a second S3 account and give it access to just one bucket of the main account, but that's not really a great solution. Another nice thing for me would be to give the secondary account only write (but not modify/delete) and read access. That way I could upload backups or other files and be sure that they won't get lost.
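    For the record, this is the problem AWS IAM was built for: an IAM user gets its own access keys, scoped by policy to a single bucket, so the account's root secret key never has to be distributed. A hedged sketch of such a policy (the bucket name is hypothetical); leaving out s3:DeleteObject gives read-and-write-but-not-delete, though note that s3:PutObject still allows overwriting an existing key, so truly tamper-proof backups would also need bucket versioning:

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": ["s3:ListBucket"],
              "Resource": "arn:aws:s3:::example-backup-bucket"
            },
            {
              "Effect": "Allow",
              "Action": ["s3:GetObject", "s3:PutObject"],
              "Resource": "arn:aws:s3:::example-backup-bucket/*"
            }
          ]
        }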

    Read the article

  • initial Class design: access modifiers and no-arg constructors

    - by yas
    Context: student working through class design in a personal/side project for the summer. I've never written anything implemented by others or had to maintain code. I'm trying to maximize encapsulation and imagining what would make code easy to maintain.

    Concept: tight/loose class design, where "tight" and "loose" refer to access modifiers and constructors.

    Tight: initially, everything, including setters, is private, and a no-arg constructor is not provided (only a full constructor).
    Loose: not tight.
    Exceptions: the obvious, like toString.

    Reasoning: if code, at the very beginning, is tight, then it should be guaranteed that changes, with respect to access/creation, never damage existing implementations. The loosening of code happens incrementally and must be thought through, justified, and safe (validated).

    Benefit: existing implementing code should not break if changes are made later.
    Cost: takes more time to create.

    Since this is my own thinking, I hope to get feedback as to whether I should push to work this way. Good idea or bad idea?
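    As a concrete illustration, here is what the "tight" starting point could look like, sketched in C# (the idea is language-agnostic; the class and fields are invented for the example):

        // "Tight" by default: no setters, no no-arg constructor;
        // state is assigned exactly once, through the full constructor.
        public class Course
        {
            private readonly string _title;
            private readonly int _credits;

            public Course(string title, int credits)
            {
                _title = title;
                _credits = credits;
            }

            // The obvious exceptions stay public.
            public override string ToString() => $"{_title} ({_credits} credits)";
        }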

    Read the article

  • Sharing samba-folder with root access

    - by Industrial
    Hi everyone, I have a staging server in my network running Ubuntu Server 10.10, which is my main development area. As I need to access the files in the Apache root from other computers in the network, I have set up Samba with the following settings:

        [www]
        comment = Apache root www
        path = /var/www
        writable = yes
        force user = root
        force group = root

    On the host computer, running Ubuntu 10.10 desktop, I am trying to mount the share with a bash file looking like the one below:

        #!/bin/bash
        sudo mount -t cifs //192.168.1.5/www /media/www/ -o username=myusername,password=mypassword,rw,iocharset=utf8,file_mode=0777,dir_mode=0777

    What happens is that I get

        mount error(13): Permission denied
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    thrown in my face whilst trying to execute the mount. I've done exactly the same, with exactly the same smb.conf and mount script, on another computer in my network, but this one just won't work. What am I doing wrong? I am running out of ideas.
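    One thing worth ruling out, purely as an assumption since the real password isn't shown: a password containing a comma or another character special to the shell or to mount gets mangled inside the -o string. A credentials file sidesteps that entirely:

        # /root/.smbcredentials -- hypothetical path; protect it with chmod 600
        username=myusername
        password=mypassword

        #!/bin/bash
        sudo mount -t cifs //192.168.1.5/www /media/www/ \
            -o credentials=/root/.smbcredentials,rw,iocharset=utf8,file_mode=0777,dir_mode=0777

    If it still fails, the Samba log on the server (/var/log/samba/log.smbd) usually says which stage of the session setup was refused.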

    Read the article

  • Creating self-signed SSL certificate - Access denied?

    - by Shaul
    I'm trying to create a Self-Signed Certificate in IIS 7 (Win7 Ultimate x64), and getting the following error: I found this question on SF, which says I should set permissions on the C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys folder to allow rights - but that's also not working. Firstly, note that "Everyone" has "Full Control" rights: And when I try to delete and recreate rights, look what comes up: I am logged in as a user with admin privileges, and I've even tried running Explorer with Admin rights... nothing seems to help. What do I do to get this right?

    Read the article

  • Update in Certification Exam Score Report Access Process!

    - by Richard Lefebvre
    Please note that exam results for all Oracle Certification exams will be accessed through CertView starting October 30th, 2012. Exam results will no longer be available at the test center or on the Pearson VUE website. Candidates will receive an email from Oracle within 30 minutes of completing the exam to let them know that their exam results are available on CertView. Candidates must have an Oracle Web Account to access CertView. This new process applies to exam results for all Oracle Certification exams - proctored and non-proctored, as well as beta exams. CertView, Oracle's self-service certification portal, will be the partners’ one-stop source for all their certification and exam history! Other benefits of this change include: driving all candidates to have an Oracle Web Account, which will lead to tighter integration with Oracle University records in the future; increased security around data privacy; and a higher validity rate for candidate email addresses. Existing benefits of CertView include self-service access to exam and certification records and logos, and access to Oracle's self-service certification verification.

    Accessing Exam Results

    Returning CertView Users
    - Click the link in the email sent by Oracle or go to certview.oracle.com
    - Select the See My New Exam Results Now link to view exam results
    - Select the Print My New Exam Results Now link to print exam results

    New CertView Users - Who Have An Oracle Web Account
    - First-time users must authenticate their CertView account
    - Account authentication requires the Oracle Testing ID and email address from your Pearson VUE profile
    - Click the link in the email sent by Oracle or go to certview.oracle.com and follow the Authenticate My CertView Account link

    New CertView Users - Who Do Not Have An Oracle Web Account
    - CertView users are required to have an Oracle Web Account
    - To create an Oracle Web Account, go to certview.oracle.com and select the Create My Oracle Web Account Now link, then follow the remaining instructions under "I do not have an Oracle Web Account" on that page

    Read the article

  • Interface to collect successful remote backups status

    - by Aseques
    I would like to deploy into our infrastructure a web interface that registers when backups have finished, and when, for some reason, they haven't. The current situation: we do on-site backups for customers, and for each backup a mail is sent at the end; the problem is that sometimes the mail isn't sent, for a variety of reasons: the system doesn't have internet access, the backup system crashed before sending the mail, etc. What I'd like is a web interface that the backup software can visit after doing the backup (whether it succeeded or failed) to acknowledge that the backup has finished; after some time, I'd like to receive a report of the machines that haven't done their backup. Is there anything remotely similar to this that I could use/adapt to our environment? UPDATE: I just found this (paessler.com), which seems to be a proprietary version of what I intended.
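    What's being described is essentially a "dead man's switch" for backups: every job checks in on completion, and the server reports the hosts that missed their expected window. The client side can be as small as one line at the end of each backup script; the endpoint below is hypothetical:

        # report the outcome to the collector (use status=fail in the error path)
        curl -fsS "http://backup-status.internal/checkin?host=$(hostname)&status=ok"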

    Read the article

  • Sudo Non-Password access to /sys/power/state

    - by John
    On my computer, pm-hibernate appears to be broken; however, using the command echo disk > /sys/power/state works perfectly. Now I just need regular-user access to it, using sudo. How do I do this? The command sudo echo disk > /sys/power/state simply returns bash: /sys/power/state: Permission denied. Also, I need this in a regularly used script; how can I make it work without having to type in my password?
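    The failure has a simple cause: the redirection > /sys/power/state is performed by your unprivileged shell before sudo even starts, so only echo runs as root while the write itself does not. Two standard workarounds:

        # run the whole command line in a root shell
        sudo sh -c 'echo disk > /sys/power/state'

        # or let tee, which does run under sudo, perform the write
        echo disk | sudo tee /sys/power/state

    For password-less use in a script, a sudoers entry (edited with visudo) can whitelist just that one command; the username below is a placeholder:

        john ALL=(root) NOPASSWD: /usr/bin/tee /sys/power/state

    The script would then call echo disk | sudo tee /sys/power/state without prompting.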

    Read the article

  • Postfix sasl: Relay access Denied (state 14)

    - by Primoz
    I have Postfix installed with Dovecot. There are no problems when I'm trying to send e-mails from my server; however, all e-mails that are coming in are rejected. My main.cf file:

        queue_directory = /var/spool/postfix
        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        mail_owner = postfix
        inet_interfaces = all
        mydestination = localhost, $mydomain, /etc/postfix/domains/domains
        virtual_maps = hash:/etc/postfix/domains/addresses
        unknown_local_recipient_reject_code = 550
        mynetworks = 127.0.0.0/8
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        home_mailbox = Maildir/
        debug_peer_level = 2
        debugger_command =
            PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
            xxgdb $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_recipient_restrictions =
            check_policy_service inet:127.0.0.1:9999,
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_non_fqdn_recipient,
            reject_unknown_recipient_domain,
            reject_unauth_destination
        smtpd_sender_restriction = reject_non_fqdn_sender
        broken_sasl_auth_clients = yes

    UPDATE: Now, when e-mail comes to the server, the server tries to reroute the mail. Example: if the message was sent to [email protected], my server changes that to [email protected] and then the mail bounces because there's no such domain on my server.
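    For inbound mail, "Relay access denied" means the recipient domain matched neither mydestination nor the virtual maps, so Postfix treated the message as an open-relay attempt. Assuming the maps are hash files built with postmap, a first check could be:

        # what does Postfix currently believe?
        postconf -n | grep -E 'mydestination|virtual'
        # is the recipient actually in the virtual map? (address is hypothetical)
        postmap -q someone@example.com hash:/etc/postfix/domains/addresses

    Also note that virtual_maps is the legacy pre-2.0 parameter name; on this Postfix (2.3.3) the address rewriting described in the update is normally governed by virtual_alias_maps/virtual_alias_domains, so a stray entry there is worth checking too.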

    Read the article

  • Restricting access to sites

    - by Paul
    I'm having some problems configuring my local proxy server so that it restricts access to certain websites. The proxy server I'm using is Squid; I edited its configuration file, found in /etc/squid/squid.conf, to include the following:

        acl wikipedia dstdomain .wikipedia.org
        http_access deny wikipedia

    I tried to redirect Elinks to use Squid. According to Squid's config file, it listens on port 3128, so in /etc/elinks/elinks.conf I added the following:

        set protocol.http.proxy.host = "localhost:3128"

    I also restarted Squid with sudo /etc/init.d/squid restart, but I can still access the banned websites using Elinks. What did I do wrong?
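    One likely culprit, assuming the added lines were appended rather than placed near the top of squid.conf: Squid evaluates http_access rules top-down and stops at the first match, so the deny must appear before any allow rule that also matches the request (such as the stock http_access allow localhost). A sketch of the intended order:

        acl wikipedia dstdomain .wikipedia.org
        http_access deny wikipedia       # must precede the matching allow
        http_access allow localhost

    After editing, sudo squid -k reconfigure (or the restart already used) reloads the rules; tailing /var/log/squid/access.log while browsing also confirms whether Elinks is really going through the proxy at all.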

    Read the article

  • Access Control Service V2 and Facebook Integration

    - by Your DisplayName here!
    I haven’t been blogging about ACS2 in the past because it was not released and I was kinda busy with other stuff. Needless to say I spent quite some time with ACS2 already (both in customer situations as well as in the classroom and at conferences). ACS2 rocks! It’s IMHO the most interesting and useful (and most unique) part of the whole Azure offering! For my talk at VSLive yesterday, I played a little with the Facebook integration. See Steve’s post on the general setup. One claim that you get back from Facebook is an access token. This token can be used to directly talk to Facebook and query additional properties about the user. Which properties you have access to depends on which authorization your Facebook app requests. You can specify this in the identity provider registration page for Facebook in ACS2. In my example I added access to the home town property of the user. Once you have the access token from ACS you can use e.g. the Facebook SDK from Codeplex (also available via NuGet) to talk to the Facebook API. In my sample I used the WIF ClaimsAuthenticationManager to add the additional home town claim. This is not necessarily how you would do it in a “real” app. Depends ;) The code looks like this (sample code!):

        public class ClaimsTransformer : ClaimsAuthenticationManager
        {
            public override IClaimsPrincipal Authenticate(
                string resourceName, IClaimsPrincipal incomingPrincipal)
            {
                if (!incomingPrincipal.Identity.IsAuthenticated)
                {
                    return base.Authenticate(resourceName, incomingPrincipal);
                }

                string accessToken;
                if (incomingPrincipal.TryGetClaimValue(
                    "http://www.facebook.com/claims/AccessToken", out accessToken))
                {
                    try
                    {
                        var home = GetFacebookHometown(accessToken);
                        if (!string.IsNullOrWhiteSpace(home))
                        {
                            incomingPrincipal.Identities[0].Claims.Add(
                                new Claim("http://www.facebook.com/claims/HomeTown", home));
                        }
                    }
                    catch { }
                }

                return incomingPrincipal;
            }

            private string GetFacebookHometown(string token)
            {
                var client = new FacebookClient(token);

                dynamic parameters = new ExpandoObject();
                parameters.fields = "hometown";

                dynamic result = client.Get("me", parameters);
                return result.hometown.name;
            }
        }

    Read the article

  • Writing a Data Access Layer (DAL) for SQL Server

    In this tip, I am going to show you how you can create a Data Access Layer (to store, retrieve and manage data in a relational database) in ADO.NET. I will show how you can make it data provider independent, so that you don't have to re-write your data access layer if the data storage source changes, and also so you can reuse it in other applications that you develop.
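    The provider-independence described here typically rests on the abstract factory in System.Data.Common: code against DbConnection/DbCommand and let a factory, chosen by a configuration string, produce the concrete types. A minimal sketch of the idea (not the article's actual code; names are illustrative), targeting classic .NET Framework ADO.NET:

        using System.Data;
        using System.Data.Common;

        public class DataAccessLayer
        {
            private readonly DbProviderFactory _factory;
            private readonly string _connectionString;

            // providerName comes from config, e.g. "System.Data.SqlClient";
            // swapping providers then requires no code changes here.
            public DataAccessLayer(string providerName, string connectionString)
            {
                _factory = DbProviderFactories.GetFactory(providerName);
                _connectionString = connectionString;
            }

            public DataTable ExecuteQuery(string commandText)
            {
                using (var connection = _factory.CreateConnection())
                using (var command = _factory.CreateCommand())
                using (var adapter = _factory.CreateDataAdapter())
                {
                    connection.ConnectionString = _connectionString;
                    command.Connection = connection;
                    command.CommandText = commandText;
                    adapter.SelectCommand = command;

                    var table = new DataTable();
                    adapter.Fill(table); // Fill opens and closes the connection itself
                    return table;
                }
            }
        }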

    Read the article

  • Simple step by step process to import MS Access data into SQL Server using SSIS

    Sometimes we need to import information from MS Access. We could use the Microsoft SQL Server Migration Assistant, but sometimes we need to add custom transformations and it is necessary to use more sophisticated tools. In this tip, we are going to walk through, step by step, how to migrate an MS Access table to SQL Server using SQL Server Integration Services (SSIS).

    Read the article

  • How to access localhost remotely - Wordpress?

    - by Marcappuccino
    I have installed a LAMP stack (sudo tasksel install lamp-server) and WordPress (sudo apt-get install wordpress), but now I would like to access my server remotely, to use it as a home fileserver. For example, my public IP is 82.16.xxx.xxx. Opening it in Firefox with the suffix :8080 brings up my router's config page. Do I have to set up port forwarding? By the way, accessing via localhost/wordpress/ works fine; I would just like to access my files while away from home.

    Read the article

  • Get WebDav uploading progress and status (Linux, davfs2)

    - by Hnatt
    I am using WebDAV on a Linux box with davfs2 1.4.6. When I copy a file to a mounted WebDAV service, it goes rather fast, just like a regular local drive operation. And it actually is one, because the file is first copied to the ~/.davfs2/cache directory. But how do I know that the upload is finished, and where do I see its current progress? Is there a way to know that an upload failed due to lack of space or file size restrictions?
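    One hedged data point about how davfs2 behaves: writes complete into the local cache and the real upload happens asynchronously, but unmounting blocks until the cache has been flushed to the server, which gives at least a coarse "done" signal (the mount point below is hypothetical):

        # umount returns only after davfs2 has pushed cached writes to the server
        sudo umount /mnt/webdav && echo "upload finished"

    Upload errors (quota, size limits) are reported by mount.davfs via syslog rather than to the copying process.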

    Read the article

  • No other users can access external hdd since upgrade to 12.10

    - by Victor9098
    Since upgrading to Ubuntu 12.10, no other user can access the external hdd. This is awkward, as it's a family PC and we use the hdd to store music and save backups across several accounts. The external hdd now seems to mount just for my account, i.e. under /media/[user1]/[ext hdd], and while all the other users can see the drive mounted, they cannot access it - they just receive a file location error. From their perspective it is mounted just in my profile and not in theirs. I have tried editing the properties of the hdd to allow others to view and create files on it, but that has not changed anything. I have also read that this is new behaviour in Ubuntu 12.10: drives are mounted via /media/[user]/. So is there a way to have it mount for all the other user accounts too? Thanks!
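    Background that makes the behaviour less mysterious: starting with 12.10, udisks mounts removable media under /media/$USER, and that per-user directory is not traversable by other accounts. One workaround for a drive that stays attached to the family PC is a static entry in /etc/fstab, which mounts it at a location every account can reach; the UUID is a placeholder and this sketch assumes an NTFS-formatted drive:

        # /etc/fstab -- find the real UUID with: sudo blkid
        UUID=XXXX-XXXX  /media/family  ntfs-3g  defaults,umask=000  0  0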

    Read the article

  • Delivery Status Notification (Relay) in Exchange Server 2007 with original email attachment

    - by Nick Kavadias
    I have recently set up Exchange Server 2007. The server is smarthosting outgoing messages. Users have 'request delivery receipt' on by default for their 'auditing' purposes in Outlook. They would like the original email attached to the delivery notification, as was the case in Exchange Server 2003, and I need this same functionality in 2007. The question has been asked here, here and here, but I cannot find a valid solution. Here's some information about the functionality in Exchange 2003. The question is: can I replicate this functionality in 2007? Here is what a 2007 delivery message looks like: I know it's possible to customize DSNs. Can I make a custom DSN for this type of message and have the original included as an attachment? Has anyone got any other ideas?

    Read the article

  • Can't access some websites using Ubuntu 13.10

    - by Adame Doe
    Something's wrong with Ubuntu. Since I've upgraded to 13.10, I can't access some websites for no apparent reason. I've tried everything imaginable to solve this problem:

    - Made sure that MTUs are the same,
    - Disabled IPv6 in both the network manager and the browsers used,
    - Deactivated my network keys,
    - DMZed my computer,
    - Used other DNS servers, like Google and OpenDNS,
    - Checked that no firewall was running on my computer ...

    And it's the same result. I even tried to reinstall Ubuntu a couple of times, but no luck. The most annoying thing about it is that I can't access wordpress.org! So there's no way it could be an ISP restriction of some kind. When I use a VPN, I can access pretty much anything. I'm really frustrated because I have to use wordpress.org very often. Any clue?

    ifconfig:

        adame@adame-ws:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:26:18:3d:b0:7c
                  inet addr:10.42.0.1  Bcast:10.42.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::226:18ff:fe3d:b07c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:8024 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:7966 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:684480 (684.4 KB)  TX bytes:616608 (616.6 KB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:8222 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8222 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:568269 (568.2 KB)  TX bytes:568269 (568.2 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:19:70:40:85:eb
                  inet addr:192.168.2.3  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::219:70ff:fe40:85eb/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1464  Metric:1
                  RX packets:123705 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:98141 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:94963545 (94.9 MB)  TX bytes:10387470 (10.3 MB)

    /etc/hosts:

        127.0.0.1   localhost
        127.0.1.1   adame-ws
        ::1         ip6-localhost ip6-loopback
        fe00::0     ip6-localnet
        ff00::0     ip6-mcastprefix
        ff02::1     ip6-allnodes
        ff02::2     ip6-allrouters

    tracepath wordpress.org:

        1:  adame-ws.local    0.092ms pmtu 1500
        1:  192.168.2.1       1.300ms asymm 2
        1:  192.168.2.1       1.060ms asymm 2
        2:  no reply
        3:  no reply
        4:  no reply
        5:  no reply
        6:  no reply
        7:  no reply
        8:  no reply
        ... keeps on going like that

    ping wordpress.org:

        adame@adame-ws:~$ ping wordpress.org
        PING wordpress.org (66.155.40.250) 56(84) bytes of data.
        --- wordpress.org ping statistics ---
        10 packets transmitted, 0 received, 100% packet loss, time 9071ms

    Read the article

  • Windows server 2008 - Access Based Enumeration (ABE) not working correctly

    - by Napster100
    I have a folder shared with permissions such that only one user account, the admin account and the admin group have access to it. But when I open the shared area from a second user account which does not have access, the folder is still visible to that account, despite ABE being enabled on it, on all parent directories/folders and even on the drive itself. The user can't access the shared folder (which is what I want), but I'd like the folder to also be invisible to that user, just to make things look cleaner and so there's no confusion between what they can access and what they cannot. How would I stop the folder appearing for users who don't have permission to use it? Thanks in advance. EDIT: I've just added the second user account to the permissions list but denied it access, so that the account definitely has no permissions to access it in any way, but that's still not hiding the folder.

    Read the article
