Search Results

Search found 181 results on 8 pages for 'acls'.


  • Setting per-directory umask using ACLs

    - by Yarin
     We want to mimic the behavior of a system-wide 002 umask on a certain directory foo, in order to ensure the following results: (1) all sub-directories created underneath foo will have 775 permissions; (2) all files created underneath foo and its subdirectories will have 664 permissions; and both 1 and 2 will happen for files/dirs created by all users, including root, and all daemons. Assuming that ACL support is enabled on our partition, this is the command we've come up with:

        setfacl -R -d -m mask:002 foo

     This seems to be working - I'm basically just looking for confirmation. Is this the most effective way to apply a per-directory umask with an ACL?
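
     A hedged aside on the command above: in POSIX ACLs the mask entry is a ceiling on the group-class permissions rather than an octal umask, so the 775/664 intent is usually expressed with explicit default (d:) entries instead. A minimal sketch, with a hypothetical path:

        # Default (d:) entries are what new files and sub-directories inherit; they can only be
        # set on directories, hence the find -type d. Adjust /srv/foo to the real tree.
        find /srv/foo -type d -exec setfacl -m d:u::rwx,d:g::rwx,d:o::rx {} +
        chmod g+s /srv/foo        # optional: new entries also inherit foo's group
        getfacl /srv/foo          # verify the default: lines took effect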

    Read the article

  • Best practice ACLs to prepare for auditors?

    - by Nic
     An auditor will be visiting our office soon, and they will require read-only access to our data. I have already created a domain user account and placed it into a group called "Auditors". We have a single fileserver (Windows Server 2008) with about ten shared folders. All of the shares are set up to allow full access to authenticated users, and access restrictions are implemented with NTFS ACLs. Most folders allow full access to the "Domain Users" group, but the auditor won't need to make any changes. It takes several hours to update NTFS ACLs since we have about one million files. Here are the options that I am currently considering: (1) create a "staff" group and assign it read/write instead of "Domain Users" at the share level; (2) create a "staff" group and assign it read/write instead of "Domain Users" at the NTFS level; (3) deny access to the "Auditors" group at the share level; (4) deny access to the "Auditors" group at the NTFS level; (5) accept the status quo and trust the auditor. I will probably need to configure similar users in the future, as some of our contractors require a domain account but shouldn't be able to modify our client data. Is there a best practice for this?
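
     As a hedged illustration only (the domain, group, and path names below are made up), the kinds of grants being weighed would look roughly like this with icacls:

        rem Leave the share permission at Authenticated Users / Full Control and let NTFS do the filtering.
        icacls "D:\Shares\ClientData" /grant "CONTOSO\Auditors":(OI)(CI)RX
        rem If a "Staff" group later replaces Domain Users for read/write access:
        icacls "D:\Shares\ClientData" /grant "CONTOSO\Staff":(OI)(CI)M
        icacls "D:\Shares\ClientData" /remove:g "CONTOSO\Domain Users"

     Whichever level is changed, inheritable ACEs still have to propagate down the tree, so the multi-hour update time applies either way.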

    Read the article

  • Drupal advanced ACLs for "untrusted" administrators

    - by redShadow
     I have a multi-site Drupal 6 installation containing websites of different customers. On each site there is an "administrator" role that consists mainly of the customer's account. We want to give as many permissions as possible to this privileged user, but doing so with just the Drupal core permissions management system could lead to security leaks. The main thing to avoid is the customer account being able to run PHP code on the server (that would be like being logged in on the server as the www-data user, which sounds really bad). To prevent that, it is not sufficient to deny PHP code evaluation for the role: since the administrator role must have permission to manage users, the customer could also change the password of user #1 and log in to the site as superadmin. The second goal is to also deny some "confusing" administrative pages (such as module selection) but not others (such as site information configuration, theme selection, etc.). I found the User One module, which seems to fix the first problem, but I have no idea how to solve the second one. I found some modules around, but none seems to fit; it seems like most ACLs are designed to protect the content rather than the site itself, as if the site administrator were always the server owner.

    Read the article

  • OS X: Finder error -36 when using SMB shares on a Samba server bound to AD

    - by Frenchie
    We're looking at deploying SMB homes on Debian (5.0.3) for our mac clients rather than purchasing four new Xserves. We've got our test servers built and functioning properly. Windows clients behave perfectly, but we've run into an issue with OS X (10.6.x and 10.5.x). We're going this route instead of Windows file servers due to a whole bunch of other issues that arise when going that way. Specifically, when mounting a SMB share with unix extensions switched on and the remote server bound to AD, the finder cannot save files on the share, instead touching the file and then bombing out with a -36 IO error, folder creation is fine. Copying files in the terminal behaves fine and the problem seems to be limited to the finder. The issue arises (I think) as the remote UID/GID is passed across when using unix extensions. OS X uses its own winbind idmap (odsam) to work out the effective UID/GID from AD users and groups whilst we're using a rid map on the server. Consequently, there is a mismatch in ownership which the finder chooses to honour. How OS X appears to handle this is to use the remote uid and gid at the file permission level (see below) and then set an OS X acl granting the local uid/gid to have the appropriate permissions on the file. I think the finder touches the file (which the kernel allows because of the ACL) and then checks the filesystem perms and drops out with the IO error. On a Client fc-003353-d:homes2 root# ls -led test/ drwx------+ 2 135978 100513 16384 Feb 3 15:14 test/ 0: user:jfrench allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit 1: group:ARTS\domain users allow 2: group:everyone allow 3: group:owner allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit 4: group:group allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit 5: group:everyone allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit We've tried the following without any luck: Setting the Linux side file owner to match the OS X GID/UID Adding ACLs on the linux filesystem which grant the OS X GID/UID perms Disabling extended attributes Setting steams=no in /etc/nsmb.conf on the client We're currently running a workaround which is to just turn off unix extensions which forces the macs to just mount the share as the local user with u=rwx perms. This works for most things but is causing a few apps that expect certain perms to break in subtle ways. Worst case scenario is that we'll continue running in this way but we would like to have the unix extensions on. Regards. Relevant SMB config below: [global] workgroup = ARTS realm = *snip* security = ADS password server = *snip* unix extensions = yes panic action = /usr/share/panic-action %d idmap backend = rid:ARTS=100000-10000000 idmap uid = 100000-10000000 idmap gid = 100000-10000000 winbind enum users = Yes winbind enum groups = Yes veto files = /lost+found/aquota.*/ hide files = /desktop.ini/$RECYCLE.BIN/.*/AppData/Library/ ea support = yes store dos attributes = yes map system = no map archive = no map readonly = no

    Read the article

  • script to find "deny" ACE in ACLs, and remove it

    - by Tom
     On my 100TB cluster, I need to find dirs and files that have a "deny" ACE within their ACL, then remove that ACE on each instance. I'm using the following:

        # find . -print0 | xargs -0 ls -led | grep deny -B4

     and get this output (partial, for example only):

        -r--rw---- 1 chris GroupOne 4096 Mar 6 18:12 ./directoryA/fileX.txt
         OWNER: user:chris
         GROUP: group:GroupOne
         0: user:chris allow file_gen_read,std_write_dac,file_write_attr
         1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
        -r--rwxrwx 1 chris GroupOne 14728221 Mar 6 18:12 ./directoryA/subdirA/fileZ.txt
         OWNER: user:chris
         GROUP: group:GroupOne
         0: user:chris allow file_gen_read,std_write_dac,file_write_attr
         1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
         OWNER: user:bob
         GROUP: group:GroupTwo
         0: user:bob allow dir_gen_read,dir_gen_write,dir_gen_execute,std_write_dac,delete_child,object_inherit,container_inherit
         1: group:GroupTwo allow std_read_dac,std_write_dac,std_synchronize,dir_read_attr,dir_write_attr,object_inherit,container_inherit
         2: group:GroupTwo deny list,add_file,add_subdir,dir_read_ext_attr,dir_write_ext_attr,traverse,delete_child,object_inherit,container_inherit
        --

     As you can see, depending on where the "deny" ACE is, I can see/not-see the path. I could increase the -B value (I've seen up to 8 ACEs on a file) but then I would get more output to distill from... What I need to do next is extract $ACENUMBER and $PATHTOFILE so that I can execute this command:

        chmod -a# $ACENUMBER $PATHTOFILE

     Additional issue is that the find command (above) gives a relative path, whereas I need the full path. I guess that would need to be edited somehow. Any guidance on how to accomplish this?
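
     A hedged sketch of one way to wire those pieces together (it assumes the ACE lines really do look like "1: user:x deny ..." as in the output above, and that this platform's chmod supports the chmod -a# form from the question):

        # Walk from an absolute path so the full path is available in $f.
        find "$PWD" -print0 | while IFS= read -r -d '' f; do
            # Print just the index of each deny ACE, then remove them highest-numbered
            # first so the remaining indices stay valid.
            ls -led "$f" 2>/dev/null | awk '/ deny /{print $1 + 0}' | sort -rn | while read -r ace; do
                chmod -a# "$ace" "$f"
            done
        done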

    Read the article

  • Reading Windows ACLs from Java

    - by Matt Sheppard
    From within a Java program, I want to be able to list out the Windows users and groups who have permission to read a given file. Obviously Java has no built-in ability to read the Windows ACL information out, so I'm looking for other solutions. Are there any third party libraries available which can provide direct access to the ACL information for a Windows file? Failing that, maybe running cacls and capturing and then processing the output would be a reasonable temporary solution - Is the output format of cacls thoroughly documented anywhere, and is it likely to change between versions of Windows?

    Read the article

  • How do I copy ACLs on Mac OS X?

    - by MagerValp
     Most unix derivatives can copy ACLs from one file to another with: getfacl filename1 | setfacl -f - filename2 Unfortunately Mac OS X does not have the getfacl and setfacl commands, as they have rolled ACL handling into chmod. chmod -E accepts a list of ACLs on stdin, but I haven't found a command that will spit out ACLs in a suitable format on stdout. The best I have come up with is: ls -led filename1 | tail +2 | sed 's/^ *[0-9][0-9]*: *//' | chmod -E filename2 Is there a more robust solution? Bonus question: is there a nice way to do it in Python, without using any modules that aren't shipped with 10.6?

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Ubuntu 12.04 - syslog showing "SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled"

    - by Tom G
     I have been seeing these random logs in syslog on our production system. There is no XFS setup. Fstab only shows local partitions, only EXT3. There is nothing in crontabs either. The only file system related package I have installed is 'nfs-kernel-server'. Kernel version is 3.2.0-31-generic.

        kernel: [601730.795990] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
        kernel: [601730.798710] SGI XFS Quota Management subsystem
        kernel: [601730.828493] JFS: nTxBlock = 8192, nTxLock = 65536
        kernel: [601730.897024] NTFS driver 2.1.30 [Flags: R/O MODULE].
        kernel: [601730.964412] QNX4 filesystem 0.2.3 registered.
        kernel: [601731.035679] Btrfs loaded
        os-prober: debug: running /usr/lib/os-probes/mounted/10freedos on mounted /dev/vda1
        10freedos: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/10qnx on mounted /dev/vda1
        10qnx: debug: /dev/vda1 is not a QNX4 partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20macosx on mounted /dev/vda1
        macosx-prober: debug: /dev/vda1 is not an HFS+ partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/20microsoft on mounted /dev/vda1
        20microsoft: debug: /dev/vda1 is not a MS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/vda1
        30utility: debug: /dev/vda1 is not a FAT partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/40lsb on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/70hurd on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/80minix on mounted /dev/vda1
        debug: running /usr/lib/os-probes/mounted/83haiku on mounted /dev/vda1
        83haiku: debug: /dev/vda1 is not a BeFS partition: exiting
        os-prober: debug: running /usr/lib/os-probes/mounted/90bsd-distro on mounted /dev/vda1
        83haikuos-prober: debug: running /usr/lib/os-probes/mounted/90linux-distro on mounted /dev/vda1
        os-prober: debug: running /usr/lib/os-probes/mounted/90solaris on mounted /dev/vda1
        os-prober: debug: /dev/vda2: is active swap

     Why would this randomly show up? This also spawns multiple "jfsCommit" processes.

    Read the article

  • Can you create ACLs with open vSwitch on XenServer 5.6FP1 without using the DVS appliance?

    - by bwizzy
     I have a pool of XenServer hosts running the free version of XenServer 5.6 FP1. If I change the network backend to Open vSwitch, can I specify ACLs on individual network VIFs without needing to use the DVS (distributed virtual switch) appliance, which requires an Advanced license or higher? Basically I'm looking for a way to isolate VMs on my network so that even a user with root access on the command line couldn't reach servers they shouldn't be able to (without using a VLAN).
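
     A hedged sketch of what per-VIF filtering can look like with the plain Open vSwitch tools on a host (the bridge name, port number, and subnet below are hypothetical, and flows added this way are outside XenServer's management, so they do not survive host reboots or VM migration):

        # Find the OpenFlow port number behind the VM's vif, then drop its traffic to a protected subnet.
        ovs-ofctl show xenbr0                    # lists ports, e.g. vif5.0 on port 7
        ovs-ofctl add-flow xenbr0 "priority=100,in_port=7,ip,nw_dst=10.0.5.0/24,actions=drop"
        ovs-ofctl dump-flows xenbr0              # verify the rule is installed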

    Read the article

  • What filesystem comes closest to matching NTFS for support of ACLs, and highly-granular permissioning?

    - by warren
     It seems that most other filesystems handle the basic *nix permissions (ugo±rwx), with maybe an addition here or there, or can be "made" to handle ACLs through the use of other tools on top of the system. On the Wikipedia pages about filesystems (http://en.wikipedia.org/wiki/List%5Fof%5Ffile%5Fsystems & http://en.wikipedia.org/wiki/Comparison%5Fof%5Ffile%5Fsystems), it appears that while some do support extended metadata, none natively supports the level of permissioning that NTFS does. Am I wrong in this understanding?

    Read the article

  • Is it safe to delete "Account Unknown" entries from Windows ACLs in a domain environment?

    - by Graeme Donaldson
    It's not uncommon to see entries in Windows ACLs (NTFS files/folders, registry, AD objects, etc.) with the name "Account Unknown (SID)". Obviously these are because of old AD users or groups which at some point had permissions manually configured on the relevant object and have since been deleted. Does anyone know if it is safe to remove these "Account Unknown" ACEs? My gut feeling is that it should be just fine, but I'm wondering if anyone has any past experiences where doing this has caused trouble? Normally I just ignore these, but the company I'm working at now seems to have an abnormal number of these, most likely due to past admins' inexperience with AD/Windows and assigning permissions to user accounts rather than groups in all sorts of weird places. FWIW, our environment is not complex, a single domain forest, 4 DCs in 3 sites, with all network connectivity and replication healthy, so I'm certain that these "Account Unknown" entries are really old accounts, and not just because of some failure to resolve the SID to a human-readable name.

    Read the article

  • haproxy: Is there a way to group acls for greater efficiency?

    - by user41356
     I have some logic in a frontend that routes to different backends based on both the host and the url. Logically it looks like this:

        if hdr(host) ends with 'a.domain.com':
            if url starts with '/dir1/':
                use backend domain.com/dir1/
            elif url starts with '/dir2/':
                use backend domain.com/dir2/
            # ... else if ladder repeats on different dirs
        elif hdr(host) ends with 'b.domain.com':
            # another else if ladder exactly the same as above
            # ...
        # ... else if ladder repeats like this on different domains

     Is there a way to group acls to avoid having to repeatedly check the domain acl? Obviously there needs to be a use backend statement for each possibility, but I don't want to have to check the domain over and over because it's very inefficient. In other words, I want to avoid this:

        use backend domain.com/url1/ if acl-domain.com and acl-url1
        use backend domain.com/url2/ if acl-domain.com and acl-url2
        use backend domain.com/url3/ if acl-domain.com and acl-url3
        # tons more possibilities below

     because it has to keep checking acl-domain.com. This is particularly an issue because I have specific rules for subdomains such as a.domain.com and b.domain.com, but I want to fall back on the most common case of *.domain.com. That means every single rule that uses a specific subdomain must be checked prior to *.domain.com which makes it even more inefficient for the common case.
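
     A hedged sketch of an alternative that avoids per-rule host checks entirely on newer HAProxy (1.5 and later, where use_backend accepts an expression): the base fetch is the Host header concatenated with the request path, and map_beg does prefix lookups in an external file, so one line can replace the whole ladder. The file name and backend names here are hypothetical:

        # haproxy.cfg (frontend section)
        use_backend %[base,map_beg(/etc/haproxy/backends.map,be_default)]

        # /etc/haproxy/backends.map
        a.domain.com/dir1/   be_a_dir1
        a.domain.com/dir2/   be_a_dir2
        b.domain.com/dir1/   be_b_dir1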

    Read the article

  • Do I have to do anything for file ACLs to work both before and after I reformat a computer?

    - by Zian Choy
    Let's say that I have a computer running Windows Vista with 2 users: Alice and Bob. Alice is the admin and Bob is a normal user. They each have files in their respective My Documents folders and Bob is not allowed to view Alice's files and Alice has to jump through a UAC elevation to view Bob's. If Alice copies all the files on the computer to an external NTFS-formatted hard drive with the following 2 commands: robocopy "E:\Bob's Files" "C:\Users\Bob\My Documents" /MIR robocopy "E:\Alice's Files" "C:\Users\Alice\My Documents" /MIR And then reformats the hard drive, installs a fresh copy of Windows, and creates 2 users named Alice and Bob on the computer, then will everything in the first paragraph be true after Alice copies the files back onto the internal hard drive? Assume that when the files are copied back over, she logs in as Bob and then copies Bob's files and likewise with her own files. Possibly relevant: Alice and Bob also have passwords on their user accounts and they create new passwords after the computer is reformatted. The main post has been tweaked slightly to make the question clearer. Answers that predate April 2011 are referring to an earlier version of this post.
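
     One detail worth a hedged note alongside those commands: by default robocopy copies data, attributes, and timestamps (/COPY:DAT) but not the security descriptor, so the NTFS ACLs only travel if /SEC or /COPYALL is requested. A sketch of the outbound copy, written source-first as robocopy expects:

        rem /SEC is shorthand for /COPY:DATS (data, attributes, timestamps plus the NTFS ACL).
        robocopy "C:\Users\Bob\My Documents" "E:\Bob's Files" /MIR /SEC
        rem /COPYALL (= /COPY:DATSOU) would also carry owner and auditing information.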

    Read the article

  • How to whitelist external access to an internal webserver via Cisco ACLs?

    - by Josh
     This is our company's internet gateway router. This is what I want to accomplish on our Cisco 2691 router: (1) all employees need to be able to have unrestricted access to the internet (I've blocked Facebook with an ACL, but other than that, full access); (2) there is an internal webserver that should be accessible from any internal IP address, but only a select few external IP addresses - basically, I want to whitelist access from outside the network. I don't have a hardware firewall appliance. Until now, the webserver has not needed to be accessible externally... or in any case, the occasional VPN has sufficed when needed. As such, the following config has been sufficient:

        access-list 106 deny ip 66.220.144.0 0.0.7.255 any
        access-list 106 deny ip ... (so on for the Facebook blocking)
        access-list 106 permit ip any any
        !
        interface FastEthernet0/0
         ip address x.x.x.x 255.255.255.248
         ip access-group 106 in
         ip nat outside

     fa0/0 is the interface with the public IP. However, when I add...

        ip nat inside source static tcp 192.168.0.52 80 x.x.x.x 80 extendable

     ...in order to forward web traffic to the webserver, that just opens it up entirely. That much makes sense to me. This is where I get stumped though. If I add a line to the ACL to explicitly permit (whitelist) an IP range... something like this:

        access-list 106 permit tcp x.x.x.x 0.0.255.255 192.168.0.52 0.0.0.0 eq 80

     ...how do I then block other external access to the webserver while still maintaining unrestricted internet access for internal employees? I tried removing the access-list 106 permit ip any any. That ended up being a very short-lived config :) Would something like access-list 106 permit ip 192.168.0.0 0.0.0.255 any on an "outside-inbound" work?
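
     A hedged sketch of one common way to lay this out (the addresses below are RFC 5737 documentation placeholders, not real ones). Because ACL 106 is evaluated inbound on the outside interface before the static NAT translation is undone, the entries need to match the public address rather than 192.168.0.52; the specific permits go first, then a deny for everyone else hitting port 80, and the blanket permit stays at the end so employee return traffic keeps flowing:

        ! 203.0.113.0/24 = whitelisted external range, 198.51.100.10 = the router's public IP (placeholders)
        access-list 106 permit tcp 203.0.113.0 0.0.0.255 host 198.51.100.10 eq 80
        access-list 106 deny   tcp any host 198.51.100.10 eq 80
        access-list 106 deny   ip 66.220.144.0 0.0.7.255 any
        ! ... remaining Facebook denies ...
        access-list 106 permit ip any any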

    Read the article

  • Is it possible to do DNS-based ACLs on a Cisco ASA?

    - by pickles
    Short of using static IP addresses, is it possible to have a Cisco ASA use a DNS name rather than an IP address? For instance, if I want to limit a host in the DMZ to access only one particular web service, but that web service might be globally load balanced or using DynDNS or cloud, how can the ACL be expressed so that a fixed IP address isn't used and the admin doesn't have to keep opening and closing down IP addresses?
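
     A hedged sketch of how this is typically expressed on ASA software that supports FQDN network objects (roughly 8.4(2) and later); the names and addresses are placeholders, and the ASA only knows about whatever addresses its own DNS lookups return, so a very dynamic or geo-balanced name can still drift out from under it:

        dns domain-lookup outside
        dns server-group DefaultDNS
         name-server 203.0.113.53
        !
        object network obj-webservice
         fqdn api.example.com
        !
        access-list DMZ_IN extended permit tcp host 10.10.10.5 object obj-webservice eq 443
        access-group DMZ_IN in interface dmz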

    Read the article

  • OSX Reset Home Permissions and ACLs - how long should it take?

    - by andyface
     Having screwed up my permissions by trying to have two users access one home folder, I'm now going through the process of resetting my main user's home permissions via the OS X install disk utilities, which seems to be taking a while. Does this process normally take a long time? I assume it depends on how many files are in the folder, which in my case is a good few hundred GB. At what point should I be concerned that it may have got stuck, and so reset my computer and try again? I assume that if the little circle indicator is still moving then it's not completely frozen, but as there's no progress bar or details I'm not sure how true that is.

    Read the article

  • How can I change ACLs recursively using cacls.exe?

    - by maaartinus
     I want to restrict access to everything inside the work directory to me and the system only. I tried this with the following command:

        cacls.exe work /t /p 'PIXLA09\Maaartin:f' 'NT AUTHORITY\SYSTEM':f

     However, it doesn't work at all. The following command should show only the two specified users, but instead shows a very long list of permissions:

        cacls.exe work/somedirectory

     I tried to use /g instead of /p, too. Since I didn't use /e, the permissions shouldn't get edited but replaced. Any ideas what's wrong?
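
     A hedged aside: if that command is typed at a plain cmd.exe prompt, the single quotes are passed to cacls literally (cmd only treats double quotes as quoting), which by itself can produce surprising results. On Vista and later, icacls is the usual replacement for cacls; a sketch using the same account names:

        rem 1) Put exactly two inheritable full-control grants on the top-level folder:
        icacls work /inheritance:r /grant:r "PIXLA09\Maaartin:(OI)(CI)F" "NT AUTHORITY\SYSTEM:(OI)(CI)F"
        rem 2) Reset everything underneath so it simply inherits from "work":
        icacls work\* /reset /T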

    Read the article

  • Is it logical that file system acls would be corrupted in a way that adds permission for another user?

    - by wilbbe01
     I was having issues on a shared hosting provider with the host's web server instance not serving some files. I asked the company's support about the issue; they responded with the results of getfacl on my home directory and added the necessary line to allow their web server to obtain the necessary permissions. All is working happily now, but I noticed a line in the getfacl output for what appeared to be another username to which I had no relation. I asked them about this and their response was that it was likely some minor corruption and that I could remove the unwanted line with the setfacl -x option. I know I never added that user to my home directory, and I also find it weird that this could truly happen due to corruption. So now that it is fixed, I'm a little bit wary of whether they were trying to cover up a problem where they accidentally gave someone permissions to my account, or whether this kind of thing can really be corrupted in that way, especially when that user is a real user on the same server. Any thoughts? Thanks.

    Read the article

  • Access Control Lists for Roles

    - by Kyle Hatlestad
     Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3.  Well, there was actually an additional security option that was included in that release but not fully supported yet (only for Fusion Applications).  It's the ability to define Roles as ACLs to entities (documents and folders).  Now in PS5, this security option is fully supported.  The benefit of defining Roles for ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc) and thus the WebCenter Content administrator does not need to define them like they do with ACL Groups (Aliases).  So it's a bit of the best of both worlds.  Users are managed through the LDAP repository and are automatically granted/denied access through their group memberships, which are mapped to Roles in WCC.  A different way to think about it is being able to add multiple Accounts to content items...which I often get asked about.  Because LDAP groups can map to Accounts, there has always been this association between the LDAP groups and access to the entity in WCC.  But that mapping had to define the specific level of access (RWDA) and you could only apply one Account per content item or folder.  With Roles for ACLs, it basically takes away both of those restrictions by allowing users to define more than one Role and define the level of access on-the-fly. To turn on ACLs for Roles, there is a component to enable.  On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top.  In the list of Disabled Components, enable the RoleEntityACL component. Then restart.  This is assuming the other configuration settings have been made for the other ACLs in the earlier post.  Once enabled, a new metadata field called xClbraRoleList will be created.  If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. For Users and Groups, these values are automatically picked up from the corresponding database tables.  In the case of Roles, there is an explicitly defined list of choices that are made available.  These values must match the roles that are coming from the enterprise security repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager.  On the Views tab, edit the values for the ExternalRolesView.  By default, 'guest' and 'authenticated' are added.  Once added, you can assign the roles to your content or folder. If you are a user that can both access the Security Group for that item and you belong to that particular Role, you now have access to that item.  If you don't belong to that Role, you won't! [Extra] Because the selection mechanism for the list is using a type-ahead field, users may not even know the possible choices to start typing.  To help them, one thing you can add to the form is a placeholder field which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) and view to know what to type.  By being a placeholder field, it won't need to be added to the custom metadata database table or search engine.

    Read the article

  • SQL not yielding expected results

    - by AnonJr
     I have three tables related to this particular query:

        Lawson_Employees:   LawsonID (pk), LastName, FirstName, AccCode (numeric)
        Lawson_DeptInfo:    AccCode (pk), AccCode2 (don't ask, HR set up), DisplayName
        tblExpirationDates: EmpID (pk), ACLS (date), EP (date), CPR (date), CPR_Imported (date), PALS (date), Note

     The goal is to get the data I need to report on all those who have already expired in one or more certification, or are going to expire in the next 90 days. Some important notes: This is being run as part of a vbScript, so the 90-day date is being calculated when the script is run; I'm using 2010-08-31 as a placeholder since it's the result at the time this question is being posted. All cards expire at the end of the month (which is why the above date is for the end of August and not 90 days on the dot). A valid EP card supersedes ACLS certification, but only the latter is required of some employees (wasn't going to worry about it until I got this question answered, but if I can get the help I'll take it). The CPR column contains the expiration date for the last class they took with us (NULL if they didn't take any classes with us). The CPR_Imported column contains the expiration date for the last class they took somewhere else (NULL if they didn't take it elsewhere, and bravo for following policy). The distinction between CPR classes is important for other reports. For purposes of this report, all we really care about is which one is the most current - or at least is currently current. If I have to, I'll ignore ACLS and PALS for the time being as it is non-compliance with CPR training that is the big issue at the moment (not that the others won't be, but they weren't mentioned in the last meeting...). Here's the query I have so far, which is giving me good data:

        SELECT iEmp.LawsonID, iEmp.LastName, iEmp.FirstName, dept.AccCode2, dept.DisplayName,
               Exp.ACLS, Exp.EP, Exp.CPR, Exp.CPR_Imported, Exp.PALS, Exp.Note
        FROM (Lawson_Employees AS iEmp
              LEFT JOIN Lawson_DeptInfo AS dept ON dept.AccCode = iEmp.AccCode)
             LEFT JOIN tblExpirationDates AS Exp ON iEmp.LawsonID = Exp.EmpID
        WHERE iEmp.CurrentEmp = 1
          AND ((Exp.ACLS <= #2010-08-31# AND Exp.ACLS IS NOT NULL)
               OR (Exp.CPR <= #2010-08-31# AND Exp.CPR_Imported <= #2010-08-31#)
               OR (Exp.PALS <= #2010-08-31# AND Exp.PALS IS NOT NULL))
        ORDER BY dept.AccCode2, iEmp.LastName, iEmp.FirstName;

     After perusing the result set, I think I'm missing some expiration dates that should be in the result set. Am I missing something? This is the sucky part of being the only developer in the department... no one to ask for a little help.
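
     A hedged observation on the query above: in the CPR branch, (Exp.CPR <= #2010-08-31# AND Exp.CPR_Imported <= #2010-08-31#) silently drops any row where either column is NULL, because a comparison against NULL is never true - so an employee whose only CPR record is expired never appears. If the intent is to treat a missing date as "not current", one way to phrase that branch (a sketch only; adjust to the actual compliance policy):

        -- keep the row when the most recent of the two CPR dates is expired;
        -- the last condition excludes people with no CPR record at all (drop it if they should be flagged too)
        OR (    (Exp.CPR IS NULL          OR Exp.CPR <= #2010-08-31#)
            AND (Exp.CPR_Imported IS NULL OR Exp.CPR_Imported <= #2010-08-31#)
            AND NOT (Exp.CPR IS NULL AND Exp.CPR_Imported IS NULL))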

    Read the article
