Search Results

Search found 49554 results on 1983 pages for 'database users'.

Page 703/1983

  • The noexec option enabled in fstab is not being applied to a limited user. Is it a bug?

    - by user170918
    The noexec option enabled in fstab is not being applied to a limited user. Is it a bug? cat /etc/fstab:
        # / was on /dev/sda2 during installation
        UUID=fd7e2645-3cc4-4c6c-8b1b-016711c2fd07 /              ext4 errors=remount-ro 0 1
        # /boot was on /dev/sda1 during installation
        UUID=f3e58f86-8999-4678-a5ec-0a4b621c6e37 /boot          ext4 defaults 0 2
        # /home was on /dev/sda9 during installation
        UUID=bcbc1c4d-46a9-4b2a-bb0a-6fe1bdeaed22 /home          ext4 defaults,nodev,nosuid 0 2
        # /tmp was on /dev/sda5 during installation
        UUID=8538eecc-bd16-40fe-ad66-7d7b9287839e /tmp           ext4 defaults,noexec,nosuid,nodev 0 2
        # /var was on /dev/sda6 during installation
        UUID=292696cf-fc15-40ab-9cd8-cee9bff7e165 /var           ext4 defaults,nosuid,nodev 0 2
        # /var/log was on /dev/sda7 during installation
        UUID=fab1f85b-ae09-4ce0-b169-c01205eb8f9c /var/log       ext4 defaults,noexec,nosuid,nodev 0 2
        # /var/log/audit was on /dev/sda8 during installation
        UUID=602f5003-4ac0-49e9-99d3-b29378ce9430 /var/log/audit ext4 defaults,noexec,nosuid,nodev 0 2
        # swap was on /dev/sda3 during installation
        UUID=a538d35b-b2e9-47f2-b72d-5dbbcf0afca0 none           swap sw 0 0
        /dev/sdb1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0
        /dev/sdc1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0
        /dev/sdd1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0
    Sudo users are not able to paste executable files from /bin into the file systems that have the noexec option set, but limited users are able to paste the same files into those file systems. Why is that?
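
    As a side note, noexec only blocks execution; it does not prevent anyone with write permission from copying files onto the mount. One way to check what is actually in force (rather than what fstab says) is to inspect the live mount options and attempt an execution as each user; a rough sketch, assuming /tmp is the mount being tested:

        # show the options the kernel is actually using for this mount
        findmnt -no OPTIONS /tmp
        # copy an executable in and try to run it -- on a noexec mount the run should fail with "Permission denied"
        cp /bin/ls /tmp/ls-test && /tmp/ls-test
        # repeat as the limited user
        sudo -u limiteduser cp /bin/ls /tmp/ls-test2 && sudo -u limiteduser /tmp/ls-test2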

    Read the article

  • Multitenant Self-Service Provisioning Application

    - by Ulrike Schwinn (DBA Community)
    Oracle Multitenant is a new database architecture that has been available since Oracle Database version 12c. Consolidation, faster and simplified patching, and provisioning and cloning of environments are just some of the advantages that can result from this new concept. The idea behind it is that multiple databases not only share resources such as memory and background processes, but can also be administered together ("many as one"). Oracle Multitenant is of course also an ideal foundation for "Database as a Service", because it allows a new database to be created in a very short time. That is why a simple, intuitive web interface for Multitenant Self-Service Provisioning has been available for download on OTN since September. What is behind it? How does the installation work? What are the possible uses? These questions and further information are the subject of the current tip. More information is available here.
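
    As an illustration of how quickly a database can be provisioned under this architecture, a pluggable database can be created or cloned with a couple of statements; the container, PDB names, password and file paths below are examples only, not part of the self-service application itself:

        # rough provisioning sketch, run on the container database host
        sqlplus / as sysdba <<'EOF'
        CREATE PLUGGABLE DATABASE salespdb ADMIN USER pdbadmin IDENTIFIED BY "Secret_123"
          FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/', '/u01/oradata/CDB1/salespdb/');
        ALTER PLUGGABLE DATABASE salespdb OPEN;
        -- cloning an existing PDB is just as short (in 12c Release 1 the source must be open read only)
        CREATE PLUGGABLE DATABASE devpdb FROM salespdb
          FILE_NAME_CONVERT = ('/u01/oradata/CDB1/salespdb/', '/u01/oradata/CDB1/devpdb/');
        ALTER PLUGGABLE DATABASE devpdb OPEN;
        EOF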

    Read the article

  • Can I see if and when a file was deleted on Windows Server 2003?

    - by user316687
    On Windows Server 2003, is there a way to see if and when a file was deleted? It's a web server with IIS; our web application lets users upload Word documents to the server. However, we found that one Word file is missing, and we would like to know whether it was deleted or never existed (i.e. the web app couldn't upload it). EDIT: I tried to follow this: enable auditing on the folder you want to keep track of. Right-click on the folder, go to "Sharing and Security", then the "Security" tab, and at the bottom click "Advanced". Select the "Auditing" tab, click "Add", select the group or users to track, then pick which actions you want to track. To track file deletion you would enable:
        Create files / Write data: Success/Fail
        Create folders / Append data: Success/Fail
        Delete subfolders and files: Success/Fail
        Delete: Success/Fail
    This only applies from now on; past actions can't be tracked, can they?

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!
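
    As a usage sketch, browsing the broken-out history is ordinary git work; the clone URL below is a placeholder, since the actual repository path is the one published on oss.oracle.com/git:

        # placeholder URL -- check oss.oracle.com/git (gitweb) for the real repository path
        git clone git://oss.oracle.com/git/redpatch.git
        cd redpatch
        git log --oneline          # one commit per Red Hat fix
        git show <some-commit-id>  # the full broken-out patch for a single fix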

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large, and ever expanding, number of backends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is, the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite because links on the proxied page appear as absolute to the domain, i.e. http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since the rules need to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?
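
    For reference, the static (hard-coded) form of the mod_proxy_html idea would look roughly like the sketch below; the hostnames and prefix are taken from the example above, the config path is a placeholder, and needing one such block per backend is exactly the part that isn't dynamic:

        # static sketch only -- assumes mod_proxy, mod_proxy_http and mod_proxy_html are loaded
        sudo tee /etc/apache2/conf.d/app_one_proxy.conf >/dev/null <<'EOF'
        <Location /app_one/>
            ProxyPass        http://www.remote_host_running_app_one.com/
            ProxyPassReverse http://www.remote_host_running_app_one.com/
            ProxyHTMLEnable  On
            # rewrite root-relative links in returned HTML: /controller/action -> /app_one/controller/action
            ProxyHTMLURLMap  / /app_one/
        </Location>
        EOF

    An alternative that avoids rewriting HTML at the proxy is to tell each backend Rails app its public prefix (e.g. via relative_url_root or the RAILS_RELATIVE_URL_ROOT environment variable, depending on the Rails version) so the URL helpers emit /app_one/... paths themselves.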

    Read the article

  • Structuring Access Control In Hierarchical Object Graph

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders, so I may have a structure like this:
        Folder 1
            Folder 2
                Folder 3
                    Folder 4
    I have to decide how to implement Moderation for this entity. I've come up with two options:
    Option 1: When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check and see if User 1 is the moderator of any parent folders. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.
    Option 2: When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created, and if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new Entity. But when I need to show only the top-level Folders that a user is Moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating.
    Thoughts: I think it comes down to determining if users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.

    Read the article

  • Protecting a webpage with an authentication form

    - by Luke
    I have created an employee webpage with a lot of company info, links, etc., but I want to protect the page because it contains some confidential company information. I am running IIS7.5 on Windows Server 2008 R2, and I already have the site setup as a normal, non-protected site. I want all active directory users to have access to the site. This is not an intranet site, it is exposed to the internet. I tried setting it up using Windows Authentication, but I had problems with multiple login prompts, etc. I just want a simple form for users to enter their credentials and have access to the site, and I need it to query the AD for login. I've searched the web for a guide on this, but I can't seem to find one that fits my situation. This is not a Web App. It is just a simple html site. Does anyone have any suggestions or a link to a guide on this? Thanks so much! -LB
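
    One common way to get this on IIS is ASP.NET forms authentication backed by the ActiveDirectoryMembershipProvider, which presents a login form and validates the credentials against AD. The sketch below is an assumption-laden web.config fragment, not a drop-in config: the domain controller, base DN and login page name are placeholders, and a purely static HTML site would additionally need the ASP.NET pipeline handling all requests for the rules to apply:

        <configuration>
          <connectionStrings>
            <!-- placeholder: point at a domain controller and your domain's base DN -->
            <add name="ADService" connectionString="LDAP://dc.example.com/DC=example,DC=com" />
          </connectionStrings>
          <system.web>
            <authentication mode="Forms">
              <forms loginUrl="Login.aspx" timeout="30" />
            </authentication>
            <membership defaultProvider="ADProvider">
              <providers>
                <add name="ADProvider"
                     type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                     connectionStringName="ADService"
                     attributeMapUsername="sAMAccountName" />
              </providers>
            </membership>
            <!-- deny anonymous users; any authenticated AD user gets through -->
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </configuration>

    A Login.aspx page using the standard ASP.NET Login control would complete the picture; this also avoids the repeated browser prompts that Windows (integrated) authentication tends to cause over the internet.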

    Read the article

  • Best CMS for review-type sites

    - by Pru
    Is there an ideal CMS for making a review site? By review site, I mean like a restaurant review site where you have each entry belonging to different major categories like Cuisine and City. Then users can browse and filter by each or by combination (Chinese Food in Los Angeles, with suggestions of other Chinese restaurants in LA, etc). Furthermore, I'd want it to support other fields like price, parking, kid-friendliness, etc. And to have users be able to filter by those criteria. I've been told that with a combination of custom taxonomies, plug-ins and many clever little queries, that Wordpress 3.x can handle this. But I'm having a heck of a time with it getting into the nitty gritty, and that's where I find the community support is lacking. The sort of stuff you'd think would work in WP, like making one parent category for Cuisine and one for City, don't really work once you get further in and start trying to pull it all together. Then you find these blog posts where people say, "This example shows that one could create a huge movie review site using custom taxonomies..." but when you go and try it you hit all sorts of challenges and oddities that point a big long finger at Wordpress being in fact a blogging platform. The best I came up with was one category for the cuisine and one tag for the city, then I created a couple of custom tag-like taxonomies for the other features. It's quite a mess to try to figure out how to assemble all of that into a natural, intuitive site. I expect a few versions down the road WP will be able to do these sorts of sites out of the box. So I thought I'd take a step back before I run back into the Wordpress fray and find out if maybe there is another platform better suited to this sort of relational content site. Directory scripts in some ways offer many of the features I'm looking for, but I need something more flexible and, hopefully, interactive (comments, reviews). I'm especially looking for feedback from people who've crafted sites like this. Thanks!

    Read the article

  • How do I know if I need a layer 3 switch?

    - by eekmeter
    We currently have a flat network with a bunch of unmanaged switches. I would like to use VLANs to segregate certain users, like guests, and I would like to use 802.1x. However, I'm not sure whether what I need is a layer 3 or a layer 2 switch. From what I understand, a layer 3 switch does routing between VLANs. I don't think I need this at the moment, but as I said I'm not sure since this is all new to me. What else would a layer 3 switch do for me? Our network is relatively small, fewer than 100 users. What exactly does a layer 3 switch do that I can't get with a layer 2 switch? When would I need a layer 3?

    Read the article

  • 'ACT On' Middleware Consolidation and Innovation Program Launch Webcast Thursday June 5, 2014 - 10am BST / 11am CET / 12pm EET

    - by Cinzia Mascanzoni
    We are launching a Demand Generation Program under the Oracle 'ACT On' brand to enable you to help your partners and customers benefit from one integrated Middleware stack that better addresses today's new IT challenges. We will work with you to drive demand for your partners to deploy consolidated Middleware environments with one integrated Red Stack, from Database to Middleware solutions, running on engineered systems like Oracle Database Appliance or Exalogic. The opportunity for VADs is to:
        Build on the strength of FMW, which has a significant share of the total Oracle revenue in EMEA
        Sell more FMW licenses to existing customers
        Add Systems to deals to grow the value
    Join us on June 5th, 10am BST / 11am CET / 12pm EET. For details on how to join, click here.

    Read the article

  • How to restrict deletion of a folder on NTFS share, but still allow modify access within folder

    - by thinkdreams
    I am setting up a set of scan folders for a scanning copier device, and would like to know the best way to protect the folders (one per department) from being moved or deleted, while still allowing the users to modify (i.e. create/add/delete) the scanned files within the folder. The structure is:
        Share Name
            Departmental Folder
                User files
    The writing of the files is initially taken care of by a service account which has full control. We'd just like to ensure the users cannot accidentally delete the folder containing all the files (which has already happened). This is for a Windows 2003 server, with NTFS permissions. Suggestions would be most appreciated.

    Read the article

  • Exchange 07 to 07 mailbox migration using local continuous replication

    - by tacos_tacos_tacos
    I have an existing Exchange server ex0 and a fresh Exchange server ex1, both 2007 SP3. The servers are in different sites, so users cannot access mailboxes on ex1 since, from my understanding, a standalone CAS is required for this. I am thinking of doing the following: Enable local continuous replication of the storage group on ex0 to a mapped drive that points to the corresponding storage group folder on ex1. At some point when the replication is done (small number of users and volume of mail), say late at night on the weekend, disable CAS on ex0 (or otherwise redirect requests on the server side from ex0 to ex1) AND change the public DNS name of the CAS so that it points to ex1. Will my plan work? If not, please explain what I can do to fix it.

    Read the article

  • Is there a proven concept to website reverse certificate authentication?

    - by Tom
    We're looking at exposing some of our internal application data externally via a website. The actual details of the website aren't that interesting; it'll be built using ASP.NET/IIS etc., which might be relevant. With this, I'm essentially looking for a mechanism to authenticate users viewing my website. This sounds trivial; a username/password is typically fine, but I want more. Now I've read plenty about SSL/X.509 to realise that the CA determines that we're alright, and that the user can trust us. But I want to trust the user; I want the user to be rejected if they don't have the correct credentials. I've seen a system for online banking whereby the bank issues a certificate which gets installed on the user's computer (it was actually smartcard based). If the website can't discover/utilise the key pair then you are immediately rejected! This is brutal, but necessary. Is there a mechanism where I can do the following:
        Generate a certificate for a user
        Issue the certificate for them to install; it can be installed on 1 machine
        If their certificate is not accessible, they are denied all access
        A standard username/password scheme is then used after that
        SSL employed using their certificate once they're "in"
    This really must already exist, please point me in the right direction! Thanks for your help :)
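
    What's described here is TLS client-certificate authentication: the server is configured to require a certificate issued by your own CA, so a browser without the matching key pair is rejected during the handshake, before any page is served. A minimal sketch of issuing a per-user certificate (it assumes an internal CA already exists as ca.crt/ca.key; file names and the subject are placeholders):

        # generate a key and a certificate signing request for the user
        openssl genrsa -out user1.key 2048
        openssl req -new -key user1.key -out user1.csr -subj "/CN=user1"
        # sign it with the internal CA, valid for one year
        openssl x509 -req -in user1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out user1.crt -days 365
        # bundle key + certificate for installation on the user's machine (or a smartcard)
        openssl pkcs12 -export -in user1.crt -inkey user1.key -certfile ca.crt -out user1.p12

    On the IIS side, setting SSL to require (not merely accept) client certificates and trusting only your internal CA gives the "rejected if the key pair isn't there" behaviour, and the normal username/password form can then sit behind it.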

    Read the article

  • I can connect to Samba server but cannot access shares.

    - by jlego
    I'm having trouble getting Samba sharing working so that the shares can actually be accessed. I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to be able to share files with a Windows 7 PC and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool on Fedora, added users to Fedora and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the ethernet interface to a trusted port in the firewall.
    Both the Windows and Mac machines can connect to the server and view the shares, however when trying to access the shares, Windows throws error 0x80070035: "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error: "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares; however when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured. smb.conf:
        [global]
                workgroup = workgroup
                server string = Server
                log file = /var/log/samba/log.%m
                max log size = 50
                security = user
                load printers = yes
                cups options = raw
                printcap name = lpstat
                printing = cups
        [homes]
                comment = Home Directories
                browseable = no
                writable = yes
        [printers]
                comment = All Printers
                path = /var/spool/samba
                browseable = yes
                printable = yes
        [FileServ]
                comment = FileShare
                path = /media/FileServ
                read only = no
                browseable = yes
                valid users = user1, user2
        [webdev]
                comment = Web development
                path = /var/www/html/webdev
                read only = no
                browseable = yes
                valid users = user1
    How do I get Samba sharing working? UPDATE: I figured it out; it was because I was sharing a second hard drive. See the checked answer below.
    Speculation 1: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares of course), and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs though). Could either one of these be affecting Samba?
    Speculation 2: As the directory I am trying to share is actually an entire internal disk, I have tried these things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only included one of the folders on the disk instead of the entire disk, with my user again as the owner. Both tests failed, giving me the same errors regarding the network address.
    Speculation 3: Whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem.
    Speculation 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However when I try to share any data on the second disk, I am receiving the same error. This I believe is the problem. I think I need to change some things in fstab or fdisk or something.
    Speculation 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however I cannot see any of the files in the directory on the Windows machine. (They are there, I can see them in the Fedora file browser locally.)
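
    Since the root cause turned out to be sharing a second hard drive, the usual missing piece on Fedora is a persistent SELinux label on that mount point. A rough sketch, assuming the disk is mounted at /media/FileServ as in the smb.conf above; note the (/.*)? pattern so the files inside get labelled too, not just the directory:

        # give the mount point (and everything under it) the type Samba is allowed to serve
        sudo semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
        sudo restorecon -Rv /media/FileServ
        # or, more bluntly, allow Samba to export any file read/write (a much broader switch)
        sudo setsebool -P samba_export_all_rw on
        # sanity-check the share definitions
        testparm -s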

    Read the article

  • Restoring exchange 2003 from a backup

    - by user64204
    Hi all, I'm restoring an Exchange server from a backup: [1] the backup was created on 19/12/2010 [2] the server kept running until 20/12/2010 [3] we're restoring the server today 21/12/2010 with the backup from [1] My understanding is that when the server comes back: [4] whatever is in users' inbox since [1] will be deleted. [5] whatever is in users' sent box since [2] should be re-sent. [6] As a safety measure we've moved all emails sent/received between [1] and [3] to .PST files. Questions: -are [4] & [5] statements correct? -is there any way to move back emails from the PST file [6] to the current inbox/sent folders so that Exchange takes these emails into account (instead of deleting them)? -what happens to the Calendar items that were added after [1], is there any way to back those up as well if needed? Many thanks

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:
        User opens page to view the web camera image.
        Javascript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
        An FTP connection is enabled for the camera's FTP user.
        Web camera opens an FTP connection to the server.
        Web camera begins taking photos.
        Web camera sends each photo to the FTP server.
        On image URL request: the server reads the latest image on the hard drive uploaded via FTP for that camera, and deletes any older images from the server.
    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that instead of having the files read from the server's disk, the web server would open an FTP connection and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could post the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack. The web camera issues POST requests of the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:
        User opens page to view the web camera image.
        Javascript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
        The camera is sent an HTTP request to start posting images to a provided URL.
        Web camera begins taking photos.
        Web camera sends POST requests to the server as fast as it can.
        On image URL request: the server reads the latest image from Redis and tells Redis to delete the older image.
    My questions are:
        Are there any greater overheads to transferring images via HTTP instead of FTP?
        Is there a simple way to calculate how many cameras we could potentially have streaming at once?
        Is there any way to prevent potentially DoS'ing our own servers due to web camera requests?
        Is Redis a good solution to this problem?
        Should I abandon the PHP/Nginx combination and go for something else?
        Is this proposed solution actually any good?
        Will adding HTTPS to the mix cause posting the image to become too slow?
    Thanks in advance, Alan
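
    A quick way to prototype the "single latest frame per camera in Redis" idea from the shell before committing to it (the endpoint, key name and camera id below are made up for illustration):

        # roughly what the camera firmware would do: POST the newest frame
        curl -s -X POST --data-binary @frame.jpg "http://example.com/upload.php?camera=cam42"
        # roughly what the upload handler would do: overwrite a single per-camera key
        redis-cli -x SET camera:cam42:latest < frame.jpg
        # optionally let the key expire so stale frames disappear if a camera goes offline
        redis-cli EXPIRE camera:cam42:latest 10
        # roughly what the viewer endpoint would do
        redis-cli GET camera:cam42:latest > latest.jpg

    Overwriting one key per camera means nothing needs to be deleted explicitly, and memory use stays proportional to the number of cameras rather than the number of frames.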

    Read the article

  • How to sync (or at least view) public / team / shared calendar to Blackberry using BES?

    - by 3rdparty
    Trying to allow 3 people to view and ideally sync (create/edit) common (team) calendar events via BlackBerry and hosted Exchange 2007 BES. My understanding is that BES does not support wirelessly syncing anything other than a user's primary calendar. From what I've researched, the only supported workflow is for a user to create an event in the public calendar in Outlook and then invite team members individually as optional attendees, so the event displays in their calendars (and on their BlackBerrys). I've seen some 3rd party utilities that claim to support syncing of public folders/calendars:
        Add2Outlook: http://www.diditbetter.com/add2outlook.aspx
        WICKSoft: http://www.wicksoft.com/contacts_calendars.htm (needs to be installed on the local Exchange server)
    I've also been told I can sync public/other calendars using Desktop Manager, but I need to avoid any tethered sync with this environment. Am I missing an easier workflow here? There must be tens of thousands of BES users that require the ability to view/share a public, shared or team calendar on their BlackBerry. How can I solve this?

    Read the article

  • WiFi problems on several Ubuntu installations

    - by Rickyfresh
    Okay, this is the first time I have ever had to ask a question, as usually the Ubuntu community has answered everything already. On this occasion, though, there are many people asking for the answer and not one good solution has become available so far, so someone please help, or I will have to install Windows on my son's and my girlfriend's PCs, and that would be a disaster as I am trying to help convince people to move from Windows. I installed 12.04 on three computers on the same day:
        Dell Inspiron (works perfectly)
        Toshiba Satellite
        Home-built desktop
    The Dell works perfectly, but the other two either keep losing connection to the wireless internet or, even when they are connected, they stop connecting to web sites; for some reason they search Google fine but will not connect to web sites when a link is clicked. So far people have recommended in other forums:
        Removing network manager and installing wicd (didn't solve it)
        Changing the MTU in the wireless settings (didn't solve it)
        All sorts of messing about with Firefox settings (this doesn't solve it, and even if it did it would leave most average PC users scratching their heads and wishing they had stuck to Windows)
    The problem exists on two very different machines with different wireless cards, so I doubt it's a driver or hardware issue; also, many other Ubuntu users are having the same problem with a vast array of different machines and wireless cards. Can someone please give a good solution to this, as it's going to turn a lot of people away from Ubuntu if they cannot get this sorted. I would give some PC specs, but the two machines are vastly different, and the other people complaining of this problem also have very different systems, all showing the same problem.

    Read the article

  • Amis Oracle OpenWorld Summary presentations

    - by JuergenKress
    AMIS Oracle OpenWorld 2013 Review Part 1 - Intro Overview Innovation, Hardware osvm, IAAS database. Watch the presentations here:
        AMIS Oracle OpenWorld 2013 Review Part 1 - Intro Overview Innovation, Hardware osvm, IAAS database
        AMIS Oracle OpenWorld 2013 Review Part 2 - Platform Middleware Publication
        AMIS Oracle OpenWorld 2013 Review Part 3 - Fusion Middleware
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • An international mobile app - Should I set up EC2 instances in multiple regions?

    - by ashiina
    I am currently trying to launch a mobile app for users around the world. It is not a spectacular launch which will get millions of users in weeks - just another individual developer releasing an app. I know enough about the techniques of managing timezones, internationalizing strings, and what not (the application layer). But I cannot find any information on how I should manage my EC2 instances... Should I be setting up EC2 instances in different regions around the world? Is that a must-do, or is it overkill? I'm aware that it's the ideal solution in terms of performance, but it becomes very tough managing servers in multiple regions: DB issues, AMI management, etc. I'd much rather NOT do so. So I would like to know the general best practice when launching an international app/website. Note: for static content, I know it's better to use a CDN, so I'm planning on doing so.

    Read the article

  • From the Coalface - 2 - Work as hard as you can to be as lazy as you can

    - by TATWORTH
    On one project, the standard build involved building the database from scratch, producing a build log that was about 100,000 lines long. Manually paging through this log took hours and it was very easy to miss any errors. The solution was to write a filter that read the log file and output a summary file, with each output line prepended with its line number in the original file. Successive iterations eliminated more noise lines. Diagnostic displays required escaping. The final output was just a few hundred lines long and the errors were easy to spot. The filter program took less time to write than one manual pass reading the log file, and the utility digested the log file in about 1 second. Now database build errors could be identified quickly instead of after hours of manual examination or lengthy system testing. A few minutes' consideration of the task led to a tremendous improvement in quality.
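
    The same idea can be sketched with standard tools; assuming the interesting lines are flagged with words like ERROR, ORA- or FATAL (the patterns and file names here are illustrative), something this small already yields a numbered summary that points back into the full log:

        # grep -n prepends the original line number to every matching line
        grep -n -E 'ERROR|ORA-|FATAL' build.log > build_summary.txt
        wc -l build_summary.txt   # a few hundred lines instead of 100,000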

    Read the article
