Search Results

Search found 582 results on 24 pages for 'integrity'.


  • Is 'Old-School' the Wrong Way to Describe Reliable Security?

    - by rickramsey
    source The Hotel Toronto apparently knows how to secure its environment. "Built directly into the bedrock in 1913, the vault features an incredible 4-foot thick steel door that weighs 40 tonnes, yet can nonetheless be moved with a single finger. During construction, the gargantuan door was hauled up Yonge Street from the harbour by a team of 18 horses." 1913. Those were the days. Sysadmins had to be strong as bulls and willing to shovel horse manure. At least nowadays you don't have to be that strong. And, if you happen to be trying to secure your Oracle Linux environment, you may be able to avoid the shoveling as well, provided you know the tricks of the trade contained in these two recently published articles.

    Tips for Hardening an Oracle Linux Server: general strategies for hardening an Oracle Linux server. Oracle Linux comes "secure by default," but the actions you take when deploying the server can increase or decrease its security. How to minimize active services, lock down network services, and many other tips. By Ginny Henningsen, James Morris and Lenz Grimmer.

    Tips for Securing an Oracle Linux Environment: system logging with logwatch and process accounting with psacct can help detect intrusion attempts and determine whether a system has been compromised. So can using the RPM package manager to verify the integrity of installed software. These and other tools are described in this second article, which takes a wider perspective and gives you tips for securing your entire Oracle Linux environment. Also by the crack team of Ginny Henningsen, James Morris and Lenz Grimmer. - Rick
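    The RPM integrity check mentioned above takes one command to try. A minimal sketch (the package name is just an example; any installed package works):

        # Verify one package's files against the RPM database:
        # size, digest, permissions, owner, group, mtime, etc.
        rpm -V openssh-server

        # Verify every installed package (slower, but thorough)
        rpm -Va

    In the output, flags such as "S", "5" and "T" mark files whose size, digest or modification time no longer match the package metadata, which can indicate local modification or tampering.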

    Read the article

  • Why Do Both 8GB USB Flash Drives Have Different Integrities?

    - by Boris_yo
    My USB 3.0 SuperTalent Express DUO 8GB recently had its partition corrupted and declared itself "write-protected", and I was told in chat by @sidran32 that this usually means the flash drive has gone bad because its write-cycle limit was reached. Having used this thumb drive only infrequently for just over a year, I had my doubts and contacted SuperTalent's support. I was given a recovery tool, which failed on its first run and prompted me to reinsert the drive. After that, I formatted the drive with the Windows 7 built-in format utility (the recovery tool offered to do this as well), which succeeded. The problem I've noticed is with the integrity figures reported for the SuperTalent (first screenshot) compared with those for a SanDisk Cruzer Micro 8GB (second screenshot). Am I missing something? Both thumb drives are 8GB and use the same FAT32 file system.

    Read the article

  • LobsterPot Solutions in the USA

    - by Rob Farley
    We’re expanding! I’m thrilled to announce that Microsoft Gold Partner LobsterPot Solutions has opened another branch, appointing the amazing Ted Krueger (five-time SQL MVP awardee) as the US lead. Ted is well-known in the SQL Server world, having written books on indexing, consulting and being a DBA (not to mention contributing chapters to both MVP Deep Dives books). He is an expert on replication and high availability, and strong in the Business Intelligence space – experience that is both broad and deep. Ted is based in the south-east corner of Wisconsin, just north of Chicago. He has been a consultant for eons and has helped many clients with their projects and problems, taking the role of both technical lead and consulting lead. He is also tireless in supporting and developing the SQL Server community, presenting at conferences across America and helping people through his blog, Twitter and more. Despite all this – it’s neither his technical excellence with SQL Server nor his consulting skill that made me want him to lead LobsterPot’s US venture. I wanted Ted because of his values. In the time I’ve known Ted, I’ve found his integrity to be excellent, and found him to be morally beyond reproach. This is the biggest priority I have when finding people to represent the LobsterPot brand. I have no qualms in recommending Ted’s character or work ethic. It’s not just my opinion – all my trusted friends who know Ted agree. So last week, LobsterPot Solutions LLC was formed in the United States, and in a couple of weeks we will be open for business! LobsterPot Solutions can be contacted via email at [email protected], on the web at either www.lobsterpot.com.au or www.lobsterpotsolutions.com, and on Twitter as @lobsterpot_au and @lobsterpot_us. Ted Krueger blogs at LessThanDot and can also be found on Twitter and LinkedIn. This post is cross-posted from http://lobsterpotsolutions.com/lobsterpot-solutions-in-the-usa

    Read the article

  • Receiving Event ID: 10107, Hyper-V -VMMS

    - by Stargaten
    We are using physical (pass-through) disks on two of our guest operating systems. Is this a known issue? Do we need DPM 2010? "One or more physical disks are attached to virtual machine 'Myserver'. Backup programs that use the Hyper-V VSS writer cannot back up volumes that are attached to virtual machines as physical disks. To avoid potential data loss, use another method to back up the data on the physical disks. If you restore the data on this virtual machine, make sure to check the data of the physical disk for integrity. (Virtual machine ID 8EF3C0CB-967D-4D67-B4D8-7B782C7AC07C)"

    Read the article

  • Partner Spotlight

    - by rituchhibber
    FADATA
    Fadata (www.fadata.bg) officially became a WebCenter Content Specialized partner in the Adriatic region upon successful completion of the corresponding Oracle specialization tests. ''Being recognized by Oracle and customers will greatly help our company in maintaining a high level of implementation services related to Oracle WebCenter Content, one of the strategic products we are focused on. This certification, of which our team is very proud, will certainly help FADATA gain additional advantage, competitiveness, and integrity in implementing Oracle WebCenter Content solutions, both on current and future projects in the region,'' according to Velimir Corovic and Marjan Nikolic from Fadata.

    FISHBOWL SOLUTIONS
    Google Search Appliance Connector for Oracle WebCenter Content: the Google Search Appliance (GSA) provides fast search for your intranet or website. Fishbowl Solutions provides a connector that integrates the GSA with Oracle WebCenter Content Server while retaining the security benefits of Oracle WebCenter Content. For more information and a real customer example, see the Fairview Health Services case study or the webinar recording.

    SIGNUM
    Signum TTE, a Turkish company and Gold member of Oracle® PartnerNetwork (OPN), recently announced it has achieved OPN Specialized status for Oracle WebCenter Portal. Signum TTE, which began operations in the IT sector in 2005, is an innovative software solution house focusing on two main areas: services and solutions for Oracle Middleware products and technologies, and its own product, "WinDesk Service Management".

    Read the article

  • Mexico leading in Business Transformation Strategies

    - by [email protected]
    By John Burke, Group Vice President, Oracle Applications Business Unit (posted April 15, 2010, 8:31 AM). I recently completed a business tour in Mexico and was surprised by both the economic vibrancy of the country and the thought leadership expressed by many of the customers I met. An example of the economic vibrancy: across the street from my hotel were the local Bentley dealership, a Coach store, Yves Saint Laurent and, of course, a Starbucks. I only made it to Starbucks. Both the Coach store and YSL had a line of folks waiting to get in... As for thought leadership, there were several illustrations on the first day alone. I had the opportunity to meet with a branch of the Mexican Federal Government. Their questions were not about clerical task automation, far from it! We discussed citizens' online access to fees and services, for example looking up the duty on an international goods shipment, tracking that my taxes have been received, or checking the status of my request for a certain service. Eligibility, policies and status. They wanted an integrated rules or policy automation system that would allow businesses and citizens to access accurate information and would ensure the proper collection of fees and payment for third-party provided services. Then in the afternoon I met with the owner of a roofing company (note: most roofs in Mexico are flat and made of cement). This CEO discussed how he wanted to transform his business from a cement products company into a service company and market 5-, 10- and 15-year service contracts which would guarantee the structural integrity of the roof and, of course, that the roof would remain waterproof. Although his products were guaranteed, they required an annual inspection, and most homeowners never schedule that inspection until it is too late and water damage has occurred. These emergency calls reduce his margin and reduce customer satisfaction. This led to a discussion of business models in general and why long-term differentiation can only come from service, not just for the music or news industries but also for roofing companies! I completely agreed with the transformational concepts described in both meetings and quickly understood why there is a Bentley dealership near my hotel.

    Read the article

  • Error building main Guest Additions Module while installing VirtualBox guest additions

    - by Praveen Sripati
    I have installed an Ubuntu 12.10 guest on an Ubuntu 12.04 host using VirtualBox. Everything is from the repositories; nothing was installed directly. When I install the guest additions, the error below is shown in the console. Before running the command I attached VBoxGuestAdditions.iso to the guest. The closest match I could find is an article which says to install the latest version of VirtualBox (not the one from the repository). Is there any alternate solution?

        sudo ./VBoxLinuxAdditions.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox 4.1.12 Guest Additions for Linux.........
        VirtualBox Guest Additions installer
        Removing installed version 4.1.12 of VirtualBox Guest Additions...
        Removing existing VirtualBox DKMS kernel modules ...done.
        Removing existing VirtualBox non-DKMS kernel modules ...done.
        Building the VirtualBox Guest Additions kernel modules
        The headers for the current running kernel were not found. If the following module compilation fails then this could be the reason.
        Building the main Guest Additions module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)
        Doing non-kernel setup of the Guest Additions ...done.
        Installing the Window System drivers
        Warning: unknown version of the X Window System installed. Not installing X Window System drivers.
        Installing modules ...done.
        Installing graphics libraries and desktop services components ...done.
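    A hedged sketch of the usual fix for the "headers for the current running kernel were not found" warning, using the stock Ubuntu package names; rerun the installer afterwards:

        # Install the toolchain, DKMS and headers for the running kernel
        sudo apt-get update
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)

        # Rerun the Guest Additions installer from the mounted ISO
        sudo ./VBoxLinuxAdditions.run

        # If it still fails, the log names the exact compile error
        less /var/log/vboxadd-install.log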

    Read the article

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
    Data warehouses are supposed to contain huge amounts of data from the beginning. However, there are cases when too big is not enough. Every data warehouse admin will agree that they have faced situations where they needed to scale up their data warehouse. Microsoft has released a white paper discussing exactly this. Here is the abstract from the official Microsoft site: SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore. Scaling Up Your Data Warehouse with SQL Server 2008 R2 Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How can I get Transaction Logs to auto-truncate after Backup

    - by Yaakov Ellis
    Setup: SQL Server 2008 R2, databases set up with the full recovery model. I have set up a maintenance plan that backs up the transaction logs for a number of databases on the server. It is set to create backup files in sub-directories for each database, verification of backup integrity is turned on, and backup compression is used. The job is set to run once every 2 hours during business hours (8am-6pm). I have tested the job and it runs fine, creating the log backup files as it should. However, from what I have read, once the transaction log is backed up, it should be OK to truncate it. I do not see any option for doing this in the SQL Server Maintenance Plan designer. How can I set this up?
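    For what it's worth, in the full recovery model a successful log backup already marks the backed-up portion of the log as reusable, so no separate truncate step is needed, which is likely why the designer offers no such option. A sketch via sqlcmd, with made-up server, database and path names:

        # Back up the transaction log; this implicitly frees log space for reuse
        sqlcmd -S myserver -Q "BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb.trn'"

        # If the log file keeps growing anyway, ask SQL Server what is holding it
        sqlcmd -S myserver -Q "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDb'"

    Note that reuse is internal: the physical .ldf does not shrink unless you explicitly shrink it, which is rarely worth doing routinely.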

    Read the article

  • How do I get my Lexmark x4650 printer working?

    - by Fallen Dohingy
    I think that my printer stopped working with the switch to GNOME 3 or Unity. Yes, I have tried both 32- and 64-bit OSes. Here is the driver. In order to actually install the driver, you need to extract it, open a terminal, type sudo followed by a space, and then drag the script into the terminal window. Here is what it said in the driver install window:

        Extracting file: printdriver.te
        Extracting file: lexmark-08z-series-driver-1.0-1.i386.deb
        Extracting file: launcher.c
        Extracting file: launcher

        fallendohingy@Ubuntu-Inspiron-15R:~$ sudo '/home/fallendohingy/Downloads/lexmark-08z-series-driver-1.0-1.i386.deb.sh'
        [sudo] password for fallendohingy:
        Verifying archive integrity... All good.
        Uncompressing nixstaller..............................................................
        Collecting info for this system...
        Operating system: linux
        CPU Arch: x86_64
        Warning: No installer for "x86_64" found, defaulting to x86...
        TRACKING IDENT = 170209
        cpu speed = 2394 MHz
        ram size = 3762.69921875 MB
        hd avail = 74348 MB
        (gtk:17645): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file '/usr/lib/i386-linux-gnu/gdk-pixbuf-2.0/2.10.0/loaders.cache': No such file or directory
        (the same GdkPixbuf warning repeats three more times)
        /usr/lib/gio/modules/libgvfsdbus.so: wrong ELF class: ELFCLASS64
        Failed to load module: /usr/lib/gio/modules/libgvfsdbus.so
        Extracting file: lsbrowser
        Extracting file: lsusbdevice
        Using dpkg installation
        =============================
        Execute: dpkg -i --force-architecture lexmark-08z-series-driver-1.0-1.i386.deb > /tmp/selfgz17540/pkg/files/dpkg_msgs
        =============================
        =============================
        Execute: rm lexmark-08z-series-driver-1.0-1.i386.deb
        =============================
        =============================
        Execute: /sbin/udevadm control --reload-rules
        =============================
        Successfully installed the .deb Lexmark drivers.
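    Those GdkPixbuf and "wrong ELF class" warnings come from the 32-bit installer GUI missing 32-bit libraries on a 64-bit system. A hedged sketch of one thing worth trying on a multiarch-capable Ubuntu; the package names are the stock ones and may vary by release:

        # Allow 32-bit packages alongside the 64-bit system
        sudo dpkg --add-architecture i386
        sudo apt-get update

        # Pull in the 32-bit pixbuf/GTK libraries the installer GUI is looking for
        sudo apt-get install libgdk-pixbuf2.0-0:i386 libgtk2.0-0:i386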

    Read the article

  • SSW Scrum Rule: Do you know to use clear task descriptions?

    - by Martin Hinshelwood
    When you create tasks in Scrum you are doing this within a time box, and you tend to add only the information you need to remember what the task is. And the entire Team was at the meeting and was involved in the discussions around the task, so why do you need more? Once you have accepted a task you should then add as much information as possible so that anyone can pick up that task; what if your numbers come up? Will you be in to work the next day? Figure: What if your numbers come up in the lottery? What if the Team runs a syndicate and all your numbers come up? The point is that anything can happen, and you need to protect the integrity of the project, the company and the Customer. Add as much information to the task as you think is necessary for anyone to work on it. If you need to add rich text and images, you can do this by attaching an email to the task.   Figure: Bad example - there is not enough information for a non-team-member to complete this task. Figure: Julie provided a lot more information, and another team should be able to pick this up. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS.   Technorati Tags: Scrum, SSW Rules, TFS 2010

    Read the article

  • "Access Denied" error when starting Windows Security Center service

    - by Isxek
    I am working on a laptop with Windows 7 Ultimate (32-bit) which had previous issues with Microsoft Security Essentials. I've removed the previous installation of Security Essentials and reinstalled it. There's no problem with the said antivirus now, but after a couple of days it was brought back to me because of the error about Windows Security Center service not being started. I've tried setting it to start Automatically instead of "Delayed Start", but I still keep getting "Error 5: Access is Denied." I've searched other possible solutions but it's mostly been either what I did already or "Don't worry about it." Any ideas? Thanks in advance! EDIT: I've scanned the system with both Malwarebytes AM and SUPERAntiSpyware and have found no traces of anything. EDIT2: I have also tried running sfc /scannow to see if the files might be damaged. Got the message no integrity violations were found, however.

    Read the article

  • Vantec NexStar NAS Enclosures - Writing large files

    - by peter
    I have one of these Vantec NexStar LX (NST-475LX-BK) drive enclosures. It is a NAS device. When I write a file to the device using eSATA or an SMB share, I cannot write files over 4GB. I think this is because the drive is formatted with FAT32. But when I access the device using FTP, it doesn't seem to matter: I can write files of any size. For example, I wrote a 30GB file last night. Does this make any sense? Why? I guess the most important thing for me is data integrity.
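    For context: FAT32 caps a single file at 4 GiB minus one byte, which matches the eSATA/SMB symptom exactly; how the enclosure's FTP server gets around that is impossible to say without knowing the firmware. A quick way to confirm the filesystem from a Linux box, assuming the disk shows up as /dev/sdb1 (a made-up device name):

        # Report the filesystem type on the enclosure's partition
        sudo blkid /dev/sdb1

        # If it reports vfat, reformatting to NTFS (or ext3) lifts the 4 GiB cap.
        # Back up the data first; mkfs destroys the contents.
        # sudo mkfs.ntfs -Q /dev/sdb1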

    Read the article

  • Having trouble installing guest additions for a Xubuntu 13.10 guest OS in VirtualBox 4.2.10

    - by Duval Pearson III
    I am using Ubuntu 13.04 64-bit as my host operating system, running VirtualBox 4.2.10. I get this message when I tell VirtualBox to install the guest additions (Ctrl+D), mount the volume in the guest OS, and run the VBoxLinuxAdditions.run file as root with: sudo ./VBoxLinuxAdditions.run It starts and then produces these messages:

        Verifying archive integrity... All good.
        Uncompressing VirtualBox 4.2.10 Guest Additions for Linux..........
        VirtualBox Guest Additions installer
        Removing existing VirtualBox non-DKMS kernel modules ...done.
        Building the VirtualBox Guest Additions kernel modules
        The headers for the current running kernel were not found. If the following module compilation fails then this could be the reason.
        Building the main Guest Additions module ...done.
        Building the shared folder support module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)
        Doing non-kernel setup of the Guest Additions ...done.
        Installing the Window System drivers
        Warning: unknown version of the X Window System installed. Not installing X Window System drivers.
        Installing modules ...done.
        Installing graphics libraries and desktop services components ...done.
        allusers@allusers-VirtualBox:/media/allusers/VBOXADDITIONS_4.2.10_84104$

    I followed everything from the official VirtualBox manual on installing guest additions for Linux. I also used some other commands, such as:

        sudo apt-get install build-essential linux-headers-$(uname -r) dkms

    and:

        sudo apt-get install virtualbox-guest-x11

    I rebooted after executing those commands and it still won't work. It still says the kernel modules are missing, and the window is not seamless. Any idea what could be the problem?
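    Since the shared-folder module is the piece that fails to build, the first stop is the log the installer points at; the distribution's own packaged additions, which are built against the running kernel, are often an easier path than a 4.2.10 ISO on a 13.10 kernel. A hedged sketch:

        # See the actual compiler error behind the "...fail!" line
        tail -n 50 /var/log/vboxadd-install.log

        # Try the distro-packaged guest additions instead of the ISO;
        # the dkms variant rebuilds the modules on every kernel update
        sudo apt-get install virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11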

    Read the article

  • What to do with an HP Itanium box?

    - by VivekRJ
    One of my customers has an HP Itanium box (an Integrity rx6600, I think). They want to use it for running our apps (Linux-based). I was initially hoping to put ESXi on it and load Ubuntu 10.10, but I was surprised to find IA64 support declining: Windows discontinued support in 2008; Ubuntu 10.04 is the last release with support; CentOS is also unsure; VMware ESXi is not supported. What are people doing? Is anyone running Ubuntu 10.04 on Itanium successfully? Also, FreeBSD 8.2 claims to support it - are they going to stay with the platform?

    Read the article

  • Doesn't boot after installation

    - by jchysk
    I downloaded Ubuntu 12.04.1-alternate-amd64 and installed it from a USB stick. The integrity check fails on ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default, but that seems to be a known bug where the file isn't included in the alternate 64-bit ISO and shouldn't affect installation, so I ignored it and proceeded. Partitioning, on two SSD drives:

        Partition 300MB and 63GB on both drives
        RAID1 both the 300MB and the 63GB partitions
        Set the 300MB RAID to ext4, mounted on /boot
        Encrypt the rest as MD1 and set it for LVM
        Create two volumes from MD1: 4GB swap and 59GB for /

    I go through the installation and get to the point where it says everything is ready and to remove the media so it can boot from the drives. On startup I receive the error "Error: No video mode activated." I've read that this can be solved by running "cp /usr/share/grub/*.pf2 /boot/grub" and then updating grub, but I can't get to a place where I can actually run this command. In rescue mode I can get to a shell from the installer with /boot mounted at /target. From there I can run "cp /cdrom/boot/grub/font.pf2 /target/grub/", but I can't figure out how to get it to update grub after that, or what to change when manually updating the grub.cfg file. If I try other devices to mount the root filesystem, I get the error "An error occurred while mounting the device you entered for your root file system". It just sits on the video mode error and doesn't progress further. Googling around, it seems people usually see the error briefly before booting continues; it doesn't get stuck the way mine does, which leads me to believe the error may be unrelated to Ubuntu not booting. Any ideas what I should try next, or what needs to be done to install Ubuntu and get it to boot?
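    Running that cp plus update-grub requires a chroot into the installed system, which on this layout means unlocking the encrypted LVM first. A sketch from the rescue shell; the device names (/dev/md0 for /boot, /dev/md1 for the encrypted array, the volume names) are assumptions to adapt:

        # Unlock the encrypted RAID member and activate the LVM volumes
        cryptsetup luksOpen /dev/md1 md1_crypt
        vgchange -ay

        # Mount root and /boot, then enter the installed system
        mount /dev/mapper/vg-root /target        # volume name is a placeholder
        mount /dev/md0 /target/boot
        mount --bind /dev  /target/dev
        mount --bind /proc /target/proc
        mount --bind /sys  /target/sys
        chroot /target

        # The actual fix: give GRUB its fonts, then regenerate grub.cfg
        cp /usr/share/grub/*.pf2 /boot/grub/
        update-grub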

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database into the full recovery model and start taking transaction log backups. I take a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. However, other than that case, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?
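    One way to weigh the tape question is to look at what a point-in-time restore actually consumes: the last full backup plus an unbroken chain of log backups up to the target time. A sketch via sqlcmd with invented names:

        # Restore the full backup without recovering, then roll the logs forward
        sqlcmd -S myserver -Q "RESTORE DATABASE [MyDb] FROM DISK = N'D:\Backups\MyDb_full.bak' WITH NORECOVERY"
        sqlcmd -S myserver -Q "RESTORE LOG [MyDb] FROM DISK = N'D:\Backups\MyDb_1400.trn' WITH STOPAT = '2010-06-01 14:00', RECOVERY"

    A single missing log backup anywhere in the chain breaks everything after it, which is the usual argument for keeping the chain on tape alongside the fulls.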

    Read the article

  • Transferring existing files from ext3 to ZFS (on FreeBSD)

    - by peppergrower
    I use an old machine as a file server, for backups, and as a testbed for development. I currently have Debian installed, but I'm very interested in FreeBSD because of ZFS: I really, really like its file-integrity features. Before I switch, however, I wanted to ask: what's the best way to migrate my ~400GB of files from the current filesystem (ext3) to ZFS? My number-one requirement is that the migration be absolutely reliable: I don't want to lose any data. (I'll be backing everything up before doing this anyway, but still.) My secondary goal is speed: I'd rather not have this take all night if it doesn't have to. Recommendations? Is FUSE for FreeBSD stable enough to use? What about FreeBSD's native read support for ext3? NFS, maybe? How have you done this?
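    One conservative route, assuming FreeBSD's read-only ext2fs driver can see the old disk (an ext3 filesystem mounts as ext2, with the journal ignored) and using made-up device and dataset names:

        # Mount the old ext3 disk read-only
        mkdir -p /mnt/old
        mount -t ext2fs -o ro /dev/ada1s1 /mnt/old

        # Copy with permissions and hard links preserved, checksumming every file
        rsync -aH --checksum /mnt/old/ /tank/data/

        # Second pass: report, but do not copy, anything that still differs
        rsync -aHn --checksum --itemize-changes /mnt/old/ /tank/data/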

    Read the article

  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a wiki, you would be right. At present there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    Live Upgrade copies over ZFS root clone

    This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Creating file systems on boot environment .
        Creating file system for in zone on .
        The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs
        Populating file systems on boot environment .
        Checking selection integrity.
        Integrity check OK.
        Populating contents of mount point .
        This should not happen ------ Copying.
        Ctrl-C and cleanup

    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or (the one that alerted me to the problem) the root file system filling up, again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

        # patchadd -p | grep 121431
        Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
        Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
        # unzip 121431-58
        # patchadd 121431-58
        Validating patches...
        Loading patches installed on the system... Done!
        Loading patches requested to install. Done!
        Checking patches that you specified for installation. Done!
        Approved patches will be installed in this order: 121431-58
        Checking installed patches...
        Executing prepatch script...
        Installing patch packages...
        Patch 121431-58 has been successfully installed. See /var/sadm/patch/121431-58/log for details
        Executing postpatch script...
        Patch packages installed: SUNWlucfg SUNWlur SUNWluu
        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        INFORMATION: Unable to determine size or capacity of slice .
        Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        INFORMATION: Unable to determine size or capacity of slice .
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Cloning file systems from boot environment to create boot environment .
        Creating snapshot for on .
        Creating clone for on .
        Setting canmount=noauto for in zone on .
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        File propagation successful
        Copied GRUB menu from PBE to ABE
        No entry for BE in GRUB menu
        Population of boot environment successful.
        Creation of boot environment successful.

    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

        # cat /etc/lu/ICF.3
        s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
        s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
        s10u8-baseline:/vbox:pandora/vbox:zfs:0
        s10u8-baseline:/setup:pandora/setup:zfs:0
        s10u8-baseline:/export:pandora/export:zfs:0
        s10u8-baseline:/pandora:pandora:zfs:0
        s10u8-baseline:/panroot:panroot:zfs:0
        s10u8-baseline:/workshop:pandora/workshop:zfs:0
        s10u8-baseline:/export/iso:pandora/iso:zfs:0
        s10u8-baseline:/export/home:pandora/home:zfs:0
        s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
        s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0

    Solaris 10 9/10 introduces a new auto-registration file

    This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

        # luupgrade -u -s /mnt -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        ERROR: The auto registration file does not exist or incomplete.
        The auto registration file is mandatory for this upgrade.
        Use -k argument along with luupgrade command.
        autoreg_file is path to auto registration information file.
        See sysidcfg(4) for a list of valid keywords for use in this file.
        The format of the file is as follows.
        oracle_user=xxxx
        oracle_pw=xxxx
        http_proxy_host=xxxx
        http_proxy_port=xxxx
        http_proxy_user=xxxx
        http_proxy_pw=xxxx
        For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade".

    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

        # echo "autoreg=disable" > /var/tmp/no-autoreg
        # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        #######################################################################
        NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting.
        You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously.
        For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg.
        INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information
        autoreg=disable
        #######################################################################
        Validating the contents of the media .
        The media is a standard Solaris media.
        The media contains an operating system upgrade image.
        The media contains version .
        Constructing upgrade profile to use.
        Locating the operating system upgrade program.
        Checking for existence of previously scheduled Live Upgrade requests.
        Creating upgrade profile for BE .
        Checking for GRUB menu on ABE .
        Saving GRUB menu on ABE .
        Checking for x86 boot partition on ABE.
        Determining packages to install or upgrade for BE .
        Performing the operating system upgrade of the BE .
        CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.

    The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    Technorati Tags: Oracle Solaris, Patching, Live Upgrade

    Read the article

  • Top 25 security issues for developers of web sites

    - by BizTalk Visionary
    Sourced from: CWE. This is a brief listing of the Top 25 items, using the general ranking. NOTE: 16 other weaknesses were considered for inclusion in the Top 25, but their general scores were not high enough. They are listed in the On the Cusp focus profile.

        Rank  Score  ID       Name
        [1]   346    CWE-79   Failure to Preserve Web Page Structure ('Cross-site Scripting')
        [2]   330    CWE-89   Improper Sanitization of Special Elements used in an SQL Command ('SQL Injection')
        [3]   273    CWE-120  Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
        [4]   261    CWE-352  Cross-Site Request Forgery (CSRF)
        [5]   219    CWE-285  Improper Access Control (Authorization)
        [6]   202    CWE-807  Reliance on Untrusted Inputs in a Security Decision
        [7]   197    CWE-22   Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
        [8]   194    CWE-434  Unrestricted Upload of File with Dangerous Type
        [9]   188    CWE-78   Improper Sanitization of Special Elements used in an OS Command ('OS Command Injection')
        [10]  188    CWE-311  Missing Encryption of Sensitive Data
        [11]  176    CWE-798  Use of Hard-coded Credentials
        [12]  158    CWE-805  Buffer Access with Incorrect Length Value
        [13]  157    CWE-98   Improper Control of Filename for Include/Require Statement in PHP Program ('PHP File Inclusion')
        [14]  156    CWE-129  Improper Validation of Array Index
        [15]  155    CWE-754  Improper Check for Unusual or Exceptional Conditions
        [16]  154    CWE-209  Information Exposure Through an Error Message
        [17]  154    CWE-190  Integer Overflow or Wraparound
        [18]  153    CWE-131  Incorrect Calculation of Buffer Size
        [19]  147    CWE-306  Missing Authentication for Critical Function
        [20]  146    CWE-494  Download of Code Without Integrity Check
        [21]  145    CWE-732  Incorrect Permission Assignment for Critical Resource
        [22]  145    CWE-770  Allocation of Resources Without Limits or Throttling
        [23]  142    CWE-601  URL Redirection to Untrusted Site ('Open Redirect')
        [24]  141    CWE-327  Use of a Broken or Risky Cryptographic Algorithm
        [25]  138    CWE-362  Race Condition

    Cross-site scripting and SQL injection are the 1-2 punch of security weaknesses in 2010. Even when a software package doesn't primarily run on the web, there's a good chance that it has a web-based management interface or HTML-based output formats that allow cross-site scripting. For data-rich software applications, SQL injection is the means to steal the keys to the kingdom. The classic buffer overflow comes in third, while more complex buffer overflow variants are sprinkled in the rest of the Top 25.
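    Number 20, CWE-494 (Download of Code Without Integrity Check), is the entry closest to this search term and one of the cheapest to mitigate. A minimal shell sketch; the filenames and digest are placeholders:

        # Verify a published SHA-256 digest before running anything
        echo "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b  installer.run" | sha256sum -c -

        # Or verify a detached GPG signature against a key you already trust
        gpg --verify installer.run.asc installer.run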

    Read the article

  • ATI Radeon HD 6870 driver fails to install: default_policy.sh does not support version

    - by Rogue Coder
    I'm running Ubuntu 11.04 Beta, with everything fully updated. I'm using Ubuntu Classic because Unity fails to run, supposedly because of my video card. Driver support for the Radeon HD 6870 series is apparently lacking, but I found a post stating that the newest version has full support for Ubuntu Natty Narwhal. That post is slightly old, so I grabbed 11.3 for Ubuntu x86 off the ATI website. When I run the installation program, I receive the following error:

        > ./ati-driver-installer-11-3-x86.x86_64.run
        Created directory fglrx-install.uREFoO
        Verifying archive integrity... All good.
        Uncompressing ATI Catalyst(TM) Proprietary Driver-8.831.2...
        =====================================================================
        ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
        =====================================================================
        Error: ./default_policy.sh does not support version default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the version is being correctly set by --iscurrentdistro
        =====================================================================
        ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager
        =====================================================================
        Error: ./default_policy.sh does not support version default:v2:i686:lib::none:2.6.38-8-generic-pae:; make sure that the version is being correctly set by --iscurrentdistro
        Removing temporary directory: fglrx-install.uREFoO

    I would love to get the latest ATI drivers working so that I can try out Unity!
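    When the installer's distro detection falls over like this, the Catalyst installer historically offered a --buildpkg mode that generates distribution packages instead of installing directly. The flags below are from memory, so treat them as assumptions and check the installer's --help output first:

        # List the packaging targets this installer knows about
        sh ./ati-driver-installer-11-3-x86.x86_64.run --listpkg

        # Build Ubuntu packages for natty, then install the resulting .debs
        sh ./ati-driver-installer-11-3-x86.x86_64.run --buildpkg Ubuntu/natty
        sudo dpkg -i fglrx*.deb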

    Read the article

  • Java Desktop Application For Network users

    - by Motasem Abu Aker
    I'm developing a desktop application in Java. It will run in a network environment where multiple users access the same database through the application. There will be basic CRUD operations (insert, update, delete and select), which means there are chances of deadlocks, or of two users trying to update the same record at the same time. I'm using:

        Java Swing for the clients (MVC)
        MySQL Server for the database (InnoDB)
        Java Web Start

    MySQL is centralized on the network, and all of the clients connect to it. The application is for ERP purposes. I searched the internet for a good solution to ensure data integrity and to make sure that when one client updates a record, the other clients are aware of it. I read about socket server-client designs and RESTful web services, but I don't want to go the web-application route, and I don't want to use any extra libraries. So how can I handle this scenario: (1) If user A updates a record, is there a way to update user B's screen with the new value? (2) If user A starts updating a record, how can I prevent other users from attempting to update the same record?
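    For the second question, one common pattern is optimistic locking: a version column that every update must both match and increment, so a stale client fails instead of silently overwriting. A sketch in plain SQL through the mysql client; the table and column names are invented:

        mysql mydb -e "
          ALTER TABLE invoices ADD COLUMN version INT NOT NULL DEFAULT 0;

          -- The client read the row at version 7; the update succeeds only
          -- if nobody else changed the row in the meantime.
          UPDATE invoices
             SET amount = 150.00, version = version + 1
           WHERE id = 42 AND version = 7;
        "

    If the UPDATE reports zero affected rows, the Swing client knows the record changed underneath it and can re-read and retry, or warn the user. For the first question, clients typically poll for changes or the server pushes notifications over a socket; plain MySQL has no built-in push to Java clients.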

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it, and certainly a lot of misinformation out there, for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc., etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices as it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components. In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device, with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just as you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

        Linux async IO wasn't pretty
        Persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
        System resource usage, in terms of open file descriptors, was high

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux? A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library, using a specific Oracle-defined interface, that gets used by the ASM instance and by the database instance when available. This interface offered two components:

        An IO interface - allow any IO to the devices to go through ASMLib
        Device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager).
    ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access; you have libasm for ASM/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are two components:

        Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
        oracleasm.ko - a kernel module that implements the ASM device under /dev/oracleasm/*

    The userspace library is a binary-only module, since it links with and contains Oracle header files, but it is generic; we have only one ASM library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the ASM device (/dev/asm). It installs on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can create an ASM disk label "foo" on, currently, /dev/sdf... So if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or the like; it will all be taken care of. So it's a convenience thing. The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device, /dev/oracleasm/*, and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib:

        A good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
        A single file descriptor per Oracle process, not one per device or datafile per process, reducing open-filehandle overhead
        Device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error-prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module RPMs for many distributions: SLES (in fact, in the early days, still even for this thing called United Linux) and RHEL.
    The driver didn't make sense to push into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60 KB source file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL and, of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix and also used ASMLib, we didn't have to do any extra work; it was all nicely integrated. With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel. (Note to those that wonder: yes, it's all in mainline Linux and under the GPL.) This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. What was missing was the ability for a userspace application (read: the Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it so we can make use of it. Thanks to UEK, and us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the ASM kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and provide it to our customers. So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but since RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module and does all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.
    And finally, to reiterate a few important things:

        Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib.
        The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
        The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
        ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
        ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.
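    The labeling workflow described above is short in practice. A sketch with an invented label and device name:

        # One-time configuration (device owner, group, scan on boot)
        sudo /usr/sbin/oracleasm configure -i

        # Label a partition so ASM can find it by name, wherever it lands
        sudo /usr/sbin/oracleasm createdisk DATA1 /dev/sdf1

        # Rescan and list what the driver sees under /dev/oracleasm/disks
        sudo /usr/sbin/oracleasm scandisks
        sudo /usr/sbin/oracleasm listdisks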

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree, and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project has_many Nodes, has_many GlobalConditions
        Node has_one Parent, has_many Nodes, has_many WeightingFactors through NodeFactors, has_many Tags through NodeTags
        GlobalCondition has_many Nodes (referenced by id, rather than replicating the tree)
        WeightingFactor has_many Nodes through NodeFactors
        Tag has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .Net, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life, I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.

    Read the article

  • The effects of Agile Programming can alter the five desirable properties of modeling tools and techniques

    The effects of agile programming can alter the five desirable properties of modeling tools and techniques as documented by Pfleeger. The agile methodology promotes human understanding and communication through short, iterative software development life cycles, which force stakeholders to review the project and adjust it for any requirement changes. Due to the constant evaluation of a project and its requirements, processes are continually refined, upgraded, and compared against alternatives to ensure the best design is delivered to the client. Because of the short, repetitive development cycles, increased time is devoted to process management, since requirements and designs could be constantly changing; this requires additional forecasting, monitoring, and planning for each iteration. Because things can change so rapidly, automated guidance for performing processes must be updated for each iteration, since the environment and the available reusable processes can change; the original guidance and suggestions for the project also need to be updated to account for these changes. In essence, the automation of process execution is supported by the agile methodology, because during every iteration all processes must be tested and evaluated to ensure process integrity and compliance with the customer's requirements. I do not think the agile approach diminishes modeling; in fact, I think it increases modeling, because before the start of every development cycle the models must be checked for accuracy against the changed requirements. So, in essence, the reduced time spent initially designing the models is regained as the project completes each iteration.

    Read the article
