Search Results

Search found 13808 results on 553 pages for 'remote storage'.

  • nikto probe warning messages

    - by julio
    Hi-- I have a pretty standard VPS running Ubuntu 8.1, Apache 2.2, PHP 5, etc. -- a standard LAMP stack. I am using Suhosin and have tried my best to plug the obvious stuff, since I'm the only user -- there's no SSH access except via pubkey on a non-standard port, there's no root access by SSH, no FTP server running, and iptables is set to discard anything outside of basically port 80 or my SSH port (there's no mail server or anything else). However, I've still been compromised (not badly, as far as I can tell), probably by a SQL injection. I've locked down the SQL user (there's only one outside of root, and he's got limited privileges, no file access, etc.). So I ran nikto to see what I'm doing wrong, and there's a list of things I've never seen and can't find using "find" or any other method I'm aware of. See below:

        + /autologon.html?10514: Remotely Anywhere 5.10.415 is vulnerable to XSS attacks that can lead to cookie theft or privilege escalation. This is typically found on port 2000.
        + /servlet/webacc?User.html=noexist: Netware web access may reveal full path of the web server. Apply vendor patch or upgrade.
        + OSVDB-35878: /modules.php?name=Members_List&letter='%20OR%20pass%20LIKE%20'a%25'/*: PHP Nuke module allows user names and passwords to be viewed.
        + OSVDB-3092: /sitemap.xml: This gives a nice listing of the site content.
        + OSVDB-12184: /index.php?=PHPB8B5F2A0-3C92-11d3-A3A9-4C7B08C10000: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-12184: /some.php?=PHPE9568F36-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-12184: /some.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-12184: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings.
        + OSVDB-3092: /administrator/: This might be interesting...
        + OSVDB-3092: /Agent/: This might be interesting...
        + OSVDB-3092: /includes/: This might be interesting...
        + OSVDB-3092: /logs/: This might be interesting...
        + OSVDB-3092: /tmp/: This might be interesting...
        + ERROR: /servlet/Counter returned an error: error reading HTTP response
        + OSVDB-3268: /icons/: Directory indexing is enabled: /icons
        + OSVDB-3268: /images/: Directory indexing is enabled: /images
        + OSVDB-3299: /forumscalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /forumzcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /htforumcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /vbcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-3299: /vbulletincalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link
        + OSVDB-6659: /kCKAowoWuZkKCUPH7Mr675ILd9hFg1lnyc1tWUuEbkYkFCpCdEnCKkkd9L0bY34tIf9l6t2owkUp9nI5PIDmQzMokDbp71QFTZGxdnZhTUIzxVrQhVgwmPYsMK7g34DURzeiy3nyd4ezX5NtUozTGqMkxDrLheQmx4dDYlRx0vKaX41JX40GEMf21TKWxHAZSUxjgXUnIlKav58GZQ5LNAwSAn13l0w<font%20size=50>DEFACED<!--//--: MyWebServer 1.0.2 is vulnerable to HTML injection. Upgrade to a later version.

    I understand about the trace and index, but what about the vBulletin and autologon entries? I've searched, and I can't find any files like that on the server. I have no idea about the "MyWebServer" stuff, the PHP Nuke, or the Netware/servlet stuff -- there's nothing really on the server except a pretty standard Joomla site (updated to the latest version). Any help with these messages and/or what I'm doing wrong is very much appreciated.
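    Many of these hits look like classic nikto false positives: if the CMS answers unknown URLs with an HTTP 200 page (as Joomla setups with catch-all routing often do), nikto's path-based signature checks match even though the flagged files do not exist. A quick way to verify -- a minimal sketch, with the hostname as a placeholder -- is to request one of the flagged paths plus a deliberately bogus one and compare status codes:

        # if the nonsense path also returns 200, the server answers everything
        # and most of these path-based findings can be discounted
        curl -s -o /dev/null -w "%{http_code}\n" "http://example.com/autologon.html?10514"
        curl -s -o /dev/null -w "%{http_code}\n" "http://example.com/no-such-file-12345.html"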

    Read the article

  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic, which is a standard feature of Oracle WebLogic 11g. Customers on other web application servers and other versions of Oracle WebLogic can refer to the documentation provided with the server to do a similar thing. In this blog entry I am going to discuss a new feature, present only in Oracle Utilities Application Framework 4 and above, that allows JMX to be used for management and monitoring of the Oracle Utilities web applications. In this case JMX can be used to perform monitoring as well as provide management of the cache.

    In Oracle Utilities Application Framework you can enable Web Application Server JMX monitoring that is unique to the framework by specifying a JMX port number in the RMI Port number for JMX Web setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine:

        spl.runtime.management.rmi.port=6740
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
        ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

    ouaf.jmx.* files - contain the user ID and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX site.

    Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes, I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL in the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so if you try and view the file you will not be able to decipher it visually.

    There are three MBeans available to you:

    flushBean - This is a JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided, for backward compatibility, but are now authorization controlled.

    JVMInfo - This is a JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental, not operational, and is used for support purposes. The jsp version of the JVMInfo utility is still provided, for backward compatibility, but is now also authorization controlled.

    JVMSystem - This is an implementation of the Java system MXBeans for use in monitoring. We provide our own implementation of the base MBeans to save on creating another JMX configuration for internal monitoring and to provide a consistent interface across platforms for the MXBeans. This MBean is disabled by default and can be enabled using the enableJVMSystemBeans operation. This MBean allows for the monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and Thread MXBeans.

    Refer to the Server Administration Guides provided with your product and the Technical Best Practices Whitepaper for information about individual statistics. The Web Application Server JMX monitoring allows greater visibility for monitoring and management of the Oracle Utilities Application Framework application from jconsole or any JSR160-compliant JMX browser or JMX console.
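    For a scripted health check, the same connector configured above for jconsole can be reached with the standard javax.management.remote API. A minimal sketch -- the URL is the demonstration value from spl.properties above, and the credentials are placeholders for whatever was set via configureEnv[.sh] -a:

        import java.util.HashMap;
        import java.util.Map;
        import javax.management.MBeanServerConnection;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class OuafJmxPing {
            public static void main(String[] args) throws Exception {
                // URL taken from spl.runtime.management.connector.url.default
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector");

                // credentials as configured with configureEnv[.sh] -a (placeholders here)
                Map<String, Object> env = new HashMap<String, Object>();
                env.put(JMXConnector.CREDENTIALS, new String[] { "jmxuser", "jmxpassword" });

                JMXConnector connector = JMXConnectorFactory.connect(url, env);
                try {
                    MBeanServerConnection conn = connector.getMBeanServerConnection();
                    System.out.println("Connected; registered MBeans: " + conn.getMBeanCount());
                } finally {
                    connector.close();
                }
            }
        }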

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF; it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.”

    Topic 1: Working with Bad Managers

    Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control.

    Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us.

    Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter.

    Topic 2: Working with Remote Teams

    Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important; I cannot overemphasize how much. And this one line captures how I feel and even communicates the idea clearly!

    Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it.

    Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird!

    Topic 3: Working with Your Nemesis

    Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with a co-worker like that.

    Mystery Author B – “Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy!

    Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read!

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Adding Blog to Your Orchard Website

    - by hajan
    One of the common features in today’s content management systems is the ability to create your own blog on your website, and a blog is one of the most often needed features for various types of websites. Out of the box, Orchard gives you this, so you can create your own blog on your Orchard website in a pretty easy way. Besides that, Orchard also gives you some extra blogging-related features, such as connecting third-party client applications (e.g. Windows Live Writer) to your blog so that you can publish blog posts remotely. You can already find all the information in this blog post on the http://orchardproject.net website; however, I thought it would be nice to make a summary in one blog post. I assume you have already installed Orchard and are already familiar with its environment and administration dashboard. If you haven't, please read this blog post first.

    CREATE YOUR BLOG

    First of all, go to the Orchard Administration Dashboard and click Blog in the left menu. Fill the form with all needed data, as in the following example, and click Save. Right after that, click New post and add your first post. Then go to the homepage (click Your Site in the top-left corner) and you should see the Blog link in your menu, which takes you to the blog page. Once you click on My First Post, you will see that your blog already supports commenting -- adding, submitting and displaying comments all work (you can enable/disable this from the Administration Dashboard in your blog settings). So, by following these steps, you have already set up your blog on your Orchard website.

    CONNECT YOUR BLOG WITH WINDOWS LIVE WRITER

    Since many bloggers prepare their blog posts using third-party client applications like Windows Live Writer, it is very useful if your blog engine can work with these applications and let them post and publish remotely. The client applications use the XmlRpc interface in order to manage and publish blogs remotely. What is great about Orchard is that it gives you the XmlRpc and Remote Publishing modules out of the box; you only need to enable these features from the Modules section of your Orchard Administration Dashboard. So, let's go through the steps of making your previously created blog work with third-party blogging clients.

    1. Go to the Administration Dashboard and click Modules. You will see that you already have the Remote Blog Publishing and XmlRpc features for Content Publishing, but both are disabled by default. If you click Enable only on Remote Blog Publishing, you will see both of them enabled at once, since they are dependent features. After you click Enable, a confirmation message should be displayed; the features are now enabled and ready.

    2. Open Windows Live Writer. In your Blog Accounts, click Add blog account, and in the next window choose Other services. After that, click the Blog link in your Orchard website and copy the URL; my URL (on the localhost development server) is http://localhost:8191/blog. Then add the login credentials you use to sign in to Orchard and click Next. If you have set everything up successfully, Windows Live Writer will do the rest. Once it finishes, you will get a window where you can specify the name of the blog you have just connected Windows Live Writer to, and then you are done. You can see that Windows Live Writer has even detected the Orchard theme I am using.

    3. After you finish a blog post, click Publish and refresh the Blog page in your Orchard website -- the blog post is published directly from Windows Live Writer to my Orchard blog.

    I hope this was a useful blog post. Regards, Hajan

    Reference and other useful posts: Build incredible content-driven websites using Orchard CMS | Create blog on your site with Orchard CMS | Blogging using Windows Live Writer in your Orchard CMS Blog | Orchard Website

    Read the article

  • jQuery - discrepancy between class names and selectors

    - by Ciel
    I have the following code that I wrote, which I personally found to be pretty nice. It takes a <ul> and it drops down the contents when clicked. But I am having a disconnect here in comprehension, and I had to do what I feel is a 'dirty hack' to solve it. The problem is that I do not want the class 'sidebar-dropdown-open' to be so 'hardwired' in the plugin. However, I discovered that there is a very stark difference between $('.sidebar-dropdown-open'), 'sidebar-dropdown-open', and '.sidebar-dropdown-open'. I 'solved' this problem by including two different 'parameters' in my plugin, but I was wondering if someone might give me some insight as to how I could do this better, and why it behaves this way.

    wiring (document load):

        $(document).ready(function () {
            $('[data-role="sidebar-dropdown"]').drawer({
                open: 'sidebar-dropdown-open',
                css: '.sidebar-dropdown-open'
            });
        });

    html:

        <ul>
            <li class="dropdown" data-role="sidebar-dropdown">
                <a href="pages/.." class="remote">Link Text</a>
                <ul class="sub-menu light sidebar-dropdown-menu">
                    <li><a class="remote" href="pages/...">Link Text</a></li>
                    <li><a class="remote" href="pages/...">Link Text</a></li>
                    <li><a class="remote" href="pages/...">Link Text</a></li>
                </ul>
            </li>
        </ul>

    javascript:

        (function ($) {
            $.fn.drawer = function (options) {
                // Create some defaults, extending them with any options that were provided
                var settings = $.extend({
                    open: 'open',
                    css: '.open'
                }, options);

                return this.each(function () {
                    $(this).on('click', function (e) {
                        // slide up all open dropdown menus
                        $(settings.css).not($(this)).each(function () {
                            $(this).removeClass(settings.open);
                            // retrieve the appropriate menu item
                            var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu");
                            // slide it up
                            $menu.slideUp('fast');
                            $menu.removeClass('active');
                        });
                        // mark this menu as open
                        $(this).addClass(settings.open);
                        // retrieve the appropriate menu item
                        var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu");
                        // slide down the one clicked on.
                        $menu.slideDown(100);
                        $menu.addClass('active');
                        e.preventDefault();
                        e.stopPropagation();
                    }).on("mouseleave", function () {
                        $(this).children(".dropdown-menu").hide().delay(300);
                    });
                });
            };
        })(jQuery);

    I have tried using settings.open and demanding that it just be a class name ('open'), etc., but that does not seem to work. It seems to get ignored by the removeClass function.
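    One way to avoid passing the class twice -- a minimal sketch, keeping the plugin's existing names -- is to accept only the class name and derive the selector from it, since the selector is just the class name with a leading dot. removeClass()/addClass() want a bare class name ('open'), while $() wants a selector string ('.open'), which is the difference observed above:

        (function ($) {
            $.fn.drawer = function (options) {
                var settings = $.extend({ open: 'open' }, options);
                // build the selector from the class name instead of taking both
                var selector = '.' + settings.open;

                return this.each(function () {
                    $(this).on('click', function (e) {
                        // close every other open drawer, then open this one
                        $(selector).not(this).removeClass(settings.open);
                        $(this).addClass(settings.open);
                        e.preventDefault();
                    });
                });
            };
        })(jQuery);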

    Read the article

  • SD Card reader not working on Sony Vaio

    - by TessellatingHeckler
    This laptop (Sony Vaio VGN-Z31MN/B PCG-6z2m) has been installed with Windows 7 64-bit, all the drivers from Sony's VAIO site are installed, and everything in Device Manager both (a) has a driver and (b) shows as working, with no exclamation marks or warnings. "Hide empty drives" in Folder Options is disabled so the card reader appears, but it will not read the card ("please insert a disk in drive O:"). Previously, when the laptop had Windows XP on it, it could read the same card. Also, the driver Windows Update suggests ("SD Card Reader") doesn't work; Ricoh's own drivers install properly but show the same behaviour. Other third-party driver suggestions from forums (Acer and Texas Instruments FlashMedia) do not seem to install properly. I would post the PCI ID if I had it, but it was just showing up as rimsptsk\diskricohmemorystickstorage (while it had the Ricoh driver installed).

    Edit: If there are any lower-level diagnostic utilities which might shed more light on it, I'd welcome hearing of them -- anything which might get it to put troubleshooting logs in the event log, or identify chipsets, or whatever...

    Update: Device details are:

        SD\VID_03&OID_5344&PID_SD04G&REV_8.0\5&4617BC3&0&0: SD Memory Card
        PCI\VEN_8086&DEV_2934&SUBSYS_9025104D&REV_03\3&21436425&0&E8: Intel(R) ICH9 Family USB Universal Host Controller - 2934
        PCI\VEN_1180&DEV_0476&SUBSYS_9025104D&REV_BA\4&1BD7BFCD&0&20F0: Ricoh R/RL/5C476(II) or Compatible CardBus Controller
        RIMSPTSK\DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00\MS0001: SD Storage Card
        PCI\VEN_1180&DEV_0592&SUBSYS_9025104D&REV_11\4&1BD7BFCD&0&24F0: Ricoh Memory Stick Host Controller
        WPDBUSENUMROOT\UMB\2&37C186B&1&STORAGE#VOLUME#_??_RIMSPTSK#DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00#MS0001#: O:\
        STORAGE\VOLUME\{C82A81B8-5A4F-11E0-AACC-806E6F6E6963}#0000000000100000: Generic volume
        PCI\VEN_1180&DEV_0822&SUBSYS_9025104D&REV_21\4&1BD7BFCD&0&22F0: SDA Standard Compliant SD Host Controller
        ROOT\LEGACY_FVEVOL\0000: Bitlocker Drive Encryption Filter Driver
        PCI\VEN_1180&DEV_0832&SUBSYS_9025104D&REV_04\4&1BD7BFCD&0&21F0: Ricoh 1394 OHCI Compliant Host Controller

    Now going to search for drivers for that.

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    Hello, I currently have 2 cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have a lot of resources for processing and handling web requests, and never exceed more than 10% CPU usage. They also have plenty of RAM. The problem is that they both have RAID 1 160GB SAS hard disk drives in them that are 75% full, and growing by the day. I didn't think that the amount of disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:

        - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went :)
        - Any recommendations on networking? Is gigabit ethernet enough? Is TCP/IP going to be a noticeable performance problem? Anyone used a TOE key?
        - Anyone benchmarked or had any performance issues with SAN over NAS?

    Any help greatly appreciated. Scott
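    For the NAS case, mounting /home/ over NFS typically comes down to an fstab entry on each web server -- a minimal sketch with a hypothetical hostname and export path; the mount options would need tuning for the cPanel workload:

        # /etc/fstab on each cPanel server (storage01 and /export/home are placeholders)
        storage01:/export/home  /home  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0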

    Read the article

  • zfs setup question

    - by Staale
    Currently I have a Linux storage box and server with 4x 750GB hard drives in RAID-5 with ext3. I have ordered 3x 1.5TB disks to upgrade this. Here is my planned upgrade:

    Backup:
        1. Format the 1.5TB disks
        2. Copy all data from the RAID-5 disks to the 1.5TB disks
        3. Destroy the RAID-5 array.

    New setup:
        1. Create a VirtualBox system and install Nexenta (OpenSolaris + Ubuntu) on it.
        2. Create a ZFS pool with raidz1 using the 4 750GB disks.
        3. Copy from the 1.5TB disks to the VirtualBox ZFS pool
        4. Format the 1.5TB disks.
        5. Replace 3 of the 750GB disks with 1.5TB disks.
        6. Reuse the 750GB disks elsewhere.

    The reason I wish to keep one 750GB disk is that I can't grow the disk count in a raidz array, and this gives me the option of replacing that disk later for an extra 750GB of storage. Would the ZFS performance be good running through VirtualBox, or will the performance overhead be too large? Will I get 1.5TB+1.5TB+750GB of storage on the raidz, or just 750GB x 3 until all disks are 1.5TB?
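    For reference, the pool steps reduce to a couple of zpool commands -- a sketch with placeholder device names. Note that in a raidz vdev every member contributes only as much as the smallest disk, so the pool stays at 4x 750GB worth of devices (roughly 3x 750GB usable) until the last 750GB disk is also swapped out:

        # create the raidz1 pool from the four 750GB disks (device names are placeholders)
        zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0

        # later, replace members one at a time with 1.5TB disks
        zpool replace tank c1t1d0 c2t1d0

        # only once every member is 1.5TB does the vdev grow to the new size
        zpool list tank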

    Read the article

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen mission-critical, busy mail servers. I just learned ZFS and I'm quite attracted, but I have a few questions:

        1. What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking of that.
        2. From what I learned, when ZFS updates files, it keeps the old file in a snapshot and writes the updated part for the new version. However, that would mean each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2TB of usable space; how much space can actually be used for the latest version of files one year later?
        3. Is FreeBSD with ZFS hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers, a reasonable combination? Anything in particular to be careful with? And can I still choose ZFS for the guest OSes? This is because I may build another identical box for redundancy, and will need to do some mirroring between each pair of the guest systems across the boxes.
        4. I'm trying to configure a Dell R710 for this. From what I learned, I shouldn't choose any RAID at all; is that true? In that case, do the drives still arrive hot-swappable?
        5. This may sound a bit pathetic, but since I have no experience with ZFS at all, and this is a mission-critical server, I'll just ask just in case: I'm choosing twin Intel L5630 processors and 6x 600GB 15K RPM Serial-Attached SCSI drives. If I need more space in the future, I would just hot-swap some drives with larger capacity to expand the storage. There is no problem with these, right?
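    On the snapshot question: ZFS is copy-on-write, so a snapshot consumes space only for blocks that have changed since it was taken, not a full copy of each file, and the per-snapshot cost can be inspected from the command line. A small sketch (pool and dataset names are placeholders):

        # break down space into live data, snapshots, and children
        zfs list -o space tank/mail

        # list individual snapshots and the unique space each one holds
        zfs list -t snapshot -r tank/mail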

    Read the article

  • How to get the permissions right for /dev/raw1394

    - by Mark0978
    I recently upgraded one of my Ubuntu machines to Karmic and I'm having trouble getting the permissions of /dev/raw1394 set to 0666. The only thing this machine is used for is recording audio from a FirePod, which uses /dev/raw1394 via jackd, and there are no other FireWire devices connected, so security around this device is not really an issue. If I run as root, everything works as expected, but I have some folks that run the recorder that I don't want to have root access. However, I can't figure out which lines set up the perms.

    I've tried this:

        /etc/udev/permissions.d/raw1394.rules:raw1394:root:root:0666

    And I have this setup (default install):

        /lib/udev/rules.d/75-persistent-net-generator.rules:SUBSYSTEMS=="ieee1394", ENV{COMMENT}="Firewire device $attr{host_id})"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:# the "path" of usb/ieee1394 devices changes frequently, use "id"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb|ieee1394", ENV{ID_CDROM}=="?*", ENV{GENERATED}!="?*", \
        /lib/udev/rules.d/60-persistent-storage-tape.rules:KERNEL=="st*[0-9]|nst*[0-9]", ATTRS{ieee1394_id}=="?*", ENV{ID_SERIAL}="$attr{ieee1394_id}", ENV{ID_BUS}="ieee1394"
        /lib/udev/rules.d/50-udev-default.rules:# FireWire (deprecated dv1394 and video1394 drivers)
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="dv1394-[0-9]*", NAME="dv1394/%n", GROUP="video"
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="video1394-[0-9]*", NAME="video1394/%n", GROUP="video"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n"

    And I find these lines in /var/log/syslog:

        Apr 30 09:11:30 record kernel: [ 3.284010] ieee1394: Node added: ID:BUS[0-00:1023] GUID[000a9200c7062266]
        Apr 30 09:11:30 record kernel: [ 3.284195] ieee1394: Host added: ID:BUS[0-01:1023] GUID[00d0035600a97b9f]
        Apr 30 09:11:30 record kernel: [ 18.372791] ieee1394: raw1394: /dev/raw1394 device initialized

    What I can't figure out is which line actually creates that raw1394 device in the first place. How do you get /dev/raw1394 to have permissions 0666?
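    On udev versions of that era the permissions.d mechanism is gone; device permissions are set by rules files instead. A minimal sketch of a local rule (the file name is arbitrary, and rules in /etc/udev/rules.d/ take precedence over /lib/udev/rules.d/):

        # /etc/udev/rules.d/99-raw1394.rules
        KERNEL=="raw1394", MODE="0666"

        # or, less permissive: grant it to a group the recording users are in
        KERNEL=="raw1394", GROUP="audio", MODE="0660"

    After adding the rule, reloading udev (or unloading/reloading the raw1394 module, or rebooting) should apply it.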

    Read the article

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services: be an iSCSI target for XenServer virtual machines, and be a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This allows one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king, and I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options: two mirrored pairs in one stripe, or RAIDZ2. Both should protect against 2 drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any 2 drives could fail. The usable storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity, what, if any, would be the benefit of RAIDZ over a three-way mirror? Since ZFS maintains file integrity, what does RAIDZ bring to the table... doesn't ZFS's integrity checking negate the value of RAIDZ's parity?
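    For concreteness, the two layouts differ only in how the vdevs are declared at pool creation -- a sketch with placeholder device names:

        # option 1: two 2-way mirrors; ZFS stripes across its vdevs automatically
        zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0

        # option 2: a single raidz2 vdev
        zpool create tank raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0

    Both give roughly 50% usable capacity with four disks; the difference the question identifies is real: raidz2 survives any two disk failures, while the striped mirrors survive a second failure only if it lands in the other mirror.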

    Read the article

  • 0 connected nodes in datastax opscenter

    - by gansbrest
    Installed opscenterd on a separate node outside of the cluster, but within the firewall (AWS security group). Tested all possible ports between the agents and the OpsCenter server. No errors in the log:

        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Attempting to load all persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted scheduled job descriptions
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: OpsCenter starting up.
        2013-10-30 01:07:23+0000 [] INFO: Finished starting new cluster services for FC_Cluster
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.34.10.185 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.32.37.251 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.82.226.252 is version u'3.2.2'

    The most interesting part is that I can see some data in the OpsCenter UI: when I stop the agents, no data is displayed; when I start them, it shows up again. But at the same time it shows 0 connected nodes. Storage capacity is even funnier -- 3 of 0 nodes. Any ideas why that could be happening?
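    For anyone comparing setups: each agent decides where to report (and which address to identify itself by) via its address.yaml. A hedged sketch -- the IPs are placeholders, the file location and exact keys may vary by agent version:

        # address.yaml on each cluster node (e.g. under the datastax-agent conf directory)
        stomp_interface: 10.0.0.5      # the opscenterd host the agent reports to
        local_interface: 10.34.10.185  # this node's own listen/broadcast address

    A mismatch here can produce exactly this split-brain symptom, where metrics arrive but the nodes never register as connected.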

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem in most shops is backups: their size, and the window in which you have to back up the data.

    What we are working with:

        - VMware vSphere 4.1 cluster
        - PS4000XV Equallogic storage array (1.6TB volume dedicated for Backup to Disk)
        - Physical backup server with a single LTO4 drive
        - BackupExec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
        - Dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.

    Where we are at currently: Of the several ways I have set up the jobs in BackupExec 2010 R3, the backup jobs all queue up at the same time; as soon as a job is finished backing up to disk, it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID5 + hotspare (8x 500GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x 2TB + 1x 800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2x 50GB, but growing at 10GB/month) database server... The application that lives on the database VM is CPU- and I/O-intense; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on)...

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions:

        - What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD?
        - Are there sweet spots/ugly spots in the storage setup?
        - Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea?
        - Is there any way to "share" some image in a copy-on-write fashion? Most of the "backup-copy-restore" is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x 50GB) would only need to be done once per week instead of once per dev per week... [runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box]

    Read the article

  • Is it possible/practical to install and run Linux on a USB flash drive?

    - by Graeme Donaldson
    I'm going to replace my old 2004-vintage desktop PC soon and I have an idea of what I want to do; I'm just not sure if it's possible or realistic. In the time since I built the old PC it has slowly become less used as a PC and more as a file server, so I figured I'd build a small file server which could also function as a router/DHCP/DNS/whatever box. The idea is to base it on an Atom system. I have my eye on the Intel D510MO for the moment. This supports 2 SATA disks, and I'd prefer to dedicate those to data storage. I'd like to install Ubuntu Server or maybe Debian on an 8/16GB USB flash drive. I have seen plenty of tutorials on how to perform an installation from a USB drive, but I can't seem to find any info on actually booting and running the OS from USB flash. Is this even possible? Is it practical?

    This box will mostly be used for:

        - Making backups of mine and my wife's notebooks via LAN. Will use SMB or NFS for this.
        - Digital media storage, which will be accessed by a Mede8er box with no storage of its own. I will most likely use NFS for this.

    Read the article

  • Windows XP Setup Fails to Recognize USB Floppy after formatting AHCI disk

    - by Strahn
    I am attempting to install Windows XP Professional x64 onto an HP EliteBook 8540w. I have downloaded both the latest Intel Rapid Storage Technology drivers and the Intel Storage Matrix drivers that are listed on HP's website and copied the drivers over to a floppy disk (two separate floppies, one for each version of the drivers). Booting to my WinXP Pro x64 install CD, I go through the F6 process, load the driver, and am able to see my HDD and to delete, create and format partitions on it. When I go to continue the install, after checking the disk, the system asks me to insert the disk labeled "Intel Rapid Storage Technology" and press Enter to continue. Nothing happens at this point when I press Enter. This happens whether I use the latest drivers or the older drivers. We have created a slipstreamed install CD using nLite that has the AHCI drivers integrated, which installs fine. However, we have identified a number of issues with the system that I believe are side effects of using nLite for the slipstreaming, and I am attempting to verify that. I have researched this issue and found a few examples of others having the same problem, but no solution. The USB floppy is a Lacie-branded floppy; connecting it to a working XP workstation shows it to be the Y-E Data USB floppy drive that is supposedly 100% compatible with XP per MS KB 916196.

    Read the article

  • How do I set up Grub properly to quad-boot Windows, Mac OS X, Linux, and FreeBSD?

    - by Joe
    Grub has gone completely insane on me. My quad-boot system was working great up until I upgraded Ubuntu to 12.04. Since Ubuntu overwrote the Grub stuff I had to repair it with my Mac OS X and FreeBSD entries. After this, trying to boot Mac OS X gave me the error "couldn't open file" and FreeBSD gave the error "no such partition". Windows and Ubuntu worked fine. So I tried repairing again because I figured something must've gone wrong in the install process. Then only Ubuntu would boot; trying to boot Windows would give me the error "no argument specified". I tried repairing Grub once again, since I seemed to be getting different results each time. This time, Ubuntu no longer appeared in the Grub menu, and the errors for the other OSes were the same. So I booted into the Ubuntu 12.04 live CD and ran Boot-Repair with recommended settings. Now Grub is completely skipped and Windows boots up. I have absolutely no idea what is going on or why I get different results every time I reinstall Grub. Here is how my partitions are set up:

        sda1 - Storage drive
        sdb1 - Windows
        sdb2 - Mac OS X
        sdb3 - FreeBSD
        sdb4 - Extended
        sdb5 - Ubuntu
        sdb6 - Shared storage
        sdb7 - Shared storage

    Here's my grub.cfg file: grub.cfg
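    For reference, with a layout like this the non-Linux entries are usually chainloaded from /etc/grub.d/40_custom, after which update-grub regenerates grub.cfg. A hedged sketch -- the partition numbers follow the list above, but the drive mapping (sdb as GRUB's hd1) and the bootability of each chainloaded slice are assumptions that depend on the actual setup:

        # /etc/grub.d/40_custom (assuming sdb appears to GRUB as hd1)
        menuentry "Windows" {
            set root='(hd1,msdos1)'
            chainloader +1
        }
        menuentry "FreeBSD" {
            set root='(hd1,msdos3)'
            chainloader +1
        }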

    Read the article

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users that connect to their network accounts to store their data on a NAS drive. I want the users to connect to the Lion server, as this allows for better policies and management for me, and for their AFP share to be located on a NAS drive. I've looked into home directories and network logins; however, I don't want the users to connect into a different login environment, just an authentication against their provided account on the Lion server, with Finder taking them to their own storage area located on the NAS drive. Currently I am using FreeNAS for both authentication and storage; however, there are getting to be far too many people to manage each AFP share and account for, plus just using FreeNAS is extremely limiting for expansion, and if something goes wrong with one entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP, using the Lion server as an LDAP server for FreeNAS to authenticate against; however, I have had issues with this and thought a different approach could be better from the other side of the situation: providing the account with somewhere to store data, rather than having the AFP share authenticate against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it is recognised as a local drive and can be used for network accounts?

    Read the article

  • How do I use VS2010 One-Click Publish (MsDeploy) to deploy remotely from the command line?

    - by David
    On the remote web server I have installed the remote service at http://x.x.x.x/MsDeployAgentService. If I use the Web Application Project's Publish command in VS2010, I can successfully publish to this remote web server and update a specific IIS website. What I want to do now is execute this capability from the command line. I am guessing it is two steps.

    First, build the web application project using the relevant build configuration:

        msbuild "C:\MyApplication\MyWebApplication.csproj" /T:Package /P:Configuration=Release

    Then issue the MsDeploy command to have it publish/sync with the remote IIS server:

        msdeploy -verb:sync -source:package="C:\MyApplication\obj\Release\Package\MyWebApplication.zip" -dest:contentPath="My Production Website",computerName=http://x.x.x.x/MsDeployAgentService,username=adminuser,password=adminpassword

    Unfortunately I get the error:

        Error: (10/05/2010 3:52:02 PM) An error occurred when the request was processed on the remote computer.
        Error: Source (sitemanifest) and destination (contentPath) are not compatible for the given operation.
        Error count: 1.

    I have tried a number of different combinations for the destination provider, but no joy :( Has anyone managed to replicate VS2010 Web Application Project "One Click" Publish from the command line?
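    One avenue worth noting: the /T:Package target also emits a MyWebApplication.deploy.cmd next to the .zip, which wraps msdeploy with a destination that matches the package's manifest (a package source normally pairs with -dest:auto rather than -dest:contentPath, which is what the error is complaining about). A hedged sketch of invoking the generated script -- flag names as I recall them from the VS2010 tooling; the readme generated alongside the package documents them:

        REM generated by the Package target, next to the .zip
        MyWebApplication.deploy.cmd /Y ^
            /M:http://x.x.x.x/MsDeployAgentService ^
            /U:adminuser /P:adminpassword

        REM /T instead of /Y performs a trial (whatif) run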

    Read the article

  • mercurial .hgrc notify hook

    - by Eeyore
    Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use Gmail to send an e-mail after each push and/or commit.

    .hgrc:

        [paths]
        default = ssh://www.domain.com/repo/hg

        [ui]
        username = intern <[email protected]>
        ssh="C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"

        [extensions]
        hgext.notify =

        [hooks]
        changegroup.notify = python:hgext.notify.hook
        incoming.notify = python:hgext.notify.hook

        [email]
        from = [email protected]

        [smtp]
        host = smtp.gmail.com
        username = [email protected]
        password = sure
        port = 587
        tls = true

        [web]
        baseurl = http://dev/...

        [notify]
        sources = serve push pull bundle
        test = False
        config = /path/to/subscription/file
        template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
        maxdiff = 300

    Error:

        Incoming comand failed for P/project.
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg! , error code: -1
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg!

    Read the article

  • git push heroku master gives error ssh: connect to host heroku.com port 22: Connection refused

    - by user1476508
    I'm trying to run the heroku-django tutorial (using Ubuntu 12.04) and it seems for some reason I can't push to Heroku. Here is what happens:

        yeinhorn@ubuntu:~/hellodjango$ git init
        Reinitialized existing Git repository in /home/yeinhorn/hellodjango/.git/
        yeinhorn@ubuntu:~/hellodjango$ git add .
        yeinhorn@ubuntu:~/hellodjango$ git commit -m "my first commit"
        On branch master
        nothing to commit (working directory clean)
        yeinhorn@ubuntu:~/hellodjango$ heroku create
        Creating high-dusk-6308... done, stack is cedar
        http://high-dusk-6308.herokuapp.com/ | [email protected]:high-dusk-6308.git
        ! New default stack: Cedar. To use Bamboo, run heroku create -s bamboo.
        yeinhorn@ubuntu:~/hellodjango$ git remote -v
        heroku [email protected]:blazing-dusk-8587.git (fetch)
        heroku [email protected]:blazing-dusk-8587.git (push)
        yeinhorn@ubuntu:~/hellodjango$ git push heroku master
        ssh: connect to host heroku.com port 22: Connection refused
        fatal: The remote end hung up unexpectedly
        yeinhorn@ubuntu:~/hellodjango$ git push -f heroku
        ssh: connect to host heroku.com port 22: Connection refused
        fatal: The remote end hung up unexpectedly

    Also, when I run $ telnet heroku.com 22 I get:

        Trying 50.19.85.132...
        Trying 50.19.85.154...
        Trying 50.19.85.156...
        telnet: Unable to connect to remote host: Connection refused

    Any ideas?
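    Two things stand out in the transcript: the heroku remote still points at an older app (blazing-dusk-8587) rather than the one just created (high-dusk-6308), and the telnet test shows outbound port 22 being refused, which points at a local firewall/proxy rather than Heroku itself. A hedged sketch of checking both:

        # repoint the remote at the app that was just created
        git remote set-url heroku [email protected]:high-dusk-6308.git

        # test the SSH leg on its own; -v shows exactly where the connection dies
        ssh -vT [email protected]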

    Read the article

  • How to Select Items in Dropdown in Selenium

    - by Marcus Gladir
    Firstly, I have been trying to get the dropdown from this web page: http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl

    This is the code I have:

        import urllib2
        from bs4 import BeautifulSoup
        import re
        from pprint import pprint
        import sys
        from selenium import common
        from selenium import webdriver
        import selenium.webdriver.support.ui as ui
        from boto.s3.key import Key
        import requests

        url = 'http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl'
        element_xpath = '//*[@id="Component1"]'
        driver = webdriver.PhantomJS()
        driver.get(url)
        element = driver.find_element_by_xpath(element_xpath)
        element_xpath = '/option[@value="02"]'
        all_options = element.find_elements_by_tag_name("option")
        for option in all_options:
            print("Value is: %s" % option.get_attribute("value"))
            option.click()
        source = driver.page_source.encode('utf-8', 'ignore')
        driver.quit()
        source = str(source)
        soup = BeautifulSoup(source, 'html.parser')
        print soup

    What prints out is this:

        Value is: XX
        Traceback (most recent call last):
          File "../../../../test.py", line 58, in <module>
            main()
          File "../../../../test.py", line 46, in main
            option.click()
          File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 54, in click
            self._execute(Command.CLICK_ELEMENT)
          File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 228, in _execute
            return self._parent.execute(command, params)
          File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 165, in execute
            self.error_handler.check_response(response)
          File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 158, in check_response
            raise exception_class(message, screen, stacktrace)
        selenium.common.exceptions.ElementNotVisibleException: Message: u'{"errorMessage":"Element is not currently visible and may not be manipulated","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"81","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:51413","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\\"sessionId\\": \\"30e4fd50-f0e4-11e3-8685-6983e831d856\\", \\"id\\": \\":wdc:1402434863875\\"}","url":"/click","urlParsed":{"anchor":"","query":"","file":"click","directory":"/","path":"/click","relative":"/click","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/click","queryKey":{},"chunks":["click"]},"urlOriginal":"/session/30e4fd50-f0e4-11e3-8685-6983e831d856/element/%3Awdc%3A1402434863875/click"}}' ; Screenshot: available via screen

    And the weirdest, most infuriating bit of it all is that sometimes it actually all works out. I have no clue what's going on here.
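    For what it's worth, <select> options usually shouldn't be clicked directly -- headless drivers in particular often report them as not visible -- and Selenium ships a Select helper for exactly this. A minimal sketch against the same element, reusing the driver from the code above (the value "02" is just the one the question mentions):

        from selenium.webdriver.support.ui import Select

        element = driver.find_element_by_xpath('//*[@id="Component1"]')
        select = Select(element)

        # choose an option without clicking the <option> element directly
        select.select_by_value("02")

        # or walk every option by its value attribute
        for value in [o.get_attribute("value") for o in select.options]:
            select.select_by_value(value)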

    Read the article

  • sp_addlinkedserver on SQL Server 2005 giving problems

    - by Jit
    I am trying to create a linked server to a remote database (both servers are SQL Server 2005). I am able to connect to that remote server from my SQL Server Management Studio. I used the following syntax to create it:

        EXEC sp_addlinkedserver
            @server = N'LINKSQL2005',
            @srvproduct = N'',
            @provider = N'SQLNCLI',
            @provstr = N'SERVER=IP address of remote server;User ID=XXXXXX;Password=***'

    I have provided the IP address, user name and password in the above syntax. The linked server is getting created, but when I try to execute a query on it I get the error below.

    Query used:

        select * from LINKSQL2005.<DBName>.dbo.<TableName>

    Error:

        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Communication link failure".
        Msg 10054, Level 16, State 1, Line 0
        TCP Provider: An existing connection was forcibly closed by the remote host.
        Msg 18456, Level 14, State 1, Line 0
        Login failed for user 'sa'.
        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Invalid connection string attribute".

    Please help me figure out where I am making a mistake.
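    For comparison, the more common pattern for a SQL Server-to-SQL Server link is to pass the address via @datasrc and map the login with sp_addlinkedsrvlogin, rather than embedding credentials in @provstr -- a hedged sketch with placeholder names:

        -- create the link, pointing the provider at the remote address
        EXEC sp_addlinkedserver
            @server = N'LINKSQL2005',
            @srvproduct = N'',
            @provider = N'SQLNCLI',
            @datasrc = N'192.0.2.10';  -- placeholder IP

        -- map all local logins to a specific remote SQL login
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'LINKSQL2005',
            @useself = 'false',
            @locallogin = NULL,        -- NULL applies to all local logins
            @rmtuser = N'XXXXXX',
            @rmtpassword = N'***';

    The "Login failed for user 'sa'" line in the error suggests the remote end is receiving a different login than intended, which the explicit mapping above makes visible.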

    Read the article

  • asp.net doesn't render Sys.WebForms.PageRequestManager._initialize code

    - by ajitatif
    I'm using the ASP.NET 2.0 AJAX Extensions on a web site. As always, everything is fine locally, but the remote web site does not use AJAX calls. My local server has the ASP.NET AJAX Extensions installed but the remote one doesn't. I know that I should be able to use the AJAX Extensions without installing them, so I added the extensions' .dll among the web site's references, but still no luck. After further investigation, I found out that the local and remote pages have exactly the same HTML rendered, except that the local (working) one has these lines:

        //<![CDATA[
        Sys.WebForms.PageRequestManager._initialize('ctl00$ContentPlaceHolder1$ScriptManager1', document.getElementById('aspnetForm'));
        Sys.WebForms.PageRequestManager.getInstance()._updateControls(['tctl00$ContentPlaceHolder1$updReportArgs','tctl00$ContentPlaceHolder1$updReport'], ['ctl00$ContentPlaceHolder1$chkTumu','ctl00$ContentPlaceHolder1$btnGetir'], [], 90);
        //]]>

    Obviously, these are the lines of code that make callbacks possible. The question is, why doesn't ASP.NET render these lines? What could be missing? By the way, the ScriptResource.axd and WebResource.axd requests don't give a 404 or anything; I can see their JS code via Firebug. And one more thing: I'm unsure if it is related or not, but there are client-side ASP.NET validators on the page whose JS code is not rendered either. Again, those work fine locally. For further investigation you can see the remote site here: http://www.ajitatif.com/subdomains/nazer/Raporlar/danismanbasarim.aspx
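    Since the extensions aren't installed on the remote host, one thing worth double-checking is the AJAX registrations in web.config; bin-deploying System.Web.Extensions.dll only helps if the module/handler entries are present too. A hedged sketch of the usual entries -- the version and public key token are my recollection of the 1.0 release, so copy them from a working AJAX-enabled web.config rather than from here:

        <system.web>
          <httpModules>
            <add name="ScriptModule"
                 type="System.Web.Handlers.ScriptModule, System.Web.Extensions,
                       Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </httpModules>
          <httpHandlers>
            <add verb="GET,HEAD" path="ScriptResource.axd" validate="false"
                 type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions,
                       Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
          </httpHandlers>
        </system.web>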

    Read the article
