Search Results

Search found 5416 results on 217 pages for 'storage'.

  • Windows XP Does Not Follow CNAME Shares

    - by user49349
    I am supporting a mix of Windows XP Pro and Windows 7 desktops in my Active Directory network, and I am having an odd issue with XP and CNAME records. Say I have a DNS A record for a server named STORAGE.company.local and give it a CNAME of NAS.company.local. On both an XP and a Windows 7 computer, pinging NAS resolves to STORAGE.company.local. If I am on Windows 7 and enter \\STORAGE or \\NAS in the Run dialog, Explorer opens that server. If I do the same on XP, \\STORAGE works but \\NAS does not; it just times out. Is there some setting buried in XP to make this work properly?
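    A common fix for this class of problem sits on the server rather than the XP client: when a file server is reached through a DNS alias, strict name checking has to be relaxed so the Server service will answer SMB requests addressed to the CNAME. A minimal sketch, assuming the share lives on a Windows server reachable as STORAGE:

      rem Run on the server hosting the share, then restart the Server service
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f
      net stop server /y && net start server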

    Read the article

  • usb_modeswitch not switching

    - by deniz
    After I upgraded from kernel 2.6.18 to 3.5.3, usb_modeswitch stopped working for me. Although lsusb shows my USB modem, usb_modeswitch does not switch it; instead of switching the modem it says "No devices in default mode found. Nothing to do. Bye." Can you offer a solution? My system information and the output of lsusb, dmesg, usb-devices and usb_modeswitch are below.

    Kernel: Linux 3.5.3
    usb_modeswitch: 1.2.3-1
    usb_modeswitch-data: 20120120-1
    usbutils: 006-1
    libusb: 1.0.8-0.1

      root@localhost$ lsusb
      Bus 002 Device 029: ID 12d1:1446 Huawei Technologies Co., Ltd.

      root@localhost$ dmesg
      [70112.477080] usb 2-1.4: new high-speed USB device number 30 using ehci_hcd
      [70112.567757] scsi49 : usb-storage 2-1.4:1.0
      [70112.567842] scsi50 : usb-storage 2-1.4:1.1
      [70113.571433] scsi 49:0:0:0: CD-ROM HUAWEI Mass Storage 2.31 PQ: 0 ANSI: 2
      [70113.572304] scsi 50:0:0:0: Direct-Access HUAWEI TF CARD Storage PQ: 0 ANSI: 2
      [70113.574169] sr0: scsi-1 drive
      [70113.574223] sr 49:0:0:0: Attached scsi CD-ROM sr0
      [70113.574250] sr 49:0:0:0: Attached scsi generic sg1 type 5
      [70113.574350] sd 50:0:0:0: Attached scsi generic sg2 type 0
      [70113.577173] sd 50:0:0:0: [sdb] Attached SCSI removable disk

      root@localhost$ usb-devices
      T: Bus=02 Lev=02 Prnt=02 Port=03 Cnt=01 Dev#= 30 Spd=480 MxCh= 0
      D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=12d1 ProdID=1446 Rev=00.00
      S: Manufacturer=Huawei Technologies
      S: Product=HUAWEI Mobile
      C: #Ifs= 2 Cfg#= 1 Atr=c0 MxPwr=500mA
      I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage
      I: If#= 1 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage

      root@localhost$ cat /etc/usb_modeswitch.d/12d1\:1446
      # Huawei, newer modems
      TargetVendor= 0x12d1
      TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
      MessageContent="55534243123456780000000000000011062000000100000000000000000000"

      root@localhost$ usb_modeswitch -c /etc/usb_modeswitch.d/12d1:1446 -v 12d1 -p 1446 -W
      * usb_modeswitch: handle USB devices with multiple modes
      * Version 1.2.3 (C) Josua Dietze 2012
      * Based on libusb0 (0.1.12 and above)
      ! PLEASE REPORT NEW CONFIGURATIONS !
      DefaultVendor= 0x12d1
      DefaultProduct= 0x1446
      TargetVendor= 0x12d1
      TargetProduct= not set
      TargetClass= not set
      TargetProductList="1001,1406,140b,140c,1412,141b,1433,1436,14ac,1506"
      DetachStorageOnly=0
      HuaweiMode=0
      SierraMode=0
      SonyMode=0
      QisdaMode=0
      GCTMode=0
      KobilMode=0
      SequansMode=0
      MobileActionMode=0
      CiscoMode=0
      MessageEndpoint= not set
      MessageContent="55534243123456780000000000000011062000000100000000000000000000"
      NeedResponse=0
      ResponseEndpoint= not set
      InquireDevice enabled (default)
      Success check disabled
      System integration mode disabled
      usb_set_debug: Setting debugging level to 15 (on)
      usb_os_find_busses: Skipping non bus directory devices
      usb_os_find_busses: Skipping non bus directory drivers
      usb_os_find_busses: Skipping non bus directory uevent
      usb_os_find_busses: Skipping non bus directory drivers_probe
      usb_os_find_busses: Skipping non bus directory drivers_autoprobe
      Looking for target devices ...
      No devices in target mode or class found
      Looking for default devices ...
      No devices in default mode found. Nothing to do. Bye.

    Thanks in advance.
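    One avenue worth trying, sketched below rather than verified for this exact setup: the usb-storage driver has already claimed both interfaces of the device (see the usb-devices output above), and on some systems usb_modeswitch cannot act on the device until that driver is unbound. The interface names 2-1.4:1.0 and 2-1.4:1.1 are taken from the dmesg output above.

      # Unbind usb-storage from the modem's two interfaces, then retry the switch
      echo '2-1.4:1.0' > /sys/bus/usb/drivers/usb-storage/unbind
      echo '2-1.4:1.1' > /sys/bus/usb/drivers/usb-storage/unbind
      usb_modeswitch -v 12d1 -p 1446 -W \
          -M "55534243123456780000000000000011062000000100000000000000000000"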

    Read the article

  • LUN access issue in an ESX4 cluster

    - by rmustafa
    Hi, I've created volumes on an EqualLogic PS 6000 XV (two members in one pool) and verified that those volumes are easily detected by the iSCSI initiator software in Windows. The problem is with ESX: the hosts cannot see the assigned disks. Here is what I've done:

    1. Created a cluster with HA and DRS enabled.
    2. Added three ESX4 hosts.
    3. Added and configured a VMkernel port on all three hosts, with vMotion and FT enabled on the same adapter.
    4. Went to the iSCSI storage adapter properties and enabled iSCSI.
    5. Tried to discover the available storage by adding the controller IP under dynamic discovery, but the assigned storage does not appear.

    Note: the same volume is accessible from Windows, which suggests there is no issue on the storage side, am I right? Note: I want to mount the same volume on all three ESX hosts. Please suggest. Thanks & Regards, Rashid Mustafa
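    Two hedged first checks for this kind of discovery failure (assuming the EqualLogic group IP is correct and each volume's access list permits the ESX hosts' IQNs or IPs):

      vmkping 192.168.1.50     # hypothetical group IP: verifies the VMkernel port can reach the array
      esxcfg-rescan vmhba33    # force a rescan of the software iSCSI adapter (adapter name varies)

    If vmkping fails, the problem is the VMkernel network path, not the storage; if it succeeds but no LUNs appear, the EqualLogic volume access lists are the usual suspect, since the Windows initiators and the ESX initiators present different IQNs.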

    Read the article

  • Zero-channel RAID for High Performance MySQL Server (IBM ServeRAID 8k) : Any Experience/Recommendation?

    - by prs563
    We are getting an IBM rack-mount server that has the IBM ServeRAID 8k storage controller with zero-channel RAID and 256 MB battery-backed cache. It supports RAID 10, which we need for our high-performance MySQL server, which will have 4 x 15,000 RPM 300 GB SAS HDDs. This is mission-critical and we want as much bandwidth and performance as possible. Is this a good card or should we replace it with another IBM RAID card?

    The IBM ServeRAID 8k SAS Controller option provides 256 MB of battery-backed 533 MHz DDR2 standard power memory in a fixed mounting arrangement. The device attaches directly to the IBM planar, which can provide full RAID capability.
    Manufacturer: IBM
    Manufacturer Part #: 25R8064
    Cost Central Item #: 10025907
    Product Description: IBM ServeRAID 8k SAS - Storage controller (zero-channel RAID) - RAID 0, 1, 5, 6, 10, 1E
    Device Type: Storage controller (zero-channel RAID) - plug-in module
    Buffer Size: 256 MB
    Supported Devices: Disk array (RAID)
    Max Storage Devices Qty: 8
    RAID Level: RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 1E
    Manufacturer Warranty: 1 year warranty

    Read the article

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. The default root directory for the Apache web server is pointed at an external hard drive. In my httpd.conf, here is what I have:

      DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

      <Directory />
          Options FollowSymLinks
          AllowOverride All
          Order deny,allow
          Allow from all
      </Directory>

    And then beneath that:

      <Directory "/Storage/Sites">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride All
          Order allow,deny
          Allow from All
      </Directory>

    At the end of this file, I have commented out the userdir include conf file:

      # Include /private/etc/apache2/extra/httpd-userdir.conf

    And uncommented the virtual hosts conf file:

      Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

      <VirtualHost *:80>
          DocumentRoot "/Storage/Sites/mysite"
          ServerName mysite.dev
      </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is that despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
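    Two quick diagnostics that usually pin down this kind of vhost fall-through; nothing here is specific to the setup above beyond the OS X cache-flush command:

      sudo apachectl configtest   # syntax check; warns about NameVirtualHost/vhost mismatches
      sudo apachectl -S           # dump the parsed vhost table and the default vhost for *:80
      dscacheutil -flushcache     # flush the local DNS cache after editing /etc/hosts (10.6)

    If mysite.dev does not appear in the -S output, the vhosts file never got loaded; if it appears but requests still land in /Storage/Sites, the Host header sent by the browser is not matching the ServerName.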

    Read the article

  • Suggestions for splitting server roles amongst Hyper-V virtual servers / RAID6 or RAID10? / AppAssure

    - by Anon
    We have 2 Hyper-V hosts at present running 1 virtual server that was converted from a physical box running all roles. My plan is to split the roles over various virtual machines, upgrading to the latest software versions as I go, and to use the backup server as a standby in case the main server fails. AppAssure backup software has a feature called Virtual Standby, so the VHDs can be ready to be fired up on the backup server if necessary. Off-site backups will be done via external USB drive for now. I'm seeking some input/suggestions on how I'm planning to split the roles out amongst the virtual servers. I'm also curious how to set up the storage on the servers. We do not have any NAS or SAN devices, nor any budget for them. What would the best RAID level be to use? I'm thinking either RAID6 (which is currently used), although I'm concerned about the write speeds, or RAID10, where my worry is that I can only lose 1 drive (from the same mirror) as opposed to any 2 with RAID6. I realise I have a hot spare for this, but what if a further drive fails during a rebuild? Is the write penalty of RAID6 worth the extra reliability over RAID10? Or will it be too slow with all the roles I am planning, making RAID10 my only real option? The reason for the needed redundancy is that I am the only technician and I'm not always on-site. Options I've considered (capacity math in the sketch below):

    1) 5 drives in a RAID6 set: 200 GB for the host OS, the rest for VM storage; 1 drive as hot spare - this is how it is currently set up
    2) 4 drives in a RAID10 set: 200 GB for the host OS, the rest for VM storage; 2 drives as hot spares
    3) 4 drives in a RAID10 set for VM storage, 2 drives in a RAID1 set for the host OS; no hot spare - while this is probably the best option with the number of drives I have, I don't like the idea of having no hot spare
    4) 3 drives in a RAID6 set for VM storage, 2 drives in a RAID1 set for the host OS; 1 drive as hot spare

    All options give us enough storage capacity for our files, etc. We don't have any budget for extra drives or extra hot-swap HD chassis for the servers. We have about 70 clients and about 150 users.

    MAIN SERVER
    Intel Xeon 5520 @ 2.27 GHz (2 processors)
    16 GB RAM
    6 x 1TB Seagate Barracuda ES.2 Enterprise SATA drives
    Intel SRCSATAWB RAID controller
    Virtual machine workload using Hyper-V on Windows Server 2008 R2:
    - DC01 - Active Directory Domain Controller / DNS server / Global Catalog - 1 GB RAM
    - DC02 - Active Directory Domain Controller / DNS server / Global Catalog - 1 GB RAM
    - Member Server - DHCP server, file server, print server - 1 GB RAM
    - SCCM Member Server - 4 GB RAM
    - Third-Party Software Member Server - A/V server, ticketing software, etc. - 4 GB RAM
    - Exchange 2007 - 4 GB RAM - however we are probably migrating to a hosted solution, therefore freeing up resources

    BACKUP SERVER
    Intel Xeon E5410 @ 2.33 GHz (2 processors)
    16 GB RAM
    6 x 2TB WD RE4 SATA drives
    Intel SRCSASRB RAID controller
    Virtual machine workload using Hyper-V on Windows Server 2008 R2:
    - AppAssure backup software - 8 GB RAM
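    For reference, a rough back-of-the-envelope comparison of the options above, assuming the main server's 1TB drives and the usual RAID write penalties (6 back-end I/Os per random write for RAID6, 2 for RAID10):

      Option 1: RAID6, 5 drives  -> (5-2) x 1TB = 3TB usable; ~6 disk I/Os per random write
      Option 2: RAID10, 4 drives -> (4/2) x 1TB = 2TB usable; ~2 disk I/Os per random write
      Option 3: RAID10, 4 drives -> 2TB usable for VMs, plus 1TB mirrored for the host OS
      Option 4: RAID6, 3 drives  -> (3-2) x 1TB = 1TB usable; note many controllers require at least 4 drives for RAID6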

    Read the article

  • Exchange 2007 | Mailbox DB Size 180GB

    - by rihatum
    Hi All, I have an Exchange 2007 SP1 server running on Windows 2008, with 6 hard drives in RAID-1 pairs: OS, DB and logs each on separate RAID-1 disks. The size of the mailbox database is 183 GB and increasing. We only have a First Storage Group and a Second Storage Group, and there is no more space on the server to install new physical disks and create another storage group.

    Q - Can I resize the RAID-1 partition where the DB is?
    Q - Any other suggestions as to how I can decrease the mailbox DB size?

    Will be grateful for your suggestions on this. Kind Regards
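    One hedged avenue for the second question: an Exchange database file never shrinks on its own, but if mailboxes have been moved or deleted it accumulates white space that an offline defragmentation can return to the file system. A sketch, with a hypothetical .edb path; the database must be dismounted first, and eseutil needs free space of roughly 110% of the database size for its temporary copy:

      rem Report the free (white) space inside the database, then offline-defrag it
      eseutil /ms "D:\ExchDB\Mailbox Database.edb"
      eseutil /d "D:\ExchDB\Mailbox Database.edb"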

    Read the article

  • Suggestions on providing HA access to an external (fibre) RAID subsystem

    - by user145198
    We are looking at upgrading our storage capacity with an external RAID subsystem that has redundant (2) fibre controllers, each controller having 4 x 8 Gbps fibre ports. I would like to make access to this storage system highly available via HA Linux. Ideally I would connect 2 fibre ports from each controller into each Linux server, and then export either NFS or iSCSI via a 10 GbE interface. I have seen plenty of references to DRBD; however, all of those references use block storage that is solely attached to each machine, rather than a shared block storage device, so I am unsure whether DRBD could (or should) be used in this case. Ideas?

    Read the article

  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with bad RAM. After I diagnosed it and removed the offending stick of RAM, the ZFS pool in the machine was trying to access drives using incorrect device names. I simply exported the pool and re-imported it to correct this. However, I am now getting this error, and the pool Storage no longer automatically mounts:

      sqeaky@sqeaky-media-server:/$ sudo zpool status
      no pools available

    A regular import says it's corrupt:

      sqeaky@sqeaky-media-server:/$ sudo zpool import
        pool: Storage
          id: 13247750448079582452
       state: UNAVAIL
      status: The pool is formatted using an older on-disk version.
      action: The pool cannot be imported due to damaged devices or data.
      config:
              Storage                 UNAVAIL  insufficient replicas
                raidz1                UNAVAIL  corrupted data
                  805066522130738790  ONLINE
                  sdd3                ONLINE
                  sda3                ONLINE
                  sdc                 ONLINE

    A specific import says the vdev configuration is invalid:

      sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
      cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool: /dev/sda3, /dev/sdd3, /dev/sdc and /dev/sdb. I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on.

    For reference, this was set up this way because at the time this machine/pool was built it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for the swap.

    Any clue on how to recover this ZFS pool?
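    A hedged starting point for the zdb investigation mentioned above: the stray number is most likely the GUID of the vdev whose device node went missing (probably /dev/sdb), and the labels on each disk record both the GUIDs and the last-known device paths. A sketch:

      # Print the ZFS labels on each candidate device; compare the guid and
      # path fields against the 805066522130738790 entry shown by zpool import
      zdb -l /dev/sdb
      zdb -l /dev/sdc
      # Then retry the import while letting ZFS scan stable device names
      zpool import -d /dev/disk/by-id Storage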

    Read the article

  • Restoring file properties but not the complete files, from backup

    - by Jon
    While copying data from my old storage on a Linux computer to the new (Linux-based) NAS, I accidentally failed to bring the file properties (most importantly, the modification dates) along to the new location. I have also continued to use and modify the files at the new location, and hence cannot just copy everything over again. What I would like to do is a diff between the files in the old vs. the new storage and, for those that are identical, restore the properties from the Linux storage to the NAS storage files. Is there a clever way, such as a script or a tool, to do this? I could run it either on the Linux box or, in the worst case, from a remote Windows computer. Grateful for any suggestions. /Jon
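    A minimal shell sketch of the Linux-side approach, with hypothetical paths; run it on a box that sees both the old storage and the NAS mount. For every file that exists in both places with identical content, it copies the modification time across and leaves differing files untouched:

      #!/bin/sh
      OLD=/mnt/old-storage    # hypothetical mount of the old Linux storage
      NEW=/mnt/nas/share      # hypothetical mount of the NAS share
      cd "$OLD" || exit 1
      find . -type f | while IFS= read -r f; do
          # only touch files that exist on the NAS and are byte-identical
          if [ -f "$NEW/$f" ] && cmp -s "$f" "$NEW/$f"; then
              touch -r "$f" "$NEW/$f"   # copy the modify date across
          fi
      done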

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links: Offline Web Applications Firefox Offline Web Applications Safari Offline Web Applications Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9). Why Build an HTML5 Offline Web Application? The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane):   Admittedly, it is becoming increasingly difficult to find locations where you can’t get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, Image) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache and these files are cached until the manifest is changed. Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically. Creating the Manifest File The HTML5 Offline Cache uses a manifest file to determine the files that get cached. 
    Here’s what the manifest file looks like for the JavaScript Reference application:

      CACHE MANIFEST
      # v30
      Default.aspx

      # Standard Script Libraries
      Scripts/jquery-1.4.4.min.js
      Scripts/jquery-ui-1.8.7.custom.min.js
      Scripts/jquery.tmpl.min.js
      Scripts/json2.js

      # App Scripts
      App_Scripts/combine.js
      App_Scripts/combine.debug.js

      # Content (CSS & images)
      Content/default.css
      Content/logo.png
      Content/ui-lightness/jquery-ui-1.8.7.custom.css
      Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png
      Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png
      Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png
      Content/ui-lightness/images/ui-icons_222222_256x240.png
      Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png
      Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png
      Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png
      Content/ui-lightness/images/ui-icons_ffffff_256x240.png
      Content/ui-lightness/images/ui-icons_ef8c08_256x240.png
      Content/browsers/c8.png
      Content/browsers/es3.png
      Content/browsers/es5.png
      Content/browsers/ff3_6.png
      Content/browsers/ie8.png
      Content/browsers/ie9.png
      Content/browsers/sf5.png

      NETWORK:
      Services/EntryService.svc
      http://superexpert.com/resources/JavaScriptReference/

    A cache manifest always starts with the line of text CACHE MANIFEST. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, the jQuery library, the jQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#); I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section in the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference.

    There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest, then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file), then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment; if you make a change to a file, then you need to modify the comment to be # v31 in order for the new file to be downloaded.

    When Are Updated Files Downloaded? When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background.
    This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won’t appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache, and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code; otherwise, debugging the application can become a very confusing experience.

    Offline Web Applications versus Local Storage. Be careful not to confuse the HTML5 Offline Web Application feature and the HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently; think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently; think of Offline Web Applications as a super cache.

    Creating a Manifest File in an ASP.NET Application. A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project:

    Manifest.txt – This text file contains the actual manifest.
    Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest.

    Here’s the code for the generic handler:

      using System.Web;

      namespace JavaScriptReference {
          public class Manifest : IHttpHandler {
              public void ProcessRequest(HttpContext context) {
                  context.Response.ContentType = "text/cache-manifest";
                  context.Response.WriteFile(context.Server.MapPath("Manifest.txt"));
              }
              public bool IsReusable {
                  get { return false; }
              }
          }
      }

    The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this:

      <html manifest="Manifest.ashx">

    Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute; every other modern browser will download the manifest when the Default.aspx page is requested.

    Seeing the Offline Web Application in Action. The experience of using an HTML5 Web Application is different in different browsers. When you first open the JavaScript Reference application with Firefox, you get a warning that offers you the choice of whether or not you want to use the application offline. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice; Chrome and Safari will create an offline cache automatically. If you click the Allow button, then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar and clicking the List Cache Entries link. The Offline Web Application experience is different in the case of Google Chrome.
    You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache. Notice that you can view the status of the Application Cache; in the screenshot described above, the status is UNCACHED, which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard:

    UNCACHED – The Application Cache has not been initialized.
    IDLE – The Application Cache is not currently being updated.
    CHECKING – The Application Cache is being fetched and checked for updates.
    DOWNLOADING – The files in the Application Cache are being updated.
    UPDATEREADY – There is a new version of the Application.
    OBSOLETE – The contents of the Application Cache are obsolete.

    Summary. In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user’s computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.

    Read the article

  • Partner Webcast – More out of Database Appliance with DB Options - 13 September 2012

    - by Thanos
    The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. But that is not all: with support for all Oracle Database options, the Oracle Database Appliance can be the ideal solution for many use cases.

    Feature: Simplifies deployment, maintenance, and support of high-availability database workloads. Benefit: Saves significant time and effort throughout the database administration lifecycle.
    Feature: An engineered system of software, server, storage, and networking. Benefit: High availability for a wide range of custom and packaged OLTP and data warehousing application databases.
    Feature: Simple one-button installation, full-stack integrated patching and diagnostics. Benefit: Reduces planned and unplanned downtime by automatically monitoring and logging service requests with Oracle Support.
    Feature: Built using the world's #1 database. Benefit: Protects databases from server and storage failures with Oracle Real Application Clusters and Automatic Storage Management.
    Feature: Unique Pay-As-You-Grow software licensing. Benefit: Reduces cost with the flexibility to adjust your software spend as your business grows, without the need for any hardware upgrades.

    Discover the Oracle Database Appliance value proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. This webcast is repeated once again for your benefit.

    Agenda:
    - Oracle Database & Engineered Systems Innovation
    - What's the Oracle Database Appliance?
    - Oracle Database Appliance Value Proposition
    - Oracle Database Appliance with Database Options
    - Oracle Database Appliance Partners Business

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now!

    The Oracle Database Appliance is available for purchase at the Oracle Store under Engineered Systems. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle technologies as well as upcoming partner webcasts and events.

    Read the article

  • CloudBerry Online Backup 1.5 for Windows Home Server

    - by The Geek
    Overview. CloudBerry Online Backup version 1.5 is a front-end application for Amazon S3 storage for backing up your Windows Home Server data. It makes backing up your essential data to Amazon S3 an easy process in the event disaster strikes.

    Installation. You install the CloudBerry add-in as you do any add-in for Windows Home Server. On a PC on your network, browse to the shared folders on your server, open the Add-Ins folder, copy over WHS_CloudBerryOnlineBackupSetup_v1.5.0.81S3o.msi (link below), then close out of the folder. Next launch the Windows Home Server Console, click Settings, then Add-Ins. Click on the Available tab and click the Install button. It installs very quickly, and when you get the Installation Succeeded dialog, click OK. You will lose connection through the Console; just click OK, then reconnect. After reconnecting, you'll see CloudBerry Backup has been installed, and you can begin using it. You can set up a backup plan right away or find out what's new with version 1.5.

    Amazon S3 Account. If you don't already have an Amazon S3 account, you'll be prompted to create a new one. Click on the "Create an account" hyperlink, which takes you to the Amazon S3 page where you can sign up. After reviewing the functionality of Amazon S3, click on the Sign Up for Amazon S3 button. Enter your contact information and accept the Amazon Web Services Customer Agreement. You're then shown their pricing for storage plans. The amount of storage space you use will depend on your needs; it's relatively cheap for smaller amounts of data, but keep in mind that the more data you store and download, the more S3 is going to cost. Note: Amazon S3 is introducing Reduced Redundancy Storage, which will lower the cost of the data stored on S3; CloudBerry 1.5 will support this new feature, and you can find out more about this new pricing structure. Note: keep in mind that after you first sign up for an Amazon S3 account, it can take up to 24 hours to be authorized; in fact, you may want to sign up for the S3 account before installing the add-in. After you sign up for your S3 account, you'll be given access credentials, which you can enter before creating a storage bucket name.

    Features & Use. CloudBerry is wizard-driven, straightforward, and easy to use. Here we take a look at creating a backup plan. To begin, click on the Setup Backup Plan button to kick off the wizard. Select your backup mode based on the number of features you want; in our example we select Advanced Mode, as it offers more features than Simple Mode. Select your backup storage account or create a new one; you can select a default account by checking "Use currently selected account as default". Now you can go through and select the files and folders you want to back up from your home server. Check the box "Show physical drives" to get a bigger selection of files and folders; this also allows you to back up files from your data drive as well. It has full support for Drive Extender, so you can back up your shares too. The cool thing about CloudBerry is that it lets you drill down to specific files and folders, unlike other WHS backup utilities. Next you can use advanced filters to specify files and/or folders to skip if you want. There are compression and encryption options as well; these will save storage space and bandwidth and keep your data secure. Purge Options allow you to customize options for getting rid of older files. You can also select the option to delete files from the S3 service that have been deleted locally.
    Be careful with this option, however, as you won't be able to restore files if you delete them locally. You have some nice scheduling options: running backups manually, at a specific date and time, or recurring daily, weekly, or monthly. You can receive email notifications in all cases or only when a backup fails; this is a good option so you know whether things were successful or something failed and you need to run a backup manually. Set up email notifications, give your plan a name, and then, if the summary page looks good, continue, or go back at this point if something doesn't look correct and needs adjusting. That's it! You're ready to go, and you have the option to start your first backup right away. After you've created a backup plan, you can go in and edit, delete, view history, or restore files.

    Restoring Files using CloudBerry. To restore data from your backups, kick off the Restore Wizard and select the backup to restore from. You can select the last backup, a specific point in time, or manually browse through the files. Browse through the directory and select the files you need to restore. Choose the destination to restore the files to: you can select the original location or a specific location, choose to overwrite existing files, or set the location as the default for future restores. If the files are encrypted, enter the correct passwords. If the summary looks good, click Next to start the restore process. You'll be shown a progress bar at the bottom of the screen while the files are restored; after the process has completed, close out of the Restore Wizard. In this example we restored a couple of music files to the desktop of Windows Home Server, but as shown above you can save them to the original location, other network locations, or WHS shared folders; this can make it a lot easier to keep track of files you've restored. You can also access different options for CloudBerry by clicking Settings in the WHS Console, then CloudBerry Backup. Here you can set up a new storage account, check for updates, change app options, run diagnostics, and send feedback. Under Options there are several settings you can tweak to get the best experience for your WHS backups.

    CloudBerry Web Interface. Another nice feature is the CloudBerry Web Interface, which lets you access your data from anywhere you have an Internet connection. To check it out in the WHS Console, click on the Backup Web Interface link; you'll probably want to bookmark the link in your favorite browser. Note: this feature is still in beta, and at the time of this review the Web Interface wasn't up and running, so we weren't able to test it.

    Performance. The CloudBerry app works very well through the Windows Home Server Console. The amount of time it takes to back up or restore your data will depend on the speed of your Internet connection and the size of the files. In our tests, backing up 1 GB of data to the Amazon S3 account took around an hour, but we were running it on a DSL line with limited upload speeds, so your mileage will vary.

    Product Support. In our experience, the team at CloudBerry offered great support in a timely manner when we contacted them. You can fill out a help request through a form on their website, and they also have a community forum.

    Conclusion. We were very pleased with CloudBerry Online Backup for WHS. Its wizard-driven interface makes it extremely easy to use, and it offers comprehensive backup choices for your Amazon S3 account.
    CloudBerry will only back up files that have been modified, so if files haven't been changed, they won't be backed up again. They offer a free 15-day trial, and a full license is $29.99 after that. Once you buy the app you own it, and charges to your S3 account will vary depending on the amount of data you upload. If you're looking for an effective and easy-to-use front-end application to back up your Windows Home Server data to your Amazon S3 account, CloudBerry is a recommended, affordable choice.

    Download CloudBerry for Windows Home Server
    Sign Up For Amazon S3 Account

    Rating
    Installation: 9
    Ease of Use: 8
    Features: 8
    Performance: 8
    Product Support: 8

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal written by Mark Teter outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings; flash is over 100x faster for reads than the fastest HDD, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in Input/Outputs per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database, replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know! Read more: "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010. Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper

    Read the article

  • New Hands-On Labs For Oracle VM

    - by rickramsey
    I just spent some time walking through the labs that Christophe Pauliat and Olivier Canonge prepared to help you become familiar with Oracle VM. They are terrific. We will offer them for the first time at Oracle OpenWorld. Because they require some pre-work and 16 GB of memory, we are supplying the laptops for the participants.

    Lab 1: Deploying Infrastructure as a Service with Oracle VM
    Session ID: HOL9558. Tuesday, October 2nd, 2012, 10:15am – 11:15am, Marriott Marquis - Salon 14/15
    Planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation: storage capacity planning, LUN creation, network bandwidth planning, and best practices for designing and streamlining the environment so that it's easy to manage.

    Lab 2: Virtualize and Deploy Oracle Applications Using Oracle VM Templates
    Session ID: HOL9559. Tuesday, October 2nd, 2012, 11:45am – 12:45pm, Marriott Marquis - Salon 14/15
    How to deploy Oracle applications in minutes with Oracle VM Templates, in a step-by-step lab proctored by field-experienced engineers and product experts. Covers:
    - Find out what Oracle VM Templates are and how they work
    - Deploy an actual Oracle VM Template for an Oracle application
    - Plan your deployment to streamline ongoing updates and upgrades

    Lab 3: x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance
    Session ID: HOL9870. Wednesday, October 3rd, 2012, 5:00 PM - 6:00 PM, Marriott Marquis - Salon 14/15
    This hands-on lab demonstrates what Oracle's enterprise cloud infrastructure for x86 can do, and how it works with Oracle VM 3.x. It covers:
    - How to create VMs
    - How to migrate VMs
    - How to deploy Oracle applications quickly and easily with Oracle VM Templates
    - How to use the Storage Connect plug-in for the Sun ZFS Storage Appliance

    Additional virtualization resources for sysadmins: technical articles about virtualization, other resources about Oracle virtualization technologies, and more information about Oracle OpenWorld.

    - Rick

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft’s pricing for SQL Server in the cloud, SQL Azure, has been announced:

    - $9.99 per month for 0 – 1 GB
    - $99.99 per month for up to 10 GB

    There’s currently a 10 GB maximum size cap for SQL Azure. For larger data storage needs, you’ll need to break the databases into smaller sizes.

    Scaling SQL Azure Applications. If you think you’re going to need 100 GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 = $99.90, roughly the same price anyway) and just make really sure none of the individual databases exceeds 10 GB.

    Beep Beep, Back That Database Up. The bandwidth costs for SQL Azure are $.15 per GB of outbound bandwidth. Assuming that you don’t compress the data before you pull it out of the cloud, that means daily backups of a 1 GB database will add another $4.50 per month, and a 10 GB database will add another $45/month. Daily backups will cost about half of what your monthly service charges cost. It’s not completely clear from the press release, but if Microsoft follows Amazon’s pricing model, bandwidth between the Microsoft cloud services will not incur a cost. That would mean it might make sense to spin up a Windows Azure computing application for $.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure Storage for backup. That would eliminate the data in/out costs and minimize the Azure Storage costs ($.15/GB). Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there’s no native backup support in SQL Azure, and it’s not clear whether Windows Azure will include tools like SQL Server Integration Services.

    More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/

    Anish, S

    Read the article

  • Key announcements from Oracle Openworld - Video series

    - by Javier Puerta
    If you missed Oracle OpenWorld, you now have the opportunity to watch a series of four 15-minute webcasts covering the key announcements, explained by key EMEA executives.

    Part 1 - Oracle's Cloud: interview with Alan Hartwell. Gaye Hudson and Steve Walker, EMEA Corporate Communications, take a look at Oracle's announcements leading up to Oracle OpenWorld and talk to Alan Hartwell, VP Sales, Engineered Solutions, Exadata, Exalogic, about Oracle's cloud offering.

    Part 2 - Engineered Systems with Alan Hartwell. Gaye Hudson, VP Corporate Communications, EMEA, talks to Alan Hartwell, VP Sales, Engineered Solutions, Exadata, Exalogic, about Oracle's Engineered Systems, parallel hardware and software: Exalytics, Big Data Appliance, and Enterprise Manager.

    Part 3 - Hardware with John Abel, Storage with Luc Gheysens. Gaye Hudson and Steve Walker talk to John Abel, Chief Technology Architect, Oracle Server and Storage, EMEA, about SPARC SuperCluster and T4, and to Luc Gheysens, Senior Director, Storage Sales Specialist, EMEA, about ZFS Storage and Pillar Axiom 600.

    Part 4 - Oracle Fusion Applications with Noel Coloe. Gaye Hudson, VP Corporate Communications, EMEA, talks to Noel Coloe, Head of Western Europe Applications Sales Development, about Oracle Fusion Applications, a new paradigm in enterprise applications.

    Read the article

  • What are Information Centers?

    - by user12244613
    Information Centers are similar to product pages in the Oracle Sun System Handbook. Many customers like the Oracle Sun System Handbook concept of a home page with all the product attributes, troubleshooting, etc., accessible from a single place. This concept is now available for a range of Oracle Solaris, Systems, and Storage products. The Information Center for each product covers areas such as Overview, Hot Topics, and Patching and Maintenance. The Information Center pages are dynamically generated each night to ensure the latest content is available to you. Here are the top Solaris, Systems, and Storage Information Centers:

    - Oracle Explorer Data Collector
    - Oracle Solaris 10 Live Upgrade
    - Oracle Solaris 11 Booting Information Center
    - Oracle Solaris 11 Desktop and Graphics Information Center
    - Oracle Solaris 11 Image Packaging System (IPS) Information Center
    - Oracle Solaris 11 Installation Information Center
    - Oracle Solaris 11 Product Information Center
    - Oracle Solaris 11 Security Information Center
    - Oracle Solaris 11 System Administration Information Center
    - Oracle Solaris 11 Zones Information Center
    - Oracle Solaris Crash Analysis Tool (SCAT) Information Center
    - Oracle Solaris Cluster Information Center
    - Oracle Solaris Internet Protocol Multipathing (IPMP) Information Center
    - Oracle Solaris Live Upgrade Information Center
    - Oracle Solaris ZFS Information Center
    - Oracle Solaris Zones Information Center
    - CMT T1000/T2000 and Netra T2000
    - CMT T5120/T5120/T5140/T5220/T5240/T5440 Systems
    - M3000/M4000/M5000/M8000/M9000-32/M9000-64
    - Management and Diagnostic Tools for Oracle Sun Systems
    - Netra CT410/810 and Netra CT900
    - Network-Attached Storage (NAS)
    - Oracle VM Server for SPARC (LDoms)
    - Pillar Axiom 600
    - SL3000 Tape Library
    - Sun Disk Storage Patching and Updates
    - Sun Fire 3800/4800/4810/6800/E2900/E4900/E6900/V1280 - Netra 1280/1290
    - Sun Fire 12K/15K/E20K/E25K
    - Sun Fire X4270 M2 Server
    - Sun x86 Servers
    - T3 and T4 Systems
    - Tape Domain Firmware
    - V210/V240/V440/V215/V245/V445 Servers
    - VSM (VTSS/VLE/VTCS)

    Read the article

  • Product Update Bulletin: Oracle Solaris Cluster October 2013

    - by uwes
    Announcing new qualifications and general news for the Oracle Solaris Cluster product.

    Hardware Qualifications:
    - Sun Server X4-2 and X4-2L servers, Sun Blade X4-2B server module with Oracle Solaris Cluster 3.3
    - Sun Storage 16 Gb Fibre Channel ExpressModule Universal HBA, Emulex
    - Oracle Dual Port QDR InfiniBand Adapter M3

    Software Qualifications:
    - Oracle Database 12c Real Application Clusters with Oracle Solaris Cluster 4.1
    - Oracle Database 11.2.0.4 single instance and RAC with Oracle Solaris Cluster 4.1
    - Oracle VM Server for SPARC 3.1
    - SAP NetWeaver with new kernel versions
    - ZFS Storage Appliance Kit versions 2011.1.7.0 and 2013.1.0.0
    - Application monitoring in Oracle VM for SPARC failover guest domains

    Storage Partner Update:
    - Oracle Solaris Cluster 3.3 3/13 with the HDS enterprise storage arrays
    - EMC SRDF for Oracle Database 12c RAC in an Oracle Solaris Cluster 4.1 geo cluster configuration

    Oracle Solaris Cluster References: Korea Enterprise Data, HDFC Securities, Dealis Fund Operations

    Web Updates:
    - New blog entry: Oracle Solaris 10 Brand Zone cluster
    - The Solaris Application Engineering website now includes Oracle Solaris Cluster application support information

    Please read the Oracle Solaris Cluster Product Update Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.)

    For More Information Go To: the Oracle.com Oracle Solaris Cluster page, the Oracle Technology Network Oracle Solaris Cluster page, the Oracle Solaris Cluster MOS community, the partner web Oracle Solaris Cluster page, the Oracle Solaris Cluster Blog, and the Solaris.us.oracle.com page.

    Read the article

  • SQL SERVER – What is the Maximum Relational Database Size Supported by Single Instance?

    - by Pinal Dave
    I often get asked the following question: “How much data can SQL Server handle?” Every single time I get this question, I ask one back: “How much data can your storage system handle?” The reason I ask is that, in reality, for enterprise systems storage is no longer the limitation imposed by the database software; as a matter of fact, most databases nowadays are limited by the size of the storage system. SQL Server is an enterprise system and a very mature product. Even so, if you still want to know the actual limit, here is the answer: SQL Server 2008 R2, 2012, and 2014 have a maximum database capacity of 524 PB (petabytes) in the Enterprise, BI, and Standard editions. SQL Server Express has a limitation of 10 GB due to its nature. I guess, now when you look back at my question, it will make sense that it all depends on the size of your storage system. I personally believe that at this point in time 524 PB is quite a lot of data, but we never know: when we read this blog post 10 years from now, we may all wonder what I was thinking. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How can I make an unmounted / unmountable NTFS disk not show up in the nautilus devices area?

    - by Dennis
    I have an idea that my /etc/fstab is a real mish-mash and I don't remember how it got that way. First of all, it looks like this:

      UUID=9EB80807B807DD21 /media/Storage ntfs-3g users 0 0
      UUID=a60397fd-964a-45b1-ad35-53c8a4bee010 / ext4 defaults 0 1
      UUID=1764825d-b8ba-4620-b3b0-e979b6f4f5c4 swap swap sw 0 0
      UUID=255DA1E406E29DBC /media/sda2 ntfs-3g defaults 0 0
      UUID=2CCCF161CCF1262C /mnt/sda1 ntfs-3g umask=000 0 0
      /dev/fd0 /media/floppy0 vfat noauto 0 0

    I started with an old XP install on disk /dev/sda that I don't use anymore but didn't want to delete, so I shrunk the XP partition, added an NTFS partition that would be common to both systems (labeled "Common" in XP), then installed Lucid on an extended ext4 partition. On this disk, of course, the ext4 system partition comes up as /; the go-between partition auto-mounts on /mnt/sda1 but shows up in Nautilus as COMMON; and the XP system partition does not show up in Nautilus at all, though I can get to it by navigating to /mnt/sda1. A second hard drive (/dev/sdb) that I stuck in was already formatted NTFS with a bunch of stuff on it and labeled "Storage". It auto-mounts to /media/Storage, but another, unmounted disk also shows up in the Nautilus devices area called Storage, and it can't be mounted (there and in Places are the only times it appears). I would primarily like this non-existent (or already mounted, depending on how you look at it) disk to not show up, but I wouldn't mind an explanation of why one labeled partition auto-mounts to a /media mount point but shows up by label, one does not show up as mounted at all but mounts to a /mnt mount point and is there for navigation, and one is mounted to a directory of the same name as its label. I would love to have some consistency/direction on what is proper in this circumstance. No doubt I caused this with the fstab, but I really don't remember what my rationale was, if I edited it manually.
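    A hedged first step toward untangling this, using only the names already in the question: map every UUID in /etc/fstab back to a device and label. On this Ubuntu generation, Nautilus shows volumes mounted under /media by their volume label and hides mounts made elsewhere (such as /mnt/sda1), which matches the behaviour described.

      sudo blkid                  # list every partition with its device, UUID and LABEL
      grep -v '^#' /etc/fstab     # compare against the UUIDs actually used in fstab
      mount | grep -E '/media|/mnt'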

    Read the article

  • Oracle DB 11gR2: cut database operating costs through consolidation and standardization!

    - by Yuichi.Hayashi
    It is often said that 40-60% of the IT budget goes to operating and maintaining existing systems, which drives up TCO (Total Cost of Ownership). Consolidating and standardizing individually managed databases onto a shared grid infrastructure is one way to bring those costs down, and Oracle's answer is grid computing with Oracle Real Application Clusters (RAC) as the database layer and Oracle Automatic Storage Management (ASM) as the storage layer. Oracle Database 11g R2 adds several features that make this kind of consolidation easier to deploy and operate.

    SCAN. Single Client Access Name (SCAN) is a new feature of Oracle Real Application Clusters (RAC) 11g R2. SCAN gives clients a single cluster-wide connection name, so connection settings no longer need to list each node's VIP, and nodes can be added to or removed from the cluster without touching client configuration, which lowers administration costs and TCO.
    Reference: Oracle Database 11gR2 Real Application Clusters
    Reference: Oracle Real Application Clusters 11g Release 2 SCAN

    ACFS. ASM Cluster File System (ACFS) is a new feature of Automatic Storage Management (ASM) 11g R2. ASM applies the S.A.M.E. (Stripe And Mirror Everything) principle to database files; ACFS extends this with a general-purpose cluster file system built on ASM that can also hold files outside the database, with features such as snapshots and dynamic resizing.
    Reference: Oracle Database 11gR2 Automatic Storage Management
    Reference: Oracle Database 11g Release 2 Automatic Storage Management overview

    Resource Manager. Oracle Database Resource Manager 11g R2 helps when multiple databases are consolidated onto a single server: with instance caging, you set the CPU_COUNT initialization parameter per instance to cap the CPU each database can use, so one database cannot starve the others.
    Reference: Oracle Database 11gR2 Resource Manager

    Read the article

  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview version of Windows 8 and Visual Studio, many people in the community asked whether the Windows Azure tools were available for it. The answer was "NO". Microsoft's position was that the Windows Azure tools would support only Visual Studio 2010 until 2012 was finally released, at which point they should work. But now, along with the newly published Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with the Visual Studio 2012 RC.

    You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue "install" button. Then you need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file, which leads you through installing the Web Platform Installer 4.0 (if you haven't installed it already) and the latest Windows Azure SDK. You can see the version name is June 2012, 1.7. Finally the WebPI detects the dependent components you need, downloads them and begins to install. If you want to challenge yourself, you can download the components and install them manually instead; the standalone installers are listed on this page with instructions on how to install them and the necessary prerequisites.

    Once you have finished the installation you can open Visual Studio 2012 RC which, as usual, needs to be run as administrator. If you click the New Project link on the start page and navigate to the Cloud category, you will find that there is no project template available. Is something wrong? No: if you change the target framework from the default .NET 4.5 to .NET 4, the azure project template appears. This is because Windows Azure instances do not currently support .NET 4.5.

    After clicking OK you will see the role creation window, which is similar to what you have seen before, but there are some new role templates in this SDK. First, an ASP.NET MVC 4 web role is available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and WebAPI on the cloud. Then there are two new worker role templates, "Cache Worker Role" and "Worker Role with Service Bus Queue". "Worker Role with Service Bus Queue" is a worker role with the references needed to access the Windows Azure Service Bus Queue already added, plus some basic sample code in the worker role class that reads messages from the queue once started; a sketch of that pattern follows below. The "Cache Worker Role" is a worker role with the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it lets role instances use their own memory as an in-memory distributed cache cluster. Using it you can dedicate one or more worker roles as cache clusters, or devote part of your web roles' and worker roles' memory to the cache cluster as well.
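    A minimal sketch of that queue-reading pattern, assuming the Service Bus assemblies the template references (the queue name and configuration setting name here are illustrative, not necessarily what the template generates):

        using System.Diagnostics;
        using Microsoft.ServiceBus.Messaging;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // The connection string normally lives in the role's service
                // configuration; this setting name is an assumption.
                string connectionString =
                    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

                // "demoqueue" is a hypothetical queue name.
                QueueClient client =
                    QueueClient.CreateFromConnectionString(connectionString, "demoqueue");

                while (true)
                {
                    // Receive blocks until a message arrives or the call times out.
                    BrokeredMessage message = client.Receive();
                    if (message != null)
                    {
                        Trace.WriteLine("Received message " + message.SequenceNumber);
                        message.Complete(); // remove the message from the queue
                    }
                }
            }
        }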
    Let's just create an ASP.NET MVC 4 Web Role and press F5 to run it under the local emulator. If you have been working with azure for a while, you will know that on a fresh azure SDK installation you normally need to set up the local storage emulator before running anything locally. But in this version, when the azure project starts, Visual Studio checks whether the storage emulator has been initialized and, if not, runs the initializer automatically. And as you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature: it creates the emulator database and tables in the default local database. You can point the storage emulator at a standard SQL Server default instance instead with the "dsinit" tool, which is now located under %ProgramFiles%\Microsoft SDKs\Windows Azure\Emulator\devstore:

        dsinit /instance:.

    After Visual Studio compiled and deployed the package, our website is shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator hosts the web roles in IIS Express instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service; all of this works the same as on SDK 1.6. You can switch back to using IIS to run your web role in the local emulator: just open the Windows Azure project property window and, on the Web page, select "Use IIS Web Server". For more information about this, please have a look at Nuno's blog post.

    There are no massive changes in the role property pages in Visual Studio. You can configure role settings such as the endpoints, certificates, local storage, etc. One thing that was added is the Caching tab, where you can enable the caching feature and specify how much memory you want to use as the cache cluster. I will introduce more details about it in future posts.

    The publish and package features are also unchanged. You can publish your project to azure directly through Visual Studio 2012, or create the package and upload it manually. Below is the SDK version of my deployment, which is 1.7.30602.1703 in the developer portal.

    Summary

    In this post I introduced the new Windows Azure SDK 1.7, especially how it works on the latest Visual Studio 2012 RC. There are no significant changes to the Visual Studio tooling in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article
