Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 79/361 | < Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >

  • Has anybody tried to create a really big storage with ZFS and plain SAS controllers? [closed]

    - by Eccehomo
    I'm considering building one with something like this: http://www.supermicro.com/products/chassis/4U/847/SC847E26-R1400U.cfm (a chassis with two dual-port multipath expanders), http://www.supermicro.com/products/accessories/addon/AOC-SAS2LP-MV8.cfm (four 8-port plain SAS controllers, two for each backplane), and 36 Seagate 3 TB SAS drives (ST33000650SS). OS: FreeBSD. And it's very interesting: How well do SAS expander backplanes and multipath configurations work with FreeBSD? How do you locate a specific drive in the bay (literally, how do you blink the indicator on a drive in FreeBSD)? How do you detect the failure of a controller? Will it all work together? Please share any experience.

    Read the article

  • USB 3.0 port with USB 3.0 device in Ubuntu 12.10

    - by fernando garcía
    When I try to connect a USB 3.0 device in Ubuntu 12.10 (ASUS K55VD, kernel 3.5.0-19-generic #30-Ubuntu SMP), the system says:

    ```
    [   74.747832] hub 3-0:1.0: unable to enumerate USB device on port 1
    [   74.931957] usb 4-1: new SuperSpeed USB device number 2 using xhci_hcd
    [   74.949390] usb 4-1: New USB device found, idVendor=05e3, idProduct=0731
    [   74.949396] usb 4-1: New USB device strings: Mfr=0, Product=1, SerialNumber=2
    [   74.949400] usb 4-1: Product: USB Storage
    [   74.949403] usb 4-1: SerialNumber: 0000000000000033
    [   75.033327] usbcore: registered new interface driver uas
    [   75.038548] Initializing USB Mass Storage driver...
    [   75.038651] scsi7 : usb-storage 4-1:1.0
    [   75.038700] usbcore: registered new interface driver usb-storage
    [   75.038701] USB Mass Storage support registered.
    ```

    but it does not recognize the device, and the disk applications (GParted, Nautilus) act as if nothing had been connected. I have checked other questions, but either they have no answers or they are about previous Ubuntu versions with 3.0.x kernels. A USB 2.0 device will work in the USB 3.0 ports, and a USB 3.0 device will work (at USB 2.0 speeds) in the USB 2.0 ports. The problem, as I wrote, is between USB 3.0 devices and USB 3.0 ports. I have my USB 3.0 ports configured without legacy support via the BIOS (the way they should be, I suppose), but I have also tried configuring them with XHCI Preboot mode disabled. Has anyone solved a similar problem? Thanks in advance.

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized, robot-filled warehouse of tomorrow, you’ll be quite surprised to find that the storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than finding robots, they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.
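    The data structure behind chaotic storage is simple enough to sketch: any arriving unit goes into any free slot, and an index keyed by product records where everything went, so the "picking list" is just index lookups. A minimal illustrative sketch in C# (hypothetical names, not Amazon's actual system):

    ```csharp
    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of chaotic-storage bookkeeping (not Amazon's code).
    class ChaoticWarehouse
    {
        private readonly List<int> _freeSlots = new List<int>();   // empty shelf slots
        private readonly Dictionary<string, Queue<int>> _bySku =
            new Dictionary<string, Queue<int>>();                  // sku -> occupied slots
        private readonly Random _rng = new Random();

        public ChaoticWarehouse(int slotCount)
        {
            for (int s = 0; s < slotCount; s++) _freeSlots.Add(s);
        }

        // Stow: any free slot will do; the barcode scan records where it went.
        public int Stow(string sku)
        {
            int i = _rng.Next(_freeSlots.Count);
            int slot = _freeSlots[i];
            _freeSlots[i] = _freeSlots[_freeSlots.Count - 1];      // swap-remove, O(1)
            _freeSlots.RemoveAt(_freeSlots.Count - 1);

            Queue<int> slots;
            if (!_bySku.TryGetValue(sku, out slots))
            {
                slots = new Queue<int>();
                _bySku[sku] = slots;
            }
            slots.Enqueue(slot);
            return slot;
        }

        // Pick: the index, not the shelf layout, tells the picker where to go.
        public int Pick(string sku)
        {
            int slot = _bySku[sku].Dequeue();
            _freeSlots.Add(slot);                                  // slot is free again
            return slot;
        }
    }
    ```

    Because placement ignores categories, utilization stays high and new product lines need no re-planning; the only hard requirement is that the index never loses track of a slot.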

    Read the article

  • Backing up Information Store - Recovering to Different Information Store / RSG

    - by Kip
    Hi All, I have a question about a situation that hasn't yet arisen, but I wondered about the possibilities and how we'd go about it. Currently we back up our Exchange 2003 cluster with Backup Exec; it is set to back up the Microsoft Information Store on that server and all of the mailbox stores beneath it. We have previously used this in conjunction with a recovery storage group on the same server to recover lost mailboxes. However, due to space constraints on that server (a separate issue that is being addressed in the very near future, but outside the scope of this question), we no longer have enough space on that server to do a recovery-storage-group restore. Is it possible to restore an information store to a different server in the same administrative group (i.e. First)? By that I mean we have the following: Server1 | First Storage Group | Mailbox Store1/2/3. Could Mailbox Store 1 be restored to: Server2 | First Storage Group | Recovery Storage Group? Both servers are under the same administrative group. Currently, for whatever reason (mainly time), the mailboxes are not being backed up individually. Regards, Kip

    Read the article

  • Running Jetty under Windows Azure Using RoleEntryPoint in a Worker Role

    - by Shawn Cicoria
    This post is built upon the work of Mario Kosmiskas and David C. Chou’s prior postings, from here: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx and http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx

    As Mario points out in his post, when you need more control over the process that starts, it is generally better left to a RoleEntryPoint, which as of now requires a CLR-based assembly deployed as part of the package to Azure. There were things I especially liked about Mario’s post: specifically, the ability to pull down the JRE and Jetty runtimes at role startup and instantiate the process using the extracted bits. The way Mario initialized the Java process (and Jetty) was to take advantage of a role startup task configured as part of the service definition. That is a great, quick way to kick off processes or tasks prior to your role entry point. However, if you need access to service configuration values or role events, that’s where RoleEntryPoint comes in. For this PoC sample I moved the logic for retrieving the bits for the JRE and Jetty into the worker role’s OnStart method, in addition to moving the process kickoff there. The Run method at this point is only there to loop and report the status of the Java process. Beyond making things more parameterized, both Mario’s and David’s articles still form the essence of the approach.

    The solution that accompanies this post provides all the necessary .NET-based Visual Studio projects. In addition, you’ll need:

    1. Jetty 7 runtime http://www.eclipse.org/jetty/downloads.php
    2. JRE http://www.oracle.com/technetwork/java/javase/downloads/index.html

    Once you have these, the first step is to create archives (zips) of the distributions. For this PoC, the root of each archive must look as follows: JRE6.zip, jetty---.zip. Upload the contents to a storage container (block blob); for this example I used /archives as the location. The service configuration has several settings that provide these values (this is the advantage of using RoleEntryPoint) via Azure’s native configuration support in a worker role. You can use development storage for testing this out; the zipped version of the solution is configured for development storage. When you’re ready to deploy, you update two settings: one for diagnostics and the other for the storage account where the /archives container lives.

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="HostedJetty" osFamily="2" osVersion="*">
      <Role name="JettyWorker">
        <Instances count="1" />
        <ConfigurationSettings>
          <!--<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
                   value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />-->
          <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
                   value="UseDevelopmentStorage=true" />
          <Setting name="JettyArchive" value="jetty-distribution-7.3.0.v20110203b.zip" />
          <Setting name="StartRole" value="true" />
          <Setting name="BlobContainer" value="archives" />
          <Setting name="JreArchive" value="jre6.zip" />
          <!--<Setting name="StorageCredentials"
                   value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"/>-->
          <Setting name="StorageCredentials" value="UseDevelopmentStorage=true" />
    ```

    For interacting with storage you can use several tools; one that I like is from the Windows Azure CAT team, located here: http://appfabriccat.com/2011/02/exploring-windows-azure-storage-apis-by-building-a-storage-explorer-application/

    At runtime, during role initialization and startup, Azure will call into your RoleEntryPoint. At that time the code does a dynamic pull of the two archives and extracts them using SharpZipLib, as Mario demonstrated in his sample. The only difference here is the use of CLR code vs. PowerShell (which is really CLR, but that’s another discussion). Once the two zips are extracted, the role’s file system contains the runtimes under the approot folder. From there, the OnStart method (which also does the download and unzip using a simple StorageHelper class) kicks off the Java path, and now you have Java running, with Jetty serving its sample page.

    A couple of things I’m working on to enhance this: extracting the JRE and Jetty bits not to the approot but to a local resource location defined as part of the service definition (ServiceDefinition.csdef):

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="HostedJetty" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="JettyWorker">
        <Imports>
          <Import moduleName="Diagnostics" />
          <Import moduleName="RemoteAccess" />
          <Import moduleName="RemoteForwarder" />
        </Imports>
        <Endpoints>
          <InputEndpoint name="JettyPort" protocol="tcp" port="80" localPort="8080" />
        </Endpoints>
        <LocalResources>
          <LocalStorage name="Archives" cleanOnRoleRecycle="false" sizeInMB="100" />
        </LocalResources>
    ```

    As the concept matures, being able to dynamically update the content or jar files as part of a running Java solution is something that is possible through continued enhancement of this simple model. The Visual Studio 2010 solution is located here: HostingJavaSln_NDA.zip

    Read the article

  • C# or C++ game: many 16-color images loaded into RAM. Efficient solution?

    - by user560639
    I am in the planning stages of creating a fighting game and am unsure how to handle one issue relating to memory. Background info: Still debating whether to use C# (XNA) or C++; we do not want to commit to either until we have explored how to solve this problem in both languages. Using a max of 256 MB RAM would be great if possible. Two characters will be present at a time, and these characters can only change between battles. There is time to load/free memory between battles, but the game needs to run at a constant 60 drawn frames per second during combat; each frame is 16.67 ms. The total number of images per character is in the low hundreds, each roughly 200x400 pixels. Only one image from each character will be displayed at any given moment. Uncompressed, each image takes roughly 300 KB by my calculations, upwards of 100 MB for a whole character. This is pushing too close to the 256 MB limit, given that memory will be needed for some other resources as well. Since each image uses a total of 16 colors, theoretically I should be able to use 1/8th the space if I can take advantage of this. I've looked around but haven't found any word of native support for paletted images (storing each pixel using fewer bits, where each value maps to a 32-bit RGBA color). I was considering making my own file format with 4 bits per pixel (plus some palette info), loading all the images of this new format into RAM before battle, and then, when drawing any specific image, decompressing only that image into a raw image so it can be rendered properly. I don't know if it's realistic to perform so many assignment operations (approx. 200x400 per character, so 160k for both) each frame. It sounds very hacky to me. Does anyone have advice on whether my solution sounds reasonable, and whether there is perhaps a better one available? Thanks so much! (I also attempted to use an image with only one channel, then use a shader to perform a series of if statements to translate various values into other colors. Unfortunately, there were too many lines of code for the shader. It is also rather hacky and does not scale well.)
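    For scale, the decompress-on-draw step proposed above is a tight loop over 40,000 packed bytes per 200x400 image (80,000 pixel writes). A minimal C# sketch with illustrative names, assuming two pixels packed per byte:

    ```csharp
    // Expand packed 4bpp pixel data into 32-bit RGBA via a 16-color palette.
    // 'packed' holds two pixels per byte: high nibble first, then low nibble.
    static void Decode4bpp(byte[] packed, uint[] palette, uint[] rgbaOut)
    {
        for (int i = 0; i < packed.Length; i++)
        {
            rgbaOut[2 * i]     = palette[packed[i] >> 4];     // first pixel
            rgbaOut[2 * i + 1] = palette[packed[i] & 0x0F];   // second pixel
        }
    }

    // A 200x400 image: 40,000 packed bytes in, 80,000 uints out. Reuse one
    // preallocated rgbaOut buffer to avoid per-frame allocations.
    ```

    In XNA the expanded buffer could then be copied into a single reused Texture2D via SetData rather than creating textures per frame; 80,000 writes is modest work against a 16.67 ms frame budget.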

    Read the article

  • Efficient Map Overlays and drawing Lines on Android Google Map...

    - by Ahsan
    Hi Friends, I want to do the following and have been kind of stuck on these for a few days: 1) I have used HelloItemizedOverlay to add about 150 markers and it gets very, very slow. Any idea what to do? I was thinking about threads (Handler). 2) I was trying to draw polylines (I have encoded polylines, but have managed to decode those) that move when I move the map. 3) I am looking for some sort of a timer class that executes, say, every 1 minute or so. 4) I am also looking for ways to clear the Google map of all the markers/lines etc. Thanks a lot... :) - ahsan

    Read the article

  • Get XML from Server for Use on Windows Phone

    - by psheriff
    When working with mobile devices you always need to take into account bandwidth usage and power consumption. If you are constantly connecting to a server to retrieve data for an input screen, then you might think about moving some of that data down to the phone and caching it there. An example would be a static list of US state codes that you are asking the user to select from. Since this is data that does not change very often, it is one set of data that would be great to cache on the phone. Since the Windows Phone does not have an embedded database, you can just use an XML string stored in Isolated Storage. Of course, then you need to figure out how to get the data down to the phone. You can either ship it with the application, or connect and retrieve the data from your server one time, cache it, and thereafter retrieve it from the cache. In this blog post you will see how to create a WCF service that retrieves data from a Product table in a database and sends that data as XML to the phone, where it is stored in Isolated Storage. You will then read that data from Isolated Storage using LINQ to XML and display it in a ListBox.

    Step 1: Create a Windows Phone Application

    The first step is to create a Windows Phone application called WP_GetXmlFromDataSet (or whatever you want to call it). On the MainPage.xaml add the following XAML within the “ContentPanel” grid:

    ```xml
    <StackPanel>
      <Button Name="btnGetXml"
              Content="Get XML"
              Click="btnGetXml_Click" />
      <Button Name="btnRead"
              Content="Read XML"
              IsEnabled="False"
              Click="btnRead_Click" />
      <ListBox Name="lstData"
               Height="430"
               ItemsSource="{Binding}"
               DisplayMemberPath="ProductName" />
    </StackPanel>
    ```

    Now it is time to create the WCF service application that you will call to get the XML from a table in a SQL Server database.

    Step 2: Create a WCF Service Application

    Add a new project to your solution called WP_GetXmlFromDataSet.Services. Delete the IService1.* and Service1.* files and the App_Data folder, as you don’t generally need these items. Add a new WCF service class called ProductService. In the IProductService interface, replace the DoWork() method with the following:

    ```csharp
    [OperationContract]
    string GetProductXml();
    ```

    Open the code-behind in ProductService.svc and create the GetProductXml() method. This method (shown below) connects to a database and retrieves data from a Product table.

    ```csharp
    public string GetProductXml()
    {
        string ret = string.Empty;
        string sql = string.Empty;
        SqlDataAdapter da;
        DataSet ds = new DataSet();

        sql = "SELECT ProductId, ProductName,";
        sql += " IntroductionDate, Price";
        sql += " FROM Product";

        da = new SqlDataAdapter(sql,
            ConfigurationManager.ConnectionStrings["Sandbox"].ConnectionString);
        da.Fill(ds);

        // Create attribute-based XML
        foreach (DataColumn col in ds.Tables[0].Columns)
        {
            col.ColumnMapping = MappingType.Attribute;
        }

        ds.DataSetName = "Products";
        ds.Tables[0].TableName = "Product";
        ret = ds.GetXml();

        return ret;
    }
    ```

    After retrieving the data from the Product table using a DataSet, you set each column’s ColumnMapping property to Attribute. Using attribute-based XML makes the data transferred across the wire a little smaller. You then set the DataSetName property to the top-level element name you want in the XML, and the TableName property on the DataTable to the name you want each element to have. The last thing you need to do is call the GetXml() method on the DataSet object, which returns an XML string of the data in your DataSet object. This is the value that you return from the service call. The XML that is returned looks like the following:

    ```xml
    <Products>
      <Product ProductId="1"
               ProductName="PDSA .NET Productivity Framework"
               IntroductionDate="9/3/2010"
               Price="5000" />
      <Product ProductId="3"
               ProductName="Haystack Code Generator for .NET"
               IntroductionDate="7/1/2010"
               Price="599.00" />
      ...
    </Products>
    ```

    The GetProductXml() method uses a connection string from the Web.Config file, so add a <connectionStrings> element to the Web.Config file in your WCF service application. Modify the settings shown below as needed for your server and database name.

    ```xml
    <connectionStrings>
      <add name="Sandbox"
           connectionString="Server=Localhost;Database=Sandbox;
                             Integrated Security=Yes"/>
    </connectionStrings>
    ```

    The Product Table

    You will need a Product table that you can read data from. I used the following structure; add any data you want to this table after you create it in your database.

    ```sql
    CREATE TABLE Product
    (
      ProductId int PRIMARY KEY IDENTITY(1,1) NOT NULL,
      ProductName varchar(50) NOT NULL,
      IntroductionDate datetime NULL,
      Price money NULL
    )
    ```

    Step 3: Connect to the WCF Service from the Windows Phone Application

    Back in your Windows Phone application you now need to add a service reference to the WCF service application you just created. Right-mouse click on the Windows Phone project and choose Add Service Reference… from the context menu. Click on the Discover button. In the Namespace text box enter “ProductServiceReference”, then click the OK button. If you entered everything correctly, Visual Studio will generate code that allows you to connect to your product service. On the MainPage.xaml designer window, double-click the Get XML button to generate the Click event procedure. In the Click event procedure make a call to a GetXmlFromServer() method. This method also needs a “Completed” event procedure, since all communication with a WCF service from Windows Phone must be asynchronous. Write these two methods as follows:

    ```csharp
    private const string KEY_NAME = "ProductData";

    private void GetXmlFromServer()
    {
        ProductServiceClient client = new ProductServiceClient();

        client.GetProductXmlCompleted += new
            EventHandler<GetProductXmlCompletedEventArgs>
                (client_GetProductXmlCompleted);

        client.GetProductXmlAsync();
        client.CloseAsync();
    }

    void client_GetProductXmlCompleted(object sender,
                                       GetProductXmlCompletedEventArgs e)
    {
        // Store XML data in Isolated Storage
        IsolatedStorageSettings.ApplicationSettings[KEY_NAME] = e.Result;

        btnRead.IsEnabled = true;
    }
    ```

    As you can see, this is a fairly standard call to a WCF service. In the Completed event you get the Result from the event argument, which is the XML, and store it into Isolated Storage using the IsolatedStorageSettings.ApplicationSettings class. Notice the constant that specifies the name of the key; you will use this constant later to read the data from Isolated Storage.

    Step 4: Create a Product Class

    Even though you stored XML data in Isolated Storage, when you read that data out you will want to convert each element in the XML into an actual Product object. This means that you need to create a Product class in your Windows Phone application. Add a Product class to your project that looks like the code below:

    ```csharp
    public class Product
    {
        public string ProductName { get; set; }
        public int ProductId { get; set; }
        public DateTime IntroductionDate { get; set; }
        public decimal Price { get; set; }
    }
    ```

    Step 5: Read Settings from Isolated Storage

    Now that you have the XML data stored in Isolated Storage, it is time to use it. Go back to the MainPage.xaml design view and double-click the Read XML button to generate its Click event procedure. From the Click event procedure call a method named ReadProductXml(). Create this method as shown below:

    ```csharp
    private void ReadProductXml()
    {
        XElement xElem = null;

        if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))
        {
            xElem = XElement.Parse(
                IsolatedStorageSettings.ApplicationSettings[KEY_NAME].ToString());

            // Create a list of Product objects
            var products =
                from prod in xElem.Descendants("Product")
                orderby prod.Attribute("ProductName").Value
                select new Product
                {
                    ProductId = Convert.ToInt32(prod.Attribute("ProductId").Value),
                    ProductName = prod.Attribute("ProductName").Value,
                    IntroductionDate =
                        Convert.ToDateTime(prod.Attribute("IntroductionDate").Value),
                    Price = Convert.ToDecimal(prod.Attribute("Price").Value)
                };

            lstData.DataContext = products;
        }
    }
    ```

    The ReadProductXml() method checks that the key name under which you saved your XML exists in Isolated Storage before trying to open it. If the key exists, you retrieve the value as a string and use the XElement’s Parse method to convert the XML string into an XElement object. LINQ to XML iterates over each element in the XElement object, creates a new Product object from each attribute, and orders the data by ProductName. After the LINQ to XML code runs you end up with an IEnumerable collection of Product objects in the variable named “products”. You assign this collection to the DataContext of the ListBox you created in XAML. The DisplayMemberPath property of the ListBox is set to “ProductName”, so it displays the product name for each row in your products collection.

    Summary

    In this article you learned how to retrieve an XML string from a table in a database, return that string across a WCF service, and store it in Isolated Storage on your Windows Phone. You then used LINQ to XML to create a collection of Product objects from the stored data and displayed that data in a Windows Phone ListBox. This same technique can be used in Silverlight or WPF applications too.

    NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get XML From Server for Use on Windows Phone" from the drop-down.

    Good luck with your coding,
    Paul Sheriff

    Read the article

  • Stack- vs. queue-based programming language efficiency [closed]

    - by Core Xii
    Suppose there are two programming languages: one where the only form of storage is one (preferred) or two (may be required for Turing completeness) stacks, and another where the only form of storage is a single queue, each with appropriate instructions to manipulate its storage to achieve Turing completeness. Which one can more efficiently encode complex algorithms, such that most given algorithms take less code to implement, less time to compute, and less memory to do so? Also, how do they compare to a language with a traditional array (or unbounded tape, if you will) as storage?
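    One detail behind the aside about needing two stacks: a single-stack machine is only a pushdown automaton, while two stacks can simulate an unbounded tape (or a queue) and so reach Turing completeness. A small illustrative C# sketch of the classic two-stacks-as-queue construction:

    ```csharp
    using System.Collections.Generic;

    // A FIFO queue built from two LIFO stacks: elements are pushed on '_in'
    // and reversed into '_out' only on demand, so each element is moved at
    // most once, giving amortized O(1) per operation.
    class TwoStackQueue<T>
    {
        private readonly Stack<T> _in = new Stack<T>();
        private readonly Stack<T> _out = new Stack<T>();

        public void Enqueue(T item)
        {
            _in.Push(item);
        }

        public T Dequeue()
        {
            if (_out.Count == 0)
            {
                // Reverse the in-stack so the oldest element ends up on top.
                while (_in.Count > 0) _out.Push(_in.Pop());
            }
            return _out.Pop();
        }
    }
    ```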

    Read the article

  • iOS 5: View Details Of Used and Unused Space Of Your iCloud Account

    - by Gopinath
    Apple’s iCloud is an awesome free storage service that lets you store music, photos, apps, calendars, documents, and more in the cloud, and it wirelessly pushes them to all your iOS devices automatically. The free iCloud service offers everyone 5 GB of space to start with, and once you reach the cap you can subscribe to a premium account for a few dollars. If you would like to monitor iCloud usage, here are the steps to follow on an iOS device: 1. Tap on the Settings app. 2. Choose iCloud from the list of available options. 3. From the list of iCloud settings, tap on the Storage & Backup option. 4. Under the Storage section you will find the details of iCloud usage: Total Storage and Available Storage. via tech-recipes

    Read the article

  • eSTEP Newsletter November 2012

    - by uwes
    Dear Partners,

    We would like to inform you that the November '12 issue of our Newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Celebrates 25 Years of SPARC Innovation; IDC White Paper Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement: Oracle Solaris 11.1 Availability (data sheet, new features, FAQs, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement: Oracle Solaris Cluster 4.1 Availability (new features, FAQs, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available.

    Technical Section: Oracle white papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground, with an additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Manager Ops Center 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance, with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Post New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle Sun ZFS Storage Appliance Reference Architecture for VMware vSphere4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600, with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving Down the High Cost of Storage, Thin Provisioning with Pillar Axiom 600, Pillar Axiom 600 System Overview and Architecture; Migrate to Oracle's SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems.

    Learning & Events: Recently delivered Techcasts; Learning Paths; Oracle Database 11g: Database Administration (New) Learning Path; Webcast: Drill Down on Disaster Recovery; What are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine.

    References: ARTstor Selects Oracle's Sun ZFS Storage 7420 Appliances To Support Rapidly Growing Digital Image Library; Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year; Oracle's CRM Cloud Service Powers Innovation: Applications on Demand, Technology on Demand.

    How To: How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to Prepare a Sun ZFS Storage Appliance to Serve as a Storage Device with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System in Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System; How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook.

    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
    My question is not about a specific code snippet but more general, so please bear with me: how should I organize the data I'm analyzing, and which tools should I use to manage it?

    I'm using python and numpy to analyse data. Because the python documentation indicates that dictionaries are very optimized in python, and also because the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary: the position in the hierarchy defines the nature of the element, and each new line defines the contents of a key in the preceding level:

    ```
    [AS091209M02] [AS091209M01] [AS090901M06] ...
        [100113] [100211] [100128] [100121]
            [R16] [R17] [R03] [R15] [R05] [R04] [R07] ...
                [1263399103] ...
                    [ImageSize] [FilePath] [Trials] [Depth] [Frames] [Responses] ...
                        [N01] [N04] ...
                            [Sequential] [Randomized]
                                [Ch1] [Ch2]
    ```

    Edit: to explain my data set a bit better:

    ```
    [individual]                      e.g. [AS091209M02]
    [imaging session (date string)]   e.g. [100113]
    [region imaged]                   e.g. [R16]
    [timestamp of file]               e.g. [1263399103]
    [properties of file]              e.g. [Responses]
    [regions of interest in image]    e.g. [N01]
    [format of data]                  e.g. [Sequential]
    [channel of acquisition]          e.g. [Ch1]  (this key indexes an array of values)
    ```

    The type of operations I perform is, for instance, to compute properties of the arrays (listed under Ch1, Ch2), pick up arrays to make a new collection, for instance analyze responses of N01 from region 16 (R16) of a given individual at different time points, etc. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig).

    My problem comes from the cumbersome manner in which I need to program the operations on the dictionary. I often have stretches of code that go like this:

    ```python
    for mk in dic.keys():
        for rgk in dic[mk].keys():
            for nk in dic[mk][rgk].keys():
                for ik in dic[mk][rgk][nk].keys():
                    for ek in dic[mk][rgk][nk][ik].keys():
                        # do something
    ```

    which is ugly, cumbersome, non-reusable, and brittle (it needs to be recoded for any variant of the dictionary). I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in ipython when I'm dealing with deeply nested recursive functions). In the end the only recursive function I use regularly is the following:

    ```python
    def dicExplorer(dic, depth=-1, stp=0):
        '''prints the hierarchy of a dictionary.
        if depth is not specified, explores the whole dictionary'''
        if depth - stp == 0:
            return
        try:
            list_keys = dic.keys()
        except AttributeError:
            return
        stp += 1
        for key in list_keys:
            print '+%s> [\'%s\']' % (stp * '---', key)
            dicExplorer(dic[key], depth, stp)
    ```

    I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need to either use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (sqlite?). My problem is that since I'm (badly) self-taught in regards to programming, I lack practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and efforts into something that will be a dead end for my needs. So what are your suggestions to tackle this issue, and be able to code my tools in a more brief, flexible and re-usable manner?

    Read the article

  • PARTNER WEBCAST: Nimble SmartStack for Oracle with Cisco UCS (Nov 12)

    - by Zeynep Koch
    You are invited to a live webcast with Nimble Storage, Oracle, and Cisco, where we will talk about the new SmartStack solution from Nimble Storage that features Oracle Linux, Oracle VM, and Cisco UCS products.

    When: Tuesday, November 12, 2013, 11:00 AM Pacific Time

    Panelists:
    Michele Resta, Director of Linux and Virtualization Alliances, Oracle
    John McAbel, Senior Product Manager, Cisco
    Ibby Rahmani, Solutions Marketing, Nimble Storage

    SmartStack™ solutions provide pre-validated reference architectures that speed deployments and minimize risk. All SmartStack solutions incorporate Cisco UCS as the compute and network infrastructure. In this webinar, you will learn how Nimble Storage SmartStack with Oracle and Cisco provides a converged infrastructure for Oracle Database environments with Oracle Linux and Oracle VM. SmartStack, built on best-of-breed components, delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or Oracle Real Application Clusters (RAC) on multiple nodes.

    Register today

    Read the article

  • Gnome trash on cifs mount random behaviour

    - by BobPenguin
    Hello everyone, I'm seeing some weird behaviour in Ubuntu 10.04. I have a cifs mount that's mounted from fstab as follows:

    ```
    //192.168.1.1/share /media/storage cifs _netdev,username=guest,password="",uid=1000,gid=100,file_mode=0777,dir_mode=0777 0 0
    ```

    I can mount the share using sudo mount -a; my user can then access it, and create and delete files. The deleted files appear in the Gnome trash applet, and a folder /media/storage/.Trash-1000 is created automatically. When I log out, restart the machine and log in, the cifs share is mounted but the trash applet is empty. If I unmount the share with sudo umount /media/storage and then remount with sudo mount -a, the trash applet displays the contents of the .Trash-1000 folder! It gets stranger: sometimes after umount then mount -a the trash is STILL empty, but another round of umount then mount -a fixes it. It seems like the trash applet is "forgetting" to scan the /media/storage mount point and is not always finding the .Trash-1000 folder there. Even when the trash applet is not displaying any trash from /media/storage/.Trash-1000, I can still delete things from /media/storage and they're moved to the .Trash-1000 folder. So I conclude there's a bug in the trash applet... anyone know how to fix it?

    Read the article

  • Download the ZFSSA Objection Handling document (PDF)

    - by swalker
    View and download the new ZFS Storage Appliance objection handling document from the Oracle HW Technical Resource Centre here. This document aims to address the most common objections encountered when positioning the ZFS Storage Appliance disk systems in production environments. It will help you to be more successful in establishing the undeniable benefits of the Oracle ZFS Storage Appliance in your customers' IT environments. If you do not already have an account to access the Oracle Hardware Technical Resource Centre, please click here and follow the instructions to register.

    Read the article

  • C# - Which is more efficient and thread-safe: static or instance classes?

    - by Soni Ali
    Consider the following two scenarios:

    ```csharp
    // Data contract
    public class MyValue { }
    ```

    Scenario 1: Using a static helper class.

    ```csharp
    public class Broker
    {
        private string[] _userRoles;

        public Broker(string[] userRoles)
        {
            this._userRoles = userRoles;
        }

        public MyValue[] GetValues()
        {
            return BrokerHelper.GetValues(this._userRoles);
        }
    }

    static class BrokerHelper
    {
        static Dictionary<string, MyValue> _values = new Dictionary<string, MyValue>();

        public static MyValue[] GetValues(string[] rolesAllowed)
        {
            return FilterForRoles(_values, rolesAllowed);
        }
    }
    ```

    Scenario 2: Using an instance class.

    ```csharp
    public class Broker
    {
        private BrokerService _service;

        public Broker(params string[] userRoles)
        {
            this._service = new BrokerService(userRoles);
        }

        public MyValue[] GetValues()
        {
            return _service.GetValues();
        }
    }

    class BrokerService
    {
        private Dictionary<string, MyValue> _values;
        private string[] _userRoles;

        public BrokerService(string[] userRoles)
        {
            this._userRoles = userRoles;
            this._values = new Dictionary<string, MyValue>();
        }

        public MyValue[] GetValues()
        {
            return FilterForRoles(_values, _userRoles);
        }
    }
    ```

    Which of the [Broker] scenarios will scale best if used in a web environment with about 100 different roles and over a thousand users? NOTE: Feel free to suggest any alternative approach.
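    One point worth making explicit: in scenario 1 the static _values dictionary is shared by every request in the web application, and Dictionary<TKey, TValue> is not safe for concurrent writes. A hedged sketch of one way to make the shared cache thread-safe (ConcurrentDictionary requires .NET 4; the FilterForRoles body is a placeholder for the question's filtering logic, which isn't shown):

    ```csharp
    using System.Collections.Concurrent;

    static class BrokerHelper
    {
        // ConcurrentDictionary supports lock-free reads and thread-safe writes,
        // a good fit for a cache that is read far more often than it is mutated.
        static readonly ConcurrentDictionary<string, MyValue> _values =
            new ConcurrentDictionary<string, MyValue>();

        public static MyValue[] GetValues(string[] rolesAllowed)
        {
            // Reads never block; writers elsewhere can call _values.TryAdd(...)
            // or _values.AddOrUpdate(...) without corrupting the structure.
            return FilterForRoles(_values, rolesAllowed);
        }

        static MyValue[] FilterForRoles(
            ConcurrentDictionary<string, MyValue> values, string[] roles)
        {
            // Placeholder for the question's role-filtering logic.
            return new MyValue[0];
        }
    }
    ```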

    Read the article

  • Efficient algorithm for creating an ideal distribution of groups into containers?

    - by Inshim
    I have groups of students that need to be allocated into classrooms of a fixed capacity (say, 100 chairs in each). Each group must be allocated to a single classroom, even if it is larger than the capacity (i.e. there can be an overflow, with students standing up). I need an algorithm that makes the allocations with minimal overflow and minimal under-capacity classrooms. A naive algorithm to do this allocation is horrendously slow with ~200 groups, where about half of them are under 20% of the classroom size. Any ideas where I can find at least a good starting point for making this algorithm lightning fast? Thanks!
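    This is a bin-packing-style problem (NP-hard in general), so fast answers are heuristics. A common starting point is best-fit decreasing: sort groups largest first, place each in the room it fills most snugly, and overflow into the emptiest room only when nothing fits. A hedged C# sketch with illustrative names:

    ```csharp
    using System;
    using System.Linq;

    static class Allocator
    {
        // Greedy best-fit decreasing: a heuristic, not an optimal solver.
        // Runs in O(groups * rooms), comfortably fast for ~200 groups.
        public static int[] Allocate(int[] groupSizes, int roomCount, int capacity)
        {
            int[] remaining = Enumerable.Repeat(capacity, roomCount).ToArray();
            int[] assignment = new int[groupSizes.Length];

            var bySizeDesc = Enumerable.Range(0, groupSizes.Length)
                                       .OrderByDescending(g => groupSizes[g]);
            foreach (int g in bySizeDesc)
            {
                int best = -1;
                for (int r = 0; r < roomCount; r++)
                    if (remaining[r] >= groupSizes[g] &&
                        (best < 0 || remaining[r] < remaining[best]))
                        best = r;                        // tightest room that fits

                if (best < 0)                            // nothing fits: overflow
                    best = Array.IndexOf(remaining, remaining.Max());

                assignment[g] = best;
                remaining[best] -= groupSizes[g];
            }
            return assignment;
        }
    }
    ```

    Handling the large groups first means the many small groups (the ~50% under 20 students) end up as filler for the remaining gaps, which is exactly where a decreasing-order greedy does well.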

    Read the article

  • Is the SAN dying???

    - by RickHeiges
    Is the SAN dying? The reason that I ask this question is that MSFT has unleashed technologies this year that point in that direction:

    - AlwaysOn Availability Groups shun shared storage
    - Windows 2012 has storage replication technology that does not require a SAN
    - Windows 2012 has Hyper-V Replica technology that does not require a SAN
    - PDW v2 continues to reinforce the approach of avoiding shared storage

    I'm not saying that SAN technology does not have its place or does not have benefits inherent to the beast....(read more)

    Read the article

  • Is there a more efficient way to get the number of search results from a google query?

    - by highone
    Right now I am using this code:

    ```csharp
    string url = "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=hey&esrch=FT1";
    string source = getPageSource(url);

    string[] stringSeparators = new string[] { "<b>", "</b>" };
    string[] b = source.Split(stringSeparators, StringSplitOptions.None);

    bool isResultNum = false;
    foreach (string s in b)
    {
        if (isResultNum)
        {
            MessageBox.Show(s.Replace(",", ""));
            return;
        }
        if (s.Contains(" of about "))
        {
            isResultNum = true;
        }
    }
    ```

    Unfortunately it is very slow. Is there a better way to do it? Also, is it legal to query Google like this? From the answer to this question it didn't sound like it was: http://stackoverflow.com/questions/903747/how-to-download-google-search-results
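    A single regular-expression pass avoids splitting the entire page into an array. A hedged sketch: it assumes the count still appears as "of about <b>N</b>" in the page source, which Google can change at any time, and, as the linked question notes, scraping may violate Google's terms of service (the sanctioned route is their search API):

    ```csharp
    using System.Text.RegularExpressions;

    static long GetResultCount(string pageSource)
    {
        // Capture the digits (with thousands separators) between the <b> tags
        // that follow "of about", then strip the commas and parse.
        Match m = Regex.Match(pageSource, @"of about <b>([\d,]+)</b>");
        return m.Success ? long.Parse(m.Groups[1].Value.Replace(",", "")) : -1L;
    }
    ```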

    Read the article
