Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 701/1338 | < Previous Page | 697 698 699 700 701 702 703 704 705 706 707 708  | Next Page >

  • How should I configure TRIM Support for LVM logical volumes?

    - by Zack Perry
    I am setting up a notebook for software demo purposes. The machine has an Intel Core i7 CPU, 8GB RAM, and a 128GB SSD, and runs Ubuntu 12.04 LTS 64-bit desktop. As it is, the SSD is configured with a single volume group, with /boot, swap, and / all in their respective logical volumes. They collectively consume 30GB of space. I plan to use the remainder for logical volumes for KVM guests, all running Ubuntu 12.04 Server. I would like to ensure that the SSD is utilized optimally. Although there is some great info on this site about setting up TRIM support for file system setups that do not involve LVM, I have not found an explicit guide for my planned setup. I did find this page, which talks about adding issue_discards in /etc/lvm/lvm.conf, but in said file on my machine I didn't find the cited content. I double-checked man lvm.conf(5) and didn't see any mention of this option either, so I'm not sure what to do. Furthermore, even if adding the option is the right thing to do, should I still add mount options such as noatime to my machine's /etc/fstab? Any tips, pointers, and/or further guidance are greatly appreciated.
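
    A minimal sketch of the two settings in question (hedged: issue_discards was reportedly added around lvm2 2.02.85, so an older lvm2 build would explain it being missing from both lvm.conf and the man page; verify against your installed version):

        # /etc/lvm/lvm.conf -- inside the "devices" section; makes LVM pass
        # discards down to the SSD when extents are freed (e.g. lvremove, lvreduce)
        devices {
            issue_discards = 1
        }

        # /etc/fstab -- per-filesystem TRIM (discard) plus fewer metadata writes (noatime)
        /dev/mapper/vg0-root  /  ext4  discard,noatime,errors=remount-ro  0  1

    Here vg0-root is a placeholder logical volume name. Note that issue_discards only covers LVM-level frees, so the fstab side (or a periodic fstrim) is still needed for file deletions inside the filesystems.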

    Read the article

  • Configuring Request-Reply in JMSAdapter

    - by [email protected]
    Request-Reply is a new feature in the 11g JMSAdapter that helps you achieve the following: it allows you to combine Request and Reply in a single step (in prior releases of the Oracle SOA Suite, you would need to configure two distinct adapters), and it performs automatic correlation without you needing to configure BPEL "correlation sets". This works seamlessly in Mediator and BPMN as well. In order to configure JMSAdapter Request-Reply, follow these steps:
    1) Drag and drop a JMSAdapter onto the "External References" swim lane in your composite editor.
    2) Enter default values for the first few screens in the JMS Adapter wizard until you hit the screen where the wizard prompts you to enter the operation name. Select "Request-Reply" as the "Operation Type" and Asynchronous as the "Operation Name".
    3) Select the Request and Reply queues in the following screens of the wizard. The message will be enqueued in the "Request" queue and the reply will be returned in the "Reply" queue. The reason I have used such a selector is that the back-end system that reads from the request queue and generates the response in the response queue actually generates more than one response, so I must use a filter to exclude the unwanted responses.
    4) Select the message schema for the request as well as the response.
    5) Add an <invoke> activity in BPEL corresponding to the JMS Adapter partner link. Please note that I am setting an additional header as my third-party application requires this.
    6) Add a <receive> activity just after the <invoke> and select the "Reply" operation. Please make sure that the "Create Instance" option is unchecked.
    Your completed BPEL process will look something like this:
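
    The original screenshot did not survive this excerpt, but a rough sketch of the paired activities from steps 5 and 6 in the .bpel source might look like the following (hedged: the partner link, variable names, and operation names are hypothetical, and the exact attributes vary with the wizard output):

        <invoke name="InvokeJMSRequest" partnerLink="JMSReqReply"
                operation="Request" inputVariable="requestMsg"/>
        <!-- correlation is handled by the adapter, not by BPEL correlation sets -->
        <receive name="ReceiveJMSReply" partnerLink="JMSReqReply"
                 operation="Reply" variable="replyMsg" createInstance="no"/>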

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system, three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type. From Book On-Line: "Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits." IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at. Reducing IO_COMPLETION waits: When IO is the issue, one should look at the following things related to the IO subsystem:
    - Proper placement of files is very important. We should check the file system for proper placement of files: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the File Statistics and see if there are high IO Read and IO Write Stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly, but the SAN administrator did not accept that. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup, and as soon as we changed the HBA Queue Depth to a considerably higher value, there was a sudden, big improvement in performance.
    - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (considering the covering index has fewer columns than the clustered index; it depends upon the situation). You can refer to two articles I wrote about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes.
    Checking Memory-Related Perfmon Counters:
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is a concern)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently higher than your benchmark is a concern)
    - SQLServer: Buffer Manager\Buffer cache hit ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower than 300 seconds is a concern)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)
    Checking Disk-Related Perfmon Counters:
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than your benchmark is not good)
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
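
    Before tuning anything, it is worth confirming how prominent this wait actually is on the instance. A hedged T-SQL sketch (sys.dm_os_wait_stats is a standard DMV; the percentage framing is my own addition):

        -- Top waits by total wait time; look for IO_COMPLETION and PAGEIOLATCH_*
        SELECT wait_type,
               wait_time_ms,
               waiting_tasks_count,
               CAST(100.0 * wait_time_ms
                    / SUM(wait_time_ms) OVER () AS DECIMAL(5,2)) AS pct_of_total
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;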
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

  • MySQL and User level logging

    - by Adraen
    I have been looking at logging only certain users' activity in MySQL. I found that logging can be enabled or disabled for all users, but one of the services using the db does a lot of queries, and therefore I would like to log only specific users. Google told me that a flag can be SET to enable or disable logging; however, I cannot modify the service's DB connection code, and asking every single user to enable logging before they do anything might not be as reliable as I want. So, do you know if there is any way to log only a set of users' queries? Thanks!
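
    One hedged workaround sketch, since stock MySQL has no built-in per-user switch that I know of: leave the general query log on and filter it per user afterwards, because each connection is logged with a "Connect user@host" line and a thread id that prefixes its later queries. Paths and names below are illustrative, and the exact log column layout varies a bit by version:

        # enable the general query log (MySQL 5.1+), writing to a file
        mysql -u root -p -e "SET GLOBAL general_log_file='/var/log/mysql/general.log'; SET GLOBAL general_log='ON';"

        # find the thread ids belonging to the interesting user...
        grep "Connect" /var/log/mysql/general.log | grep "appuser@"

        # ...then pull out everything that thread did (12345 = a thread id from above)
        grep -w "12345" /var/log/mysql/general.log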

    Read the article

  • Disk Redundancy across different servers

    - by Mascarpone
    I have 3 servers, all with the same specs: Intel CPU, 8 GB RAM, Linux or BSD, and a single 2TB desktop SATA drive with more than 10K hours of operation and less than 300 GB used. My provider cannot install a second hard drive, but can guarantee that the drive will be replaced immediately in case of failure, with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on both of the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to guarantee some sort of redundancy. Since bandwidth inside the DC is free, this would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
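
    A minimal sketch of the hourly cross-copy (host names, paths and excludes are placeholders; passwordless SSH keys between the machines are assumed):

        # /etc/cron.d/peer-backup on server1 -- push a mirror to both peers every hour
        0  * * * * root rsync -aH --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp / server2:/backups/server1/
        30 * * * * root rsync -aH --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp / server3:/backups/server1/

    One design note: --delete keeps the mirrors exact but also mirrors mistakes within the hour; rsync's --link-dest option can keep a few hardlinked snapshots cheaply if protection against that matters too.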

    Read the article

  • ASP.NET MVC Controllers & Actions In Regards To URLs And SEO

    - by user1066133
    The general idea is that if I were to create an MVC site, simple pages such as the contact and about pages would be placed under the Home controller. So my URLs would look like http://www.mysite.com/home/contact and http://www.mysite.com/home/about. The above works just fine, but I really don't like having the "home" portion in the URL. So what negatives would there be if I decided to make controllers named Contact and About, each with just a single Index action, so that the URLs would be simplified to http://www.mysite.com/contact and http://www.mysite.com/about? This method looks cleaner. Do any of you do the same or something similar? I've been trying to work on SEO for an escort service site I've developed, and when you search for the females the link looks like http://www.mysite.com/escorts/female-escorts, and likewise for males. I'm wondering if I should remove the Escorts controller and just create a Female_Escorts controller with an Index action only, so it comes out as http://www.mysite.com/female-escorts.
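
    For what it's worth, a third option is custom routes: keep the existing controllers and shorten only the URLs. A hedged ASP.NET MVC sketch (the route names and controller layout are assumptions, and these routes must be registered before the default route):

        // in RouteConfig.RegisterRoutes (or Global.asax in older MVC versions)
        routes.MapRoute(
            name: "Contact",
            url: "contact",
            defaults: new { controller = "Home", action = "Contact" });

        routes.MapRoute(
            name: "FemaleEscorts",
            url: "female-escorts",
            defaults: new { controller = "Escorts", action = "FemaleEscorts" });

    This keeps related actions grouped in one controller while still producing http://www.mysite.com/contact style URLs.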

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring; that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow.

    But why make a change? As I’ve described, we need to change our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:
    - Stateless Programming: Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process. An interesting learning exercise for Stateless Programming (although not unique to this language type) is to learn Functional Programming.
    - Server-Side Processing: Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.
    - Token-Based Authentication: Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a Certificate, is sent along for a series of requests or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources: See the references on “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx. Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming. Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing. Claims Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
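
    To make the stateless idea concrete, here is a minimal, hedged C# sketch; the file-backed store is only a stand-in for shared storage such as Azure Table or Blob storage, and all names are illustrative:

        using System;
        using System.IO;

        class StatelessCounter
        {
            // State lives in a location common to all servers, never in a field
            // on this class, so any instance on any machine can serve the next call.
            static int NextValue(string sharedStatePath)
            {
                int current = File.Exists(sharedStatePath)
                    ? int.Parse(File.ReadAllText(sharedStatePath))
                    : 0;
                File.WriteAllText(sharedStatePath, (current + 1).ToString());
                return current + 1;
            }

            static void Main()
            {
                // Any of these calls could have run on a different machine.
                Console.WriteLine(NextValue("state.txt"));
                Console.WriteLine(NextValue("state.txt"));
                Console.WriteLine(NextValue("state.txt"));
            }
        }

    A real implementation would also need concurrency control around the read-modify-write, which the shared storage service typically provides (leases, ETags, or transactions).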

    Read the article

  • How do I access a shared folder using credentials other than the ones I logged in with?

    - by George Sealy
    I have a lab full of Windows 7 machines and a shared login (user360) that all my students use. I also have a shared folder that they all have read/write access to (for moving files around easily). My problem is that I also want to be able to create a shared folder for each student for submitting assignments. I can set up a shared folder with permissions for just a single user, and not the 'user360' account. The problem is, when I'm logged in as user360 and I try to open the 'StudentA' folder, Windows never asks me for alternate credentials; it just refuses access because the user360 account is not allowed access. Can anyone suggest a fix for this?
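
    One hedged workaround is to map the share explicitly with different credentials from a command prompt, so the mapping (rather than the interactive login) carries the identity. The server, share, and account names below are placeholders:

        rem map the per-student share, prompting for that account's password
        net use \\labserver\StudentA /user:labserver\StudentA *

        rem when finished, drop the connection so the next student can map theirs
        net use \\labserver\StudentA /delete

    One caveat worth knowing: Windows generally allows only one set of credentials per server at a time, so an existing connection to the same server under user360 may have to be deleted first.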

    Read the article

  • Should I use an ssl terminator or just haproxy?

    - by Justin Meltzer
    I'm trying to figure out how to set up my architecture for a socket.io app that will require both https and wss connections. I've found many tutorials on the web suggesting that you use something like stud or stunnel in front of haproxy, which then routes your unencrypted traffic to your app. If I were to go this route, is it suggested that haproxy and the ssl terminator be on separate instances, or is it fine if they are on the same EC2 server instance? If I do not want to use a separate ssl terminator, could I use haproxy to terminate the ssl? Or instead would it be possible to proxy these https and wss connections to my application and have the node app terminate the ssl itself?
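
    For reference, newer HAProxy releases (1.5 and later) can terminate SSL themselves, which removes the need for a separate stud/stunnel tier; whether that fits depends on the version available to you. A hedged config sketch with placeholder names and addresses:

        # haproxy.cfg fragment -- TLS terminated by HAProxy itself (1.5+)
        frontend https_in
            mode http
            bind *:443 ssl crt /etc/haproxy/certs/site.pem
            default_backend node_app

        backend node_app
            mode http
            # socket.io needs long-lived connections, so server timeouts matter
            timeout server 60s
            server app1 10.0.0.10:3000 check

    With this setup the Node app sees plain HTTP/WS and does not need to terminate SSL itself. Whether the terminator and proxy share one EC2 instance is mostly a question of CPU headroom, since the TLS handshakes are the expensive part.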

    Read the article

  • Proper Use Of HTML Data Attributes

    - by VirtuosiMedia
    I'm writing several JavaScript plugins that are run automatically when the proper HTML markup is detected on the page. For example, when a tabs class is detected, the tabs plugin is loaded dynamically and it automatically applies the tab functionality. Any customization options for the JavaScript plugin are set via HTML5 data attributes, very similar to what Twitter's Bootstrap Framework does. The appeal of the above system is that, once you have it working, you don't have to worry about manually instantiating plugins; you just write your HTML markup. This is especially nice if people who don't know JavaScript well (or at all) want to make use of your plugins, which is one of my goals. This setup has been working very well, but for some plugins, I'm finding that I need a more robust set of options. My choices seem to be having an element with many data attributes or allowing for a single data-options attribute with a JSON options object as a value. Having a lot of attributes seems clunky and repetitive, but going the JSON route makes it slightly more complicated for novices, and I'd like to avoid full-blown JavaScript in the attributes if I can. I'm not entirely sure which way is best. Is there a third option that I'm not considering? Are there any recommended best practices for this particular use case?
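
    A small sketch of the two styles side by side, plus how the JSON variant might be read (the markup and option names are invented for illustration):

        <!-- many attributes: verbose but novice-friendly -->
        <div class="tabs" data-active-tab="2" data-animate="true" data-speed="300"></div>

        <!-- single JSON options attribute: compact but stricter syntax -->
        <div class="tabs" data-options='{"activeTab": 2, "animate": true, "speed": 300}'></div>

        <script>
        // Reading either form with plain DOM APIs
        var el = document.querySelector('.tabs');
        var speed = el.getAttribute('data-speed');                      // "300" (a string)
        var opts = JSON.parse(el.getAttribute('data-options') || '{}'); // {} if absent
        </script>

    Note that the JSON form forces double quotes inside the attribute, which is exactly the kind of trap a non-JavaScript user will hit, so the many-attributes style may be the safer default for that audience.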

    Read the article

  • How to remap media keys on a laptop, without external programs?

    - by polemon
    As on most laptops, my laptop has "special" keys, or media keys, as they are sometimes called. On Linux, I can easily read keycodes with tools like xev and assign new functions to the keys, either with xmodmap or by simply binding them in my window manager. How is that possible on Windows? Is there even a way to scan those keys and remap them, possibly even giving them different functions as on Linux? If so, where do I do that? I have heard of several programs that attempt to change this, but I don't see why I should install another program; I see it as an integral part of the OS. For instance, I have a key with an 'i' in a circle. It normally opens a new browser window. When Chrome is already open, it makes the current tab go to my selected homepage (which is about:blank). I'd like to change its functionality.
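
    The closest built-in mechanism I know of is the Scancode Map registry value, which remaps keys system-wide at the keyboard-driver level. It needs a reboot, and, as a hedge, it only works for keys that actually produce PS/2-style scancodes; vendor media keys delivered via ACPI/HID events are out of its reach. An example .reg file that makes Caps Lock act as Escape:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
        ; header (8 zero bytes), entry count (2 = one mapping + terminator),
        ; then "to,from" scancode pairs, little-endian, ending with 4 zero bytes
        "Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,01,00,3a,00,00,00,00,00

    Whether a given media key is remappable this way depends on whether it shows up as a scancode at all, which is the same information xev would reveal on Linux.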

    Read the article

  • Apache rewrite rules cause a download dialog for the PHP file

    - by Shaihi
    I have Apache 2.2.17 from the WAMPServer 2.1 installation, and I am debugging a website entirely locally on my computer. I have the following rules in the .htaccess:

        # Use PHP5 Single php.ini as default
        AddHandler application/x-httpd-php5s .php
        Options +FollowSymlinks
        RewriteEngine on
        RewriteBase /
        RewriteRule ^bella/(.*)/(.*)$ beauty.php?beauty_id=$1 [L]
        RewriteRule ^(argentina|brasil|chile|colombia|espana|mexico|rep_dominicana|uruguay|venezuela|peru|bolivia|cuba|ecuador|panama|paraguay|puerto_rico)/$ country.php?name=$1 [L]
        RewriteRule ^(argentina|brasil|chile|colombia|espana|mexico|rep_dominicana|uruguay|venezuela|peru|bolivia|cuba|ecuador|panama|paraguay|puerto_rico)/(hi5|facebook|twitter|orkut)/$ socialnetw.php?country=$1&category=$2 [L]

    The problem: when I enable these rules and try to access http://localhost/index.php using FF, I get a download dialog for the PHP file. If I comment out the Rewrite* part of the .htaccess file, then the index.php file loads fine, but navigation in the page is broken...

    Read the article

  • java.lang.OutOfMemoryError on ec2 machine

    - by vinchan
    I have a Java app on a large instance that will spawn up to 800 threads. I can run the application fine as user "root", but not as another user which I created. I get the deadly

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:657)
            at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1325)

    nightmare. I have already tried increasing the stack size in limits.conf, to no avail. Please help me out: what is different here between root and the other user?
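
    A hedged diagnostic sketch: on Linux, the usual per-user difference behind "unable to create new native thread" is the max-processes limit (threads count against nproc), which root often effectively bypasses. Checking and raising it might look like this (the user name and the value 4096 are placeholders):

        # compare the limit each user actually gets in a fresh login shell
        su - appuser -c 'ulimit -u'   # max user processes; threads count here
        su - root -c 'ulimit -u'

        # /etc/security/limits.conf -- raise the cap for the app user
        appuser  soft  nproc  4096
        appuser  hard  nproc  4096

    The new limit applies only to fresh login sessions, so the app has to be restarted from a new shell for it to take effect.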

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest. Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. The intention: my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases. If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t decide they suddenly wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it. As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant. Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • RDMA architecture - do you need adapters on both ends?

    - by Bobb
    I know Linux can use RDMA NICs like Solarflare... I just found that Intel has something similar, the NetEffect cards. But Intel talks all about clusters. Can someone please explain: if I want low-latency networking and install an RDMA NIC in my server, is there a limitation on where the cable can go? Is a specific device expected on the other end, such as a special RDMA switch, or an RDMA adapter before the switch, or what? Why all this cluster talk? What if I want a single server with Windows (I can install Windows HPC or Windows 2008 R2)?

    Read the article

  • Are project managers useful in Scrum?

    - by Martin Wickman
    There are three roles defined in Scrum: Team, Product Owner and Scrum Master. There is no project manager; instead, the project manager's job is spread across the three roles. For instance:
    - The Scrum Master: Responsible for the process. Removes impediments.
    - The Product Owner: Manages and prioritizes the list of work to be done to maximize ROI. Represents all interested parties (customers, stakeholders).
    - The Team: Self-manages its work by estimating and distributing it among themselves. Responsible for meeting their own commitments.
    So in Scrum, there is no longer a single person responsible for project success. There is no command-and-control structure in place. That seems to baffle a lot of people, specifically those not used to agile methods, and of course, PMs. I'm really interested in this and in what your experiences are, as I think this is one of the things that can make or break a Scrum implementation. Do you agree with Scrum that a project manager is not needed? Do you think such a role is still required? Why?

    Read the article

  • ASP.NET Connections Spring 2012 Talks and Code

    - by Stephen.Walther
    Thank you everyone who attended my ASP.NET Connections talks last week in Las Vegas. I’ve attached the slides and code for the three talks that I delivered:
    - Using jQuery to interact with the Server through Ajax – In this talk, I discuss the different ways to communicate information between browser and server using Ajax. I explain the difference between the different types of Ajax calls that you can make with jQuery. I also discuss the differences between the JavaScriptSerializer, the DataContractJsonSerializer, and the JSON.NET serializer.
    - ASP.NET Validation In-Depth – In this talk, I distinguish between View Model Validation and Domain Model Validation. I demonstrate how you can use the validation attributes (including the new .NET 4.5 validation attributes), the jQuery Validation library, and the HTML5 input validation attributes to perform View Model Validation. I then demonstrate how you can use the IValidatableObject interface with the Entity Framework to perform Domain Model Validation.
    - Using the MVVM Pattern with JavaScript Views – In this talk, I discuss how you can create single page applications (SPA) by taking advantage of the open-source KnockoutJS library and the ASP.NET Web API.
    Be warned that the sample code is contained in Visual Studio 11 Beta projects. If you don’t have this version of Visual Studio, then you will need to open the code samples in Notepad. Also, I apologize for getting the code for these talks posted so slowly; I’ve been down with a nasty case of the flu for the past week and haven’t been able to get to a computer.

    Read the article

  • A Model for Planning Your Oracle BPM 10g Migration by Kris Nelson

    - by JuergenKress
    As the Oracle SOA Suite and BPM Suite 12c products enter beta, many of our clients are starting to discuss migrating from the Oracle 10g or prior platforms. With BPM Suite 11g, Oracle introduced a major change in architecture, with a strong focus on integration with SOA and an entirely new technology stack. In addition, there were fresh new UIs and a renewed business focus, with an improved Process Composer and features like Adaptive Case Management. While very beneficial to both technology and the business, the fundamental change in architecture does pose clear migration challenges for clients who have made investments in the 10g platform. Some of the key challenges facing 10g customers include:
    - Managing in-process instance migration and running multiple process engines
    - Migration of User Interfaces and other code within the environment that may not be automated
    - Growing or finding technical staff with both 10g and 12c experience
    - Managing migration projects while continuing to move the business forward and meet day-to-day responsibilities
    As a former practitioner in a mixed 10g/11g shop, I wrestled with many of these challenges as we tried to plan ahead for the migration. Luckily, there is migration tooling on the way from Oracle, and there are several approaches you can use in planning your migration efforts. In addition, you already have a defined and visible process on the current platform, which will be invaluable as you migrate.
    A Migration Model: This model presents several options across a value and investment spectrum. The goal of the AVIO Migration Model is to kick-start discussions within your company and assist in creating a plan of action to take advantage of the new platform. As with all models, this is a framework for discussion, and certain processes or situations may not fit. Please contact us if you have specific questions or want to discuss migration efforts in your situation. Read the complete article here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Kris Nelson,ACM,Adaptive Case Management,Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

    Read the article

  • Simple recursive DNS resolver for debugging (app or VM)

    - by notpeter
    I have an issue which I believe is caused by incorrect DNS queries (doubled subdomains like _record.host.subdomain.tld.subdomain.tld) when querying for SRV records. So I need an alternate DNS server with heavy logging, so I can see every query (especially the stupid ones), acting as a recursive resolver with the ability to create records which override real DNS records. That way I can not only find the records it's (wrongly) looking for, but populate those records as well. I know I could install a DNS server on yet another Linux box, but I feel like this is the sort of thing for which someone may already have set up a simple Python script or single-use VM.
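
    dnsmasq can be bent to this purpose with very little configuration; a hedged sketch (the domain names, override address, and SRV target are placeholders):

        # /etc/dnsmasq.conf -- log every query, recurse upstream, override a few names
        log-queries
        log-facility=/var/log/dnsmasq.log
        server=8.8.8.8                          # upstream resolver for everything else
        address=/host.subdomain.tld/10.0.0.5    # override an A record
        srv-host=_record._tcp.host.subdomain.tld,target.host.tld,5060

    Then point the misbehaving client at this box and tail the log; the doubled-suffix queries should show up verbatim.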

    Read the article

  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review-again.aspx The Building a Data Mart with Pentaho Data Integration Video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (I even heard the name of Ralph Kimball in the video). Indeed, a video watcher should be familiar with concepts such as the Star Schema, Slowly Changing Dimension types, etc., so prior to watching this course I suggest skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph's excellent The Data Warehouse Toolkit. By the way, the author expands beyond Pentaho alone to MySQL and MonetDB, which is real icing on the cake! Indeed, I would even suggest the course should be named 'Building a Data Warehouse with Pentaho'. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the VI editor and the Bash command shell, but it seems similar requirements would also apply on the Windows OS. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used in this course several times; it seems to be no more complex than, say, MySQL, but based on what I read it is very well suited to fast querying of big volumes of data thanks to having a columnstore (vertical data storage). I don't see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help:
    - Pentaho: the free Community Edition must be more than anyone needs to learn it, or even go into a POC.
    - MonetDB can be downloaded (it exists for both Linux and Windows) from http://goo.gl/FYxMy0 (just see the appropriate link on the left).
    - The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN.
    - To create or edit database entities and/or schemas otherwise, one can use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net.
    Next, I must confess Diethard is very knowledgeable in what he does and beyond. However, the user of the course will hear some accent, especially if one's mother tongue is English, but I got over it within a few chapters. I liked the rate at which the material is presented; it makes me feel I paid for every second. Eventually, my impressions are:
    - Pentaho is an awesome ETL offering; it is very much worth learning (I am an ETL fan and a heavy user of SSIS)
    - MonetDB is nice; it tickles my fancy to know it more
    - Data Warehousing, despite all the Big Data tool offerings (Hive, Sqoop, Pig on Hadoop), still rocks with the traditional tools
    - Chapters 2 to 6 were the most fun to me, with chapter 8 being the most difficult.
    In terms of closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a "supposed to be done yesterday" type of project. It is for a beginner-to-intermediate-level ETL/DW developer, but one would need to learn more on Data Warehousing and Pentaho; for that I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article

  • How do I get my laptop's screen brightness settings back?

    - by Mason Wheeler
    From when I bought it up until yesterday, my laptop would dim the screen automatically if you let it sit for long enough, and then if you hit a key or moved the mouse, the brightness would jump back to what you had it at. I enjoy that behavior. It works well. Then I booted it up today, and found that now it will dim the screen automatically if you let it sit for long enough, but it doesn't restore the screen brightness. I have to do that manually, every single time, and it's getting really old really fast. It's an Alienware M17 running Windows 7 64-bit, in case that helps.

    Read the article

  • How to type French accents with a US Apple keyboard in Windows 7 (EN) with Boot Camp

    - by Nicolas
    Hi, Everything's in the question. I have an iMac running Snow Leopard with a US Apple keyboard, and I've installed Windows 7 (English) with Boot Camp. I need to be able to type French accents, but I can't really find a way to do it. I tried the way I did it on Snow Leopard, ALT + E then E to make an e acute, and even the Windows way with ALT + 1 3 0, but no luck. Does anyone have something to suggest? Cheers, Nicolas.
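
    Two standard Windows approaches that might help here: switching the input layout to "United States-International" (then ' followed by e produces é), or using the Windows-1252 Alt codes directly. The Alt codes assume a numeric keypad, or Fn-key keypad emulation on a laptop keyboard:

        Alt+0233  é        Alt+0232  è        Alt+0234  ê
        Alt+0231  ç        Alt+0224  à        Alt+0249  ù

    Note that Alt+130 (the old code-page 437 code for é) depends on the active code page, which may explain why it didn't work.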

    Read the article

  • Apache and logrotate instances

    - by OlivierDofus
    Hi! If I have one website and I want to rotate its logs, one instance of logrotate is launched. As many logrotate instances are launched as there are virtual websites. Here you can find mod_log_rotate, with versions for Apache 1.3 and (only) 2.0: http://www.hexten.net/wiki/index.php/Mod-log-rotate It's 6 years old. Is there something new, or maybe is there something like that in recent Apache versions? I didn't find anything like that, and I don't know if this code is still usable for recent Apache versions (2.2.x). Don't hesitate to edit my post to make it proper English, thanks a lot!
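
    For what it's worth, stock Apache has long shipped piped logging via rotatelogs, which sidesteps logrotate entirely: one long-lived rotatelogs process per log, and no HUP or restart needed at rotation time. A hedged per-vhost sketch (paths vary by distribution, sometimes /usr/bin/rotatelogs or bin/rotatelogs under the Apache root):

        # inside each <VirtualHost>: rotate daily (86400 s), timestamped filenames
        CustomLog "|/usr/sbin/rotatelogs /var/log/apache2/site1-access.%Y%m%d.log 86400" combined
        ErrorLog  "|/usr/sbin/rotatelogs /var/log/apache2/site1-error.%Y%m%d.log 86400"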

    Read the article

  • List SQL Server Instances using the Registry

    - by BuckWoody
    I read this interesting article on using PowerShell and the registry, and thought I would modify his information a bit to list the SQL Server instances on a box. The interesting thing about listing instances this way is that you can touch remote machines, find the instances when they are off, and so on. Anyway, here's the scriptlet I used to find the instances on my system:

        # open the registry hive on the target machine ('.' = the local box)
        $MachineName = '.'
        $reg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $MachineName)
        # each value name under this key is one installed SQL Server instance
        $regKey = $reg.OpenSubKey("SOFTWARE\\Microsoft\\Microsoft SQL Server\\Instance Names\\SQL")
        $regKey.GetValueNames()

    You can read more of his article to find out the reason for the remote registry call and so forth; there are also security implications here for being able to read the registry. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article
