Search Results

Search found 22040 results on 882 pages for 'process improvement'.


  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me. At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements. My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop. 
    The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way. DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help. I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

    Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. Likewise, the instructions in Part 2 won't work exactly as written for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info: You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client. You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install and provide a command-line instruction to execute which enables required Windows features.

    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error: Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint because it installs as part of the prerequisites automatically. 
    When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions: On the Choose the installation you want page of the installation wizard, I chose Server Farm. On the Server Type page, I chose Complete. At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint but have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have installed it. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached installation from a different angle and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed: Install SQL Server 2008 R2 to get a database engine instance installed. Run the SharePoint configuration wizard to set up the SharePoint databases. In Central Administration, create a Web application using classic mode authentication as per a TechNet article on PowerPivot Authentication and Authorization. Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon. Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. 
    I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • OS X SMB Connection to Windows Server 2008 R2

    - by Tawm
    I support many Macs that connect to an SMB share on a Windows Server 2008 R2 box. Occasionally I find that the Mac will try to connect and fail. The connection process will skip asking for credentials and there are none stored in the keychain's password section. The server logs will show that the user tried to authenticate with invalid credentials. There are also no lingering connections from the server's point of view. The workaround that I've found is to use an invalid username, or a username that isn't the user's, in the connection string for SMB: smb://domain;user@server/share instead of smb://server/share. This forces it to use the username I specified, which the client doesn't have anything stored for, so it will then pop up the login window where the user changes the username to the correct one and enters her password to connect happily. Specific computer in question: 15" MBP running Snow Leopard (10.6.7 or 10.6.8)

    Read the article

  • ADF Mobile Client is now Generally Available!

    - by joe.huang
    ADF Mobile Client is now generally available!  The press release went out this morning, and the ADF Mobile Client extensions can now be downloaded in the JDeveloper Update Center.  There is also a new Oracle Mobile Computing Strategy White Paper and Data Sheet available, for a high level overview of ADF Mobile. To get started with ADF Mobile Client development, please leverage the following resources: Oracle Technology Network ADF Mobile Landing Page: Review this page for all available resources for ADF Mobile development. Getting Started with ADF Mobile Client Demo: Short demo of the end-to-end development process. Tutorial for Mobile Application Development using ADF Mobile Client ADF Mobile Client Developer Guide ADF Mobile Client Samples: available in the JDeveloper Extension itself.  Located in <JDeveloper Install Location>/jdev/extensions/oracle.adfnmc.core/Samples directory.  Blogs will follow, describing each of the sample applications in more detail. Oracle Database Mobile Server: If database synchronization is needed, please follow this link to download/install Mobile Server. Leverage the JDeveloper Forum for any ADF Mobile related questions. You will need the latest (11g Patch Set 3, or 11.1.1.4.0) version of JDeveloper to use this extension.  To download the ADF Mobile Client extension in JDeveloper, go to the Help menu, select “Check For Update”, and look for the ADF Mobile Client extension in the Official Oracle Extensions and Updates center.  You can also directly download the extension from Oracle Technology Network. Check it out!  For any issues with accessing any of the links above, please contact me directly. Thanks, Joe Huang ([email protected])

    Read the article

  • Terribly slow Apache2 on VM Virtualbox

    - by cadavre
    I just launched VM VirtualBox with a guest Ubuntu Server on a Windows 8 host. Both 64-bit. Everything works perfectly fine; maybe it's because I'm not using any X... htop shows ~25% memory usage and everything looks fine, but not Apache2. Normally it's fine, but when I send a request from my browser on the host (networking mode set to Bridged mode), Apache2 turns into a 1-minute-long loading process with 100% CPU time. Any ideas how to debug it? Any ideas about what is choking it like this?
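
    A quick way to narrow down where the time goes is to time the same request from inside the guest and from the host: if http://127.0.0.1/ answers instantly on the guest but the bridged request from the host takes a minute, the problem is more likely in networking or name resolution than in Apache or the application itself. Below is a minimal timing sketch, assuming Python with the third-party requests package is available; the bridged IP 192.168.1.50 is a hypothetical placeholder for your guest's address.

        import time
        import requests  # third-party: pip install requests

        # Run once on the guest against 127.0.0.1, and once on the host
        # against the guest's bridged IP, then compare the timings.
        URLS = [
            "http://127.0.0.1/",     # on the guest itself
            "http://192.168.1.50/",  # hypothetical bridged address, from the host
        ]

        for url in URLS:
            start = time.monotonic()
            try:
                resp = requests.get(url, timeout=120)
                print(f"{url}: HTTP {resp.status_code} in {time.monotonic() - start:.2f}s")
            except requests.RequestException as exc:
                print(f"{url}: failed after {time.monotonic() - start:.2f}s ({exc})")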

    Read the article

  • Can I use Ubuntu One to sync data files between two remote computers

    - by Sleepy John
    I've got two computers, both running Ubuntu with files in their home folders sync'd to Ubuntu One. I'd like to know if it's possible to make Ubuntu One automatically download data changes that have been uploaded to Ubuntu One from one computer, to the equivalent data file in the other. Clarifying a bit further, I've installed Red Notebook on both computers and so they each have their own /.rednotebook/data folder containing a series of .txt files corresponding to the monthly entries in each of them. These are sync'd to upload any changes to those .txt files to Ubuntu One. My question is: can I, and if so how do I, make Ubuntu One automatically download and replace those .txt files in the other computer after they've been updated and uploaded from the first computer? I did laboriously manage to download all those text files which had been uploaded from the first computer, from Ubuntu One one-by-one to the second computer, but what I want to do is automate this process and that's where I'm stuck. I'm aware that things could get a bit complicated if both my computers were on-line at the same time and both were simultaneously making different Red Notebook entries, so that's not the scenario I'm trying to cover. All I want to achieve is that whatever updates to the files have been uploaded by one computer will automatically be downloaded to the same-named files in the other computer as soon as that second computer appears on line and detects that Ubuntu One has matching but more recent sync'd files than the ones it's holding.

    Read the article

  • No win 7 users available for login after dell datasafe factory reset

    - by user897052
    I created install discs using the Dell DataSafe 2.0 backup utility in order to re-install Windows on a friend's laptop (Dell Inspiron N5110). I ran the discs to do a factory reset. After the whole process, it booted, started loading Windows 7, displayed the messages "setup is preparing your computer for first use" and "setup is checking video performance," and showed the login screen. However, there don't seem to be any active users on the machine, so I opened a command prompt window to check the users on the machine. Using the command prompt (again, from the login window), I activated/enabled the administrator account, and even created another admin account, but upon logging in I received several errors, couldn't load any MMCs, etc. Any help would be appreciated.

    Read the article

  • Coldfusion deployment under Apache Tomcat with Virtual Hosts

    - by smalltiger85
    Hello, I'm looking for the proper way to share the Coldfusion engine for all my Virtual Hosts. This is how I have the virtual hosts configured in server.xml in the Tomcat conf:
    <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
    <Context path="" docBase="cfusion/" reloadable="true" privileged="true" antiResourceLocking="false" anitJARLocking="false" allowLinking="true"/>
    </Host>
    <Host name="mysite1" appBase="webapps" xmlValidation="false" xmlNamespaceAware="false">
    <Context path="" docBase="/mnt/webroot/mysite1" />
    </Host>
    This is not valid for me because I'm instantiating one Coldfusion process for every virtual host, and it requires me to copy the WEB-INF folder into every site. My question is, is there any way to share the same Coldfusion instance for every virtual host, maintaining the sites' webroot outside of the cfusion folder? I'm using Apache Tomcat 6 and Coldfusion Enterprise edition 8. Many thanks.

    Read the article

  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    This question already has an answer here: How to identify spammy domains giving backlinks to my site (to submit in disavow links in WMT) 2 answers A few months ago I received the following notification on Google Webmaster for a website I look after. Unnatural links to your site—impacts links Google has detected a pattern of unnatural artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster’s control, so for this incident we are taking targeted action on the unnatural links instead of on the site’s ranking as a whole. Learn more. The question here is, should we actively attempt to disavow these links given that the action is seemingly targeted to just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster and so far I've been through the disavow and reconsideration requests process 6 times, each taking 2-3 weeks only to be supplied just 2 more links that Google don't approve of. At this rate it will take me the rest of my natural life to cleanup all these spammy links! It seems disavowing is futile as they haven't implemented broad actions against the website as a whole and (from what I can gather) have already nullified the value of those offending links. Under the quoted statement above however is a reconsideration request button that seems to imply I should be actively doing something here? UPDATE 14th October -- I have since created a small .NET application that you can feed the CSV sample links file into from Google Webmaster. What this tool does is crawl all the links and looks for specific linking patterns as per some configurable match strings. I realised that many of the links that Google are taking issue with were created by a rogue SEO firm we hired several years ago. All the links are appended with 1 of 5 different descriptions. The application I built uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow txt file. In the end it had to come down to an algorithm as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.
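
    For anyone who wants to reproduce the approach described in the update without writing a .NET application, here is a minimal sketch of the same idea in Python. It only pattern-matches the exported link URLs (the original tool also crawled the pages); the CSV column layout and the match strings are assumptions, so adjust both to your own Webmaster Tools export and to the descriptions your SEO firm appended.

        import csv
        import re
        from urllib.parse import urlparse

        # Hypothetical patterns standing in for the rogue SEO firm's five descriptions.
        SPAM_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
            r"best\s+cheap\s+widgets",
            r"discount-widgets-online",
        )]

        def spammy_domains(csv_path):
            """Collect domains from the links export whose URL matches a known spam pattern."""
            domains = set()
            with open(csv_path, newline="", encoding="utf-8") as fh:
                for row in csv.reader(fh):
                    if not row:
                        continue
                    url = row[0].strip()  # assumption: the link URL is in the first column
                    if any(p.search(url) for p in SPAM_PATTERNS):
                        domains.add(urlparse(url).netloc)
            return sorted(domains)

        def write_disavow(domains, out_path="disavow.txt"):
            with open(out_path, "w", encoding="utf-8") as fh:
                fh.write("# Generated from the Webmaster Tools sample links export\n")
                for domain in domains:
                    fh.write(f"domain:{domain}\n")

        if __name__ == "__main__":
            write_disavow(spammy_domains("latest_links.csv"))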

    Read the article

  • Activity monitor is unable to execute queries against server

    - by mika
    SQL Server Activity Monitor fails with an error dialog: TITLE: Microsoft SQL Server Management Studio The Activity Monitor is unable to execute queries against server [SERVER]. Activity Monitor for this instance will be placed into a paused state. Use the context menu in the overview pane to resume the Activity Monitor. ADDITIONAL INFORMATION: Unable to find SQL Server process ID [PID] on server [SERVER] (Microsoft.SqlServer.Management.ResourceMonitoring) I have this problem on SQL Server 2008 R2 x64 Developer Edition, but I think it is found in all 64bit systems using SQL Server 2008, under some yet unidentified conditions. There is a bug report on this in Microsoft Connect. It seems that the problem is not solved yet.

    Read the article

  • Installation stops with cmd.exe window on Laptop

    - by Saariko
    I am installing W7 Ent on an LG R580. I am working with a valid ISO (it installs perfectly on other systems). During the installation, before the installation window, the process hangs, and I get a black cmd.exe screen with the following: Select Administrator: X:\windows\system32\cmd.exe Microsoft Windows [Version 6.1.7601] with a prompt for: X:\windows\system32 My keyboard at that time only prints capital letters with '^' before each. The only thing I am able to do is reboot. In the BIOS, I tried to disable USB Legacy (thinking the problem was with my DVD), but it did not help.

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using mysql for now). What I'm concerned about is this scenario: step 1: user signs up and user record is inserted into the database step 2: user then tries to login, and the login process queries the database for the user. however, this query hits the slave database, but the user record has not yet been replicated in the slave and it returns an error that the user does not exist. This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
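
    One common strategy is exactly what the scenario hints at: route reads that must see a user's own fresh write (such as the login lookup right after signup) to the master for a short grace window, and send everything else to the slave. A minimal sketch of that routing idea follows; the connection objects are placeholders for real MySQL connections (e.g. pymysql), and the 5-second grace window is an assumption you would tune to your observed replication lag.

        import time

        REPLICATION_GRACE_SECONDS = 5  # assumption: tune to your slave's typical lag

        class RoutingRepository:
            """Send reads to the slave, except for users who just wrote to the master."""

            def __init__(self, master_conn, slave_conn):
                self.master = master_conn
                self.slave = slave_conn
                self._recent_writes = {}  # username -> time of last write

            def create_user(self, username, password_hash):
                with self.master.cursor() as cur:
                    cur.execute(
                        "INSERT INTO users (username, password_hash) VALUES (%s, %s)",
                        (username, password_hash),
                    )
                self.master.commit()
                self._recent_writes[username] = time.monotonic()

            def find_user(self, username):
                # Read-your-own-writes: fall back to the master until the slave
                # has had time to catch up with this user's insert.
                wrote_at = self._recent_writes.get(username)
                fresh = wrote_at is not None and (
                    time.monotonic() - wrote_at < REPLICATION_GRACE_SECONDS
                )
                conn = self.master if fresh else self.slave
                with conn.cursor() as cur:
                    cur.execute("SELECT * FROM users WHERE username = %s", (username,))
                    return cur.fetchone()

    In a multi-server web tier the "recently wrote" marker would live in the user's session or a shared cache rather than in process memory, but the routing decision stays the same; other options are semi-synchronous replication or simply always authenticating against the master.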

    Read the article

  • 10 Reasons Why Java is the Top Embedded Platform

    - by Roger Brinkley
    With the release of Oracle ME Embedded 3.2 and Oracle Java Embedded Suite, Java is now ready to fully move into the embedded developer space, what many have called the "Internet of Things". Here are 10 reasons why Java is the top embedded platform.
    1. Decouples software development from hardware development cycle
    Development is typically split between both hardware and software in a traditional design flow. This leads to complicated co-design and requires prototype hardware to be built. This parallel and interdependent hardware / software design process typically leads to two or more re-development phases. With Embedded Java, all specific work is carried out in software, with the (processor) hardware implementation fully decoupled. This eliminates, or at least reduces, the need for re-spins of software or hardware, and the original development efforts can be carried forward directly into product development and validation.
    2. Development and testing can be done (mostly) using standard desktop systems through emulation
    Because the software and hardware are decoupled, it now becomes easier to test the software long before it reaches the hardware through hardware emulation. Emulation is the ability of a program in an electronic device to imitate another program or device. In the past, Java tools like the Java ME SDK and the SunSPOTs Solarium provided developers with emulation for a complete set of mobile telephones and SunSPOTs. This often included network interaction or, in the case of SunSPOTs, radio communication. What emulation does is speed up the development cycle by refining the software development process without the need for hardware. The software is fixed, redefined, and refactored without the timely expense of hardware testing. With tools like the Java ME 3.2 SDK, Embedded Java applications can be quickly developed on Windows-based platforms. In the end, of course, developers should do a full set of testing on the hardware as incompatibilities between emulators and hardware will exist, but the amount of time to do this should be significantly reduced.
    3. Highly productive language, APIs, runtime, and tools mean quick time to market
    Charles Nutter probably said it best when he tweeted, "Every time I see a piece of C code I need to port, my heart dies a little. Then I port it to 1/4 as much Java, and feel better." The Java environment is a very complex combination of a Java Virtual Machine, the Java language, and its robust APIs. Combine that with the Java ME SDK for small devices or just NetBeans for the larger devices and you have a development environment where development time is reduced significantly, meaning the product can be shipped sooner. Of course this is assuming that the engineers don't get slap happy adding new features given the extra time they'll have.
    4. Create high-performance, portable, secure, robust, cross-platform applications easily
    The latest JIT compilers for the Oracle JVM approach the speed of C/C++ code, and in some memory allocation intensive circumstances, exceed it. And specifically for embedded devices, both ME Embedded and SE Embedded have been optimized for smaller footprints. For portability, Java uses bytecode to make the language platform independent. This creates a write-once-run-anywhere environment that allows you to develop on one platform and execute on others, and avoids platform vendor lock-in. 
    For security, Java achieves protection by confining a Java program to the Java execution environment and not allowing it to access other parts of the computer. In a variety of systems, the program must execute reliably to be robust. Finally, Oracle Java ME Embedded is a cross-industry and cross-platform product optimized in release version 3.2 for chipsets based on the ARM architectures. Similarly, Oracle Java SE Embedded works on a variety of ARM V5, V6, and V7, X86 and Power Architecture Linux platforms.
    5. Java isolates your apps from language and platform variations (e.g. C/C++, kernel, libc differences)
    This has been a key factor in Java from day one. Developers write to Java and don't have to worry about underlying differences in the platform variations. Those platform variations are being managed by the JVM. Gone are the C/C++ problems like memory corruptions, stack overflows, and other such bugs which are extremely difficult to isolate. Of course this doesn't imply that you will be able to get away from native code completely. There could be some situations where you have to write native code in either assembler or C/C++. But those instances should be limited.
    6. Most popular embedded processors supported allowing design flexibility
    Java SE Embedded is now available on ARM V5, V6, and V7 along with Linux on X86 and Power Architecture platforms. Java ME Embedded is available on systems based on ARM architecture SOCs with low memory footprints and a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).
    7. Support for key embedded features (low footprint, power mgmt., low latency, etc)
    All embedded devices by their very nature are constrained in some way. Economics may dictate a device with less RAM and ROM. The CPU needs can dictate a less powerful device. Power consumption is another major resource in some embedded devices, as connecting to a consistent power source is not always desirable or possible; others have to be constantly on. Often many of these systems are headless (in the embedded space it's almost always Halloween). For memory resources, Java ME Embedded can run in an environment as low as 130KB RAM/350KB ROM for a minimal, customized configuration up to 700KB RAM/1500KB ROM for the full, standard configuration. Java SE Embedded is designed for environments starting at 32MB RAM/39MB ROM. Key functionality of embedded devices, such as auto-start and recovery and flexible networking, is fully supported. And while Java SE Embedded has been optimized for mid-range to high-end embedded systems, Java ME Embedded is a Java runtime stack optimized for small embedded systems. It provides a robust and flexible application platform with dedicated embedded functionality for always-on, headless (no graphics/UI), and connected devices.
    8. Leverage huge Java developer ecosystem (expertise, existing code)
    There are over 9 million developers in the world that work on Java, and while not all of them work on embedded systems, their wealth of expertise in developing applications is immense. In short, getting a Java developer to work on an embedded system is pretty easy; you probably have a Java developer living in your subdivision. Then of course there is the wealth of existing code. 
    The Java Embedded Community on Java.net is the central gathering place for embedded Java developers. Conferences like Embedded Java @ JavaOne and a variety of hardware vendor conferences like the Freescale Technology Forums offer an excellent opportunity for those interested in embedded systems.
    9. Easily create end-to-end solutions integrated with Java back-end services
    In the "Internet of Things", things aren't on an island doing a single task. For instance, an embedded drink dispenser doesn't just dispense a beverage, but could collect money from a credit card and also send information about current sales. Similarly, an embedded house power monitoring system doesn't just manage the power usage in a house, but can also send that data back to the power company. In both cases it isn't about the individual thing, but monitoring a collection of things. How much power did your block, subdivision, area of town, town, county, state, nation, world use? How many Dr Peppers were purchased from thing1, thing2, thingN? The point is that all this information can be collected and transferred securely (and believe me, that is a key issue that Java fully supports) to back-end services for further analysis. And what better back-end service exists than a Java back-end service? It's interesting to note that on larger embedded platforms that support the Java Embedded Suite, some of the analysis might be done on the embedded device itself, as JES has a GlassFish server and Java DB as part of the installation. The result is an end-to-end Java solution.
    10. Solutions from constrained devices to server-class systems
    Just take a look at some of the embedded Java systems that have already been developed and you'll see a vast range of solutions. The Livescribe pen, the Kindle, each and every Blu-ray player, Cisco's Advanced VoIP phone, the Kronos InTouch smart time clock, EnergyICT smart metering, EDF's automated meter management, Ricoh printers, and Stanford's automated car are just a few from a list of embedded Java implementations that continues to grow.
    Conclusion
    Now if you're a Java developer you probably look at some of the 10 reasons and say "duh", but for embedded developers this should be an eye-opening list. And with the release of ME Embedded 3.2 and the Java Embedded Suite, the embedded developer's life is now a whole lot easier. For the Java developer, your employment opportunities are about to increase. For both, it's a great time to start developing Java for the "Internet of Things".

    Read the article

  • Increase application performance on Amazon AWS

    - by Honus Wagner
    I've got a client with an MVC v1 (.NET) application running on a micro instance. On this instance, I've got .NET, IIS 7.5, and MS SQL Server 2008 running to handle the application. The client has reported that it is taking nearly 10 seconds to process each request. Even loading the initial login page takes about that long, then logging in takes that long, etc. The currently running instance specs are as follows: 615 MB RAM, Intel Xeon CPU E5430 @ 2.66GHz (2.78 GHz), 64-bit. Is the memory availability the issue, or is it the processing power? I foresee two options: change to a larger instance, or set up a 2-tier architecture with two micro instances. Which of these will give the application better performance? Thanks in advance.

    Read the article

  • cannot delete IPv6 default gateway

    - by NulledPointer
    The commands below should be pretty self-explanatory. Please note that the route for which i get failure is obtained by RA and has very less expiry ( e Flag in UDAe). @vm:~$ ip -6 route 2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592300sec fe80::/64 dev eth1 proto kernel metric 256 default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1 default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1776sec @vm:~$ @vm:~$ @vm:~$ @vm:~$ sudo route -6 delete default gw fe80::20c:29ff:fe87:f9e7 @vm:~$ ip -6 route 2001:4860:4001:800::1002 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1003 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:800::1005 via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 2001:4860:4001:803::100e via fe80::20c:29ff:fe87:f9e7 dev eth1 proto static metric 1024 fd00:ffff:ffff:fff1::/64 dev eth1 proto kernel metric 256 expires 2592279sec fe80::/64 dev eth1 proto kernel metric 256 default via fe80::20c:29ff:fe87:f9e7 dev eth1 proto kernel metric 1024 expires 1755sec @vm:~$ @vm:~$ @vm:~$ sudo route -6 delete ::/0 gw fe80::20c:29ff:fe87:f9e7 dev eth1 SIOCDELRT: No such process @vm:~$ @vm:~$ @vm:~$ route -n6 Kernel IPv6 routing table Destination Next Hop Flag Met Ref Use If 2001:4860:4001:800::1002/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 2001:4860:4001:800::1003/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 2001:4860:4001:800::1005/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 2001:4860:4001:803::100e/128 fe80::20c:29ff:fe87:f9e7 UG 1024 0 0 eth1 fd00:ffff:ffff:fff1::/64 :: UAe 256 0 0 eth1 fe80::/64 :: U 256 0 0 eth1 ::/0 fe80::20c:29ff:fe87:f9e7 UGDAe 1024 0 0 eth1 ::/0 :: !n -1 1 349 lo ::1/128 :: Un 0 1 3 lo fd00:ffff:ffff:fff1:a00:27ff:fe7f:7245/128 :: Un 0 1 0 lo fd00:ffff:ffff:fff1:fce8:ce07:b9ea:389f/128 :: Un 0 1 0 lo fe80::a00:27ff:fe7f:7245/128 :: Un 0 1 0 lo ff00::/8 :: U 256 0 0 eth1 ::/0 :: !n -1 1 349 lo @vm:~$ UPDATE: Another question is whats the use of link local address as the default route?

    Read the article

  • How to change MySQL data directory?

    - by Jonathan Frank
    I want to place my databases in another directory, so I can store them in an EBS volume (Elastic Block Store, just a fancy name for a virtualized hard disk) together with my web apps and other persistent data. I have tried to walk through a tutorial at http://crashmag.net/change-the-default-mysql-data-directory-with-selinux-enabled. Everything seems fine until I type this command: # semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?" Then the command fails and tells me that mysqld_db_t is an invalid SELinux context, even though the default MySQL data directory is labelled with this context. I am running Fedora 15 on VirtualBox (behaves like an ordinary x86-compatible box) and Amazon EC2 (based on Xen), so the tutorial should be compatible. It is also worth mentioning that turning off SELinux globally or just for the MySQL process is not an option, because such a solution would decrease the security of the system if a hacker gains access to it via the MySQL server. I had never seen this problem before I changed to the Red Hat/Fedora architecture, so it could be a distribution-specific issue. Any help is highly appreciated.

    Read the article

  • Oracle Expands Sun Blade Portfolio for Cloud and Highly Virtualized Environments

    - by Ferhat Hatay
    Oracle announced the expansion of Sun Blade Portfolio for cloud and highly virtualized environments that deliver powerful performance and simplified management as tightly integrated systems.  Along with the SPARC T3-1B blade server, Oracle VM blade cluster reference configuration and Oracle's optimized solution for Oracle WebLogic Suite, Oracle introduced the dual-node Sun Blade X6275 M2 server module with some impressive benchmark results.   Benchmarks on the Sun Blade X6275 M2 server module demonstrate the outstanding performance characteristics critical for running varied commercial applications used in cloud and highly virtualized environments.  These include best-in-class SPEC CPU2006 results with the Intel Xeon processor 5600 series, six Fluent world records and 1.8 times the price-performance of the IBM Power 755 running NAMD, a prominent bio-informatics workload.   Benchmarks for Sun Blade X6275 M2 server module  SPEC CPU2006  The Sun Blade X6275 M2 server module demonstrated best in class SPECint_rate2006 results for all published results using the Intel Xeon processor 5600 series, with a result of 679.  This result is 97% better than the HP BL460c G7 blade, 80% better than the IBM HS22V blade, and 79% better than the Dell M710 blade.  This result demonstrates the density advantage of the new Oracle's server module for space-constrained data centers.     Sun Blade X6275M2 (2 Nodes, Intel Xeon X5670 2.93GHz) - 679 SPECint_rate2006; HP ProLiant BL460c G7 (2.93 GHz, Intel Xeon X5670) - 347 SPECint_rate2006; IBM BladeCenter HS22V (Intel Xeon X5680)  - 377 SPECint_rate2006; Dell PowerEdge M710 (Intel Xeon X5680, 3.33 GHz) - 380 SPECint_rate2006.  SPEC, SPECint, SPECfp reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 11/24/2010 and this report.    For more specifics about these results, please go to see http://blogs.sun.com/BestPerf   Fluent The Sun Fire X6275 M2 server module produced world-record results on each of the six standard cases in the current "FLUENT 12" benchmark test suite at 8-, 12-, 24-, 32-, 64- and 96-core configurations. These results beat the most recent QLogic score with IBM DX 360 M series platforms and QLogic "Truescale" interconnects.  Results on sedan_4m test case on the Sun Blade X6275 M2 server module are 23% better than the HP C7000 system, and 20% better than the IBM DX 360 M2; Dell has not posted a result for this test case.  Results can be found at the FLUENT website.   ANSYS's FLUENT software solves fluid flow problems, and is based on a numerical technique called computational fluid dynamics (CFD), which is used in the automotive, aerospace, and consumer products industries. The FLUENT 12 benchmark test suite consists of seven models that are well suited for multi-node clustered environments and representative of modern engineering CFD clusters. Vendors benchmark their systems with the principal objective of providing comparative performance information for FLUENT software that, among other things, depends on compilers, optimization, interconnect, and the performance characteristics of the hardware.   FLUENT application performance is representative of other commercial applications that require memory and CPU resources to be available in a scalable cluster-ready format.  FLUENT benchmark has six conventional test cases (eddy_417k, turbo_500k, aircraft_2m, sedan_4m, truck_14m, truck_poly_14m) at various core counts.   
All information on the FLUENT website (http://www.fluent.com) is Copyrighted1995-2010 by ANSYS Inc. Results as of November 24, 2010. For more specifics about these results, please go to see http://blogs.sun.com/BestPerf   NAMD Results on the Sun Blade X6275 M2 server module running NAMD (a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems) show up to a 1.8X better price/performance than IBM's Power 7-based system.  For space-constrained environments, the ultra-dense Sun Blade X6275 M2 server module provides a 1.7X better price/performance per rack unit than IBM's system.     IBM Power 755 4-way Cluster (16U). Total price for cluster: $324,212. See IBM United States Hardware Announcement 110-008, dated February 9, 2010, pp. 4, 21 and 39-46.  Sun Blade X6275 M2 8-Blade Cluster (10U). Total price for cluster:  $193,939. Price/performance and performance/RU comparisons based on f1ATPase molecule test results. Sun Blade X6275 M2 cluster: $3,568/step/sec, 5.435 step/sec/RU. IBM Power 755 cluster: $6,355/step/sec, 3.189 step/sec/U. See http://www-03.ibm.com/systems/power/hardware/reports/system_perf.html. See http://www.ks.uiuc.edu/Research/namd/performance.html for more information, results as of 11/24/10.   For more specifics about these results, please go to see http://blogs.sun.com/BestPerf   Reverse Time Migration The Reverse Time Migration is heavily used in geophysical imaging and modeling for Oil & Gas Exploration.  The Sun Blade X6275 M2 server module showed up to a 40% performance improvement over the previous generation server module with super-linear scalability to 16 nodes for the 9-Point Stencil used in this Reverse Time Migration computational kernel.  The balanced combination of Oracle's Sun Storage 7410 system with the Sun Blade X6275 M2 server module cluster showed linear scalability for the total application throughput, including the I/O and MPI communication, to produce a final 3-D seismic depth imaged cube for interpretation. The final image write time from the Sun Blade X6275 M2 server module nodes to Oracle's Sun Storage 7410 system achieved 10GbE line speed of 1.25 GBytes/second or better performance. Between subsequent runs, the effects of I/O buffer caching on the Sun Blade X6275 M2 server module nodes and write optimized caching on the Sun Storage 7410 system gave up to 1.8 GBytes/second effective write performance. The performance results and characterization of this Reverse Time Migration benchmark could serve as a useful measure for many other I/O intensive commercial applications. 3D VTI Reverse Time Migration Seismic Depth Imaging, see http://blogs.sun.com/BestPerf/entry/3d_vti_reverse_time_migration for more information, results as of 11/14/2010.                            
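
    The 1.8X price/performance and 1.7X performance-per-rack-unit claims follow directly from the per-cluster figures quoted above for the f1ATPase test case ($3,568 vs. $6,355 per step/sec, and 5.435 vs. 3.189 step/sec per rack unit). A quick check of the arithmetic:

        # Figures quoted in the NAMD comparison above (f1ATPase test case).
        sun_price_per_perf = 3568.0   # $ per step/sec, Sun Blade X6275 M2 cluster
        ibm_price_per_perf = 6355.0   # $ per step/sec, IBM Power 755 cluster
        sun_perf_per_ru = 5.435       # step/sec per rack unit
        ibm_perf_per_ru = 3.189       # step/sec per rack unit

        print(f"price/performance advantage: {ibm_price_per_perf / sun_price_per_perf:.2f}x")  # ~1.78x
        print(f"performance per rack unit advantage: {sun_perf_per_ru / ibm_perf_per_ru:.2f}x")  # ~1.70x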

    Read the article

  • Regarding Reinstalling PostgreSQL

    - by Vivalavista
    I was using PostgreSQL 8.4. I tried removing it through Synaptic Manager and then I tried to install 9.1, but I still got version 8.4. I deleted all the files associated with postgresql. Now I am unable to install any version of PostgreSQL. When I try I get this error:
    Setting up postgresql-9.1 (9.1.3-1~lucid) ...
    .: 12: Can't open /usr/share/postgresql-common/maintscripts-functions
    dpkg: error processing postgresql-9.1 (--configure):
    subprocess installed post-installation script returned error exit status 2
    dpkg: dependency problems prevent configuration of postgresql:
    postgresql depends on postgresql-9.1; however:
    Package postgresql-9.1 is not configured yet.
    dpkg: error processing postgresql (--configure):
    dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
    postgresql-9.1
    postgresql
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    Please tell me the way to remove postgres completely so I can install a fresh version.

    Read the article

  • hMailServer Email + MX Records Configuration

    - by asn187
    Trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added MyDomain.com and an email account, and I have created an MX record with the mail server being mail.domain.com and a priority of 20. 1) But the question is: how do I now link this MX record for the domain to my mail server / mail server IP address? 2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain? 3) In Settings, SMTP Delivery of email: what should my configuration here look like?
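
    In general the MX record only names a host (mail.domain.com); the link to the server's IP address is made by an A record for that host, so you need both: an A record mapping mail.domain.com to the mail server's public IP, and an MX record for the domain pointing at mail.domain.com. Once the records are published, it is worth confirming they resolve before touching hMailServer's SMTP settings. A small check using the third-party dnspython package (an assumption; nslookup -type=mx yourdomain.com from a command prompt does the same job):

        import dns.resolver  # third-party package: dnspython (>= 2.0 for resolve())

        DOMAIN = "example.com"  # hypothetical; substitute your actual domain

        for record in dns.resolver.resolve(DOMAIN, "MX"):
            mail_host = str(record.exchange).rstrip(".")
            print(f"priority {record.preference}: {mail_host}")
            # The MX target should itself resolve to the mail server's public IP.
            for addr in dns.resolver.resolve(mail_host, "A"):
                print(f"  {mail_host} -> {addr}")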

    Read the article

  • I can't run uwsgi as normal user

    - by atomAltera
    I want to run uwsgi server as www user, but if I write: uwsgi --socket $SOCKET --chmod-socket 666 --pidfile $PIDFILE --daemonize $LOGFILE --chdir $CHDIR --pp $PYTHONPATH --module main --post-buffering 8192 --workers 1 --threads 10 --uid www --gid www A socket creation error occurs: Log: 1 *** Starting uWSGI 1.4.1 (64bit) on [Mon Dec 10 22:15:23 2012] *** 2 compiled with version: 4.4.5 on 17 November 2012 23:31:14 3 os: Linux-2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 4 nodename: autoblog 5 machine: x86_64 6 clock source: unix 7 pcre jit disabled 8 detected number of CPU cores: 2 9 current working directory: / 10 writing pidfile to /tmp/uwsgi_mysite.pid 11 detected binary path: /usr/local/bin/uwsgi 12 setgid() to 1002 13 set additional group 1004 (files) 14 setuid() to 1002 15 *** WARNING: you are running uWSGI without its master process manager *** 16 your memory page size is 4096 bytes 17 detected max file descriptor number: 1024 18 lock engine: pthread robust mutexes 19 unlink(): Operation not permitted [core/socket.c line 109] 20 bind(): Address already in use [core/socket.c line 141]

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by Brian Dayton
    I caught this article in ComputerworldUK yesterday. The headline talks about how UK-based supermarket chain Morrisons is increasing their IT spend...OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrison's truly has a long term strategy for IT. In this case, modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrison's views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit." 3) The phased, 3-year "Optimization Plan" took a holistic approach to their business--from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process...but it's nice to see a happy ending every once in a while.

    Read the article

  • Do Repeat Yourself in Unit Tests

    - by João Angelo
    Don’t get me wrong: I’m a big supporter of the DRY (Don’t Repeat Yourself) principle, except when it comes to unit tests. Why? Well, in my opinion a unit test should be a self-contained group of actions with the intent to test a very specific piece of code, and should not depend on externals shared with other unit tests. In a typical unit test we can divide its code into two major groups: preparation of preconditions for the code under test, and invocation of the code under test. It’s in the first group that you are tempted to refactor common code in several unit tests into helper methods that can then be called in each one of them. Another way to not duplicate code is to use the built-in infrastructure of some unit test frameworks, such as SetUp/TearDown methods that automatically run before and after each unit test. I must admit that in the past I was guilty of both charges, but what at first seemed a good idea, since I was removing code duplication, turned out to offer no added value and even complicated the process when a given test fails. We love unit tests because of their rapid feedback when something goes wrong. However, most of the time this feedback requires reading the code for the failed test. Given this, what do you prefer? To read a single method, or to wander through several methods like SetUp/TearDown and private common methods? I say it again, do repeat yourself in unit tests. It may feel wrong at first but I bet you won’t regret it later.
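
    The point is easiest to see side by side. Here is a small illustration in Python's unittest, purely as an example since the argument is language-agnostic: the first test class repeats its arrange step inside every test and each test reads top to bottom in isolation, while the second hides the same preconditions in SetUp-style plumbing that you have to hunt through when a test fails.

        import unittest

        class Cart:
            """Toy class under test."""
            def __init__(self):
                self.items = []

            def add(self, name, price):
                self.items.append((name, price))

            def total(self):
                return sum(price for _, price in self.items)

        # Self-contained: every precondition is visible inside the test itself.
        class CartTotalTests(unittest.TestCase):
            def test_total_sums_item_prices(self):
                cart = Cart()              # arrange, repeated on purpose
                cart.add("book", 10.0)
                cart.add("pen", 2.5)
                self.assertEqual(cart.total(), 12.5)

            def test_total_of_empty_cart_is_zero(self):
                cart = Cart()              # repeated again, and that's fine
                self.assertEqual(cart.total(), 0)

        # DRY at all costs: the arrange step lives in setUp, so reading a
        # failed test means jumping between methods to rebuild the context.
        class CartTotalTestsWithSetUp(unittest.TestCase):
            def setUp(self):
                self.cart = Cart()
                self.cart.add("book", 10.0)
                self.cart.add("pen", 2.5)

            def test_total_sums_item_prices(self):
                self.assertEqual(self.cart.total(), 12.5)

        if __name__ == "__main__":
            unittest.main()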

    Read the article

  • Problem with shared ssh keys

    - by warren
    Following the process I've used in other environments, I've tried setting up shared keys between my Mac and my CentOS 4 webserver. I've seen the same problem with my older Ubuntu 7.10 workstation trying to connect via keys to the same webserver. I have tried both dsa and rsa key types (ssh-keygen -t <type>). The sshd_config file on my webserver seems to be allowing key-based logins:
    RSAAuthentication yes
    PubkeyAuthentication yes
    AuthorizedKeysFile .ssh/authorized_keys
    And my .ssh/authorized_keys has my dsa and rsa keys added. Where should I be looking for what to change next to make key-based logins "Just Work™"? Is it related to the line #UseDNS yes, with sshd trying to do a reverse-lookup on my IP but failing because it's NAT'd?

    Read the article

  • Getting Partial / No Redundancy on VM's created on latest datastore

    - by Germano
    Hi, first some background. I'm in the process of upgrading my ESX servers from 3.5 to vSphere 4 and so far I have set up the new vCenter Server. Before I start the upgrade of the ESX hosts, I needed more storage, so I created 3 datastores from available space on my EqualLogic PS6000, which has been connected for a while, so as far as connectivity goes, nothing has changed. But now here's my problem: I get a "Partial / No Redundancy" on any VM that I create in any of these new datastores. I can create VMs on any of the older datastores on LUNs from exactly the same EqualLogic and it works fine, but not on the new ones. Keep in mind that these new datastores are the only ones that were created under the new vCenter, so I believe it must have something to do with it. Is anyone aware of any issues with creating datastores using the new vCenter but on a 3.5 ESX host? iSCSI with QLogic QLE406x. Thanks in advance for any help. Germano

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in a handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me flexibility of Hive's query language when I want it, but let's me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the hive parameters Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-defaults.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath pieces is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 31 (sys.dm_server_services)

    - by Tamarick Hill
    The last DMV for this month-long blog session is the sys.dm_server_services DMV. This DMV returns information about your SQL Server, Full-Text, and SQL Server Agent related services. To further illustrate the information this DMV contains, let's run it against our Training instance that we have been using for this blog series. SELECT * FROM sys.dm_server_services The first column returned by this DMV is the actual Service Name. The next columns are the startup_type and startup_type_desc columns, which display your chosen method for how a particular service should be started. The next columns, status and status_desc, display the current status for each of your services on the instance. The process_id column represents the server process id. The last_startup_time column gives you the last time that a particular service was started. The service_account column provides you with the name of the account that is used to control the service. The filename column gives you the full path to the executable for the service. Lastly we have the is_clustered and cluster_nodename columns, which indicate whether or not a particular service is clustered and is part of a resource cluster group, and if so, the cluster node that the service is installed on. This is a good DMV to provide you with a quick snapshot view of the current SQL Server services you have on your instance. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/hh204542.aspx Follow me on Twitter @PrimeTimeDBA
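
    If you want the same snapshot from a script rather than Management Studio, the DMV can be queried from any client. A minimal sketch using Python with pyodbc (assumptions: pyodbc installed, a SQL Server ODBC driver present, and a trusted connection to a local default instance):

        import pyodbc  # third-party package

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"  # assumption: driver name varies by install
            "SERVER=localhost;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()
        cursor.execute(
            "SELECT servicename, status_desc, startup_type_desc, "
            "       service_account, last_startup_time "
            "FROM sys.dm_server_services"
        )
        for row in cursor.fetchall():
            print(f"{row.servicename}: {row.status_desc} "
                  f"({row.startup_type_desc}, runs as {row.service_account}, "
                  f"last started {row.last_startup_time})")
        conn.close()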

    Read the article
