Search Results

Search found 7897 results on 316 pages for 'generate'.


  • FMw Diagnostic Framework : Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction: There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. The trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework! Diagnostic Framework monitors WebLogic Managed Servers and delivers "automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems: "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities." In other words, the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. The table below shows the types of WebLogic Server issues which fall into the scope of Diagnostic Framework.
    How to Configure Diagnostic Framework? Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as Portal / Forms / Reports / Discoverer, Identity Management (OID, OAM, OIM, etc.), WebCenter and SOA. Check your WebLogic Domain directory structure: if you have an "adr" sub-directory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub-directory not exist, review the advice given in My Oracle Support article How to Apply FMW (EM) Control and JRF to a WebLogic Domain and Managed Servers [ID 947043.1]. If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF are listed below: Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g; WebLogic Diagnostics Framework - A Very Useful Tool [a nice blog which describes a WLDF use case].
    How to Get Started With Diagnostic Framework: To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems. There is a lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below.
    How to Upload Diagnostic Framework Incident Data to Oracle Support: Some background information first. There are three interfaces to the repository: Enterprise Manager Cloud Control (Support Workbench), WLST (command line) and ADRCI (command line). Enterprise Manager Cloud Control does provide a nice GUI interface to search, view and package Diagnostic Framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control.
    Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package. EM Cloud Control has its own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would need only one command line interface, but currently I suggest using both - mainly due to the fact that ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code.
    Instructions. Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message.
    1) Find the incident. Note: the managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server, the example WLST commands will not work.
    a) Launch WLST. Note: use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin), otherwise you will get a syntax error like the one below: Traceback (innermost last): File "<console>", line 1, in ? NameError: listIncidents. Launch it with MW_HOME/oracle_common/common/bin/wlst.sh
    b) Connect to the managed server or the admin server, e.g. wls:/offline> connect('weblogic','welcome1','t3://localhost:7020')
    c) Run the command wls:/MyDomain/serverConfig> listIncidents() This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer', server='MyManagedServer') Example output: Incident Id     Problem Key              Incident Time         1       DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]        Fri Nov 02 10:38:46 GMT 2012. The piece highlighted in bold (the problem key text) is the description you do not see when using the ADRCI 'SHOW INCIDENT' command. Make a note of the incident id. You are ready to move to step 2.
    2) Package the incident.
    a) Set up the environment - the example commands below are for Unix: cd <DOMAIN_HOME>/bin then . ./setDomainEnv.sh If you want ADRCI to run a Remote Diagnostic Agent (RDA) collection (recommended) at generate-package time, point ORACLE_HOME at oracle_common: ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME To prevent ADRCI from running RDA at generate-package time, point ORACLE_HOME at the WL_HOME/server/adr directory: ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME
    b) Launch adrci: $WL_HOME/server/adr/adrci
    c) Set BASE and HOMEPATH: adrci> SET BASE /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver
    d) Optionally run SHOW INCIDENTS, e.g.
    adrci> SHOW INCIDENTS -MODE DETAIL
    ADR Home = /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:
    INCIDENT INFO RECORD 1
       INCIDENT_ID                   1
       STATUS                        ready
       CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00
       PROBLEM_ID                    1
       CLOSE_TIME                    <NULL>
       FLOOD_CONTROLLED              none
       ERROR_FACILITY                DFW
       ERROR_NUMBER                  99998
       ERROR_ARG1                    <NULL>
       ERROR_ARG2                    <NULL>
       ERROR_ARG3                    <NULL>
       ERROR_ARG4                    <NULL>
       ERROR_ARG5                    <NULL>
       ERROR_ARG6                    <NULL>
       ERROR_ARG7                    <NULL>
       ERROR_ARG8                    <NULL>
       ERROR_ARG9                    <NULL>
       ERROR_ARG10                   <NULL>
       ERROR_ARG11                   <NULL>
       ERROR_ARG12                   <NULL>
       SIGNALLING_COMPONENT          <NULL>
       SIGNALLING_SUBCOMPONENT       <NULL>
       SUSPECT_COMPONENT             <NULL>
       SUSPECT_SUBCOMPONENT          <NULL>
       ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325
       IMPACTS                       0
    1 rows fetched
    e) Create a logical package: IPS CREATE PACKAGE INCIDENT incident_number e.g. adrci> IPS CREATE PACKAGE INCIDENT 1 Output: Created package 1 based on incident id 1, correlation level typical
    f) Generate the package: IPS GENERATE PACKAGE package_number IN path e.g. adrci> IPS GENERATE PACKAGE 1 IN /tmp Output: Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete Note: if the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr.
    3) Upload the package zip to Oracle Support via your Service Request: a) Log into My Oracle Support and locate your Service Request. b) Click on "Add Attachments". c) Upload the zip file.

    Read the article

  • SQL SERVER – Load Generator – Free Tool From CodePlex

    - by pinaldave
    One of the most common questions I receive is whether there is any tool available to generate load on SQL Server. Absolutely, there is a fabulous free tool on CodePlex for generating load on SQL Server. This tool was released in 2008, but it is still extremely relevant for generating load on SQL Server and works fabulously. CodePlex is a project initiated by Microsoft for hosting open source software. The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. One of the things which I felt needed improvement was the default configuration, as every single time I added a query the default settings were showing up and I had to change them manually. However, when I went to Menu >> Tools >> Options I was really happy, as it has options to change every single default that is available. Here one can give a default username, password and database name, as well as various settings related to configuration. Additionally, application logging is also possible through the options. A couple of other important points I noticed were the button to reset counters and the status bar containing useful information on Total Threads, Completed Queries and Failed Queries. I use this frequently for my load testing. What tool do you use for SQL Server load generation? Download SQL Load Generator Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • 2013 U.S. GAAP Financial Reporting Taxonomy Available for Public Review and Comment

    - by Theresa Hickman
    FASB recently released the proposed 2013 U.S. GAAP Reporting Taxonomy. Comments are due by October 29, 2012, so that the taxonomy can be finalized and published in early 2013. The proposed 2013 U.S. GAAP taxonomy and instructions on how to submit comments are available at the FASB’s XBRL page. In previous blog entries, I talked about how Oracle Hyperion Disclosure Management supports the latest taxonomy, enabling financial managers to easily comply with the latest filing requirements. The taxonomy is a list of computer-readable tags in XBRL that allows companies to annotate the voluminous financial data that is included in typical long-form financial statements and related footnote disclosures. The tags allow computers to automatically search for, assemble, and process data so it can be readily accessed and analyzed by investors, analysts, journalists, and regulators. You do not have to have Oracle Hyperion Financial Management, used for consolidating financial results, to generate XBRL. You just need Oracle Hyperion Disclosure Management to generate XBRL instance documents from financial applications, such as Oracle E-Business Suite, Oracle PeopleSoft, Oracle JD Edwards EnterpriseOne, and Oracle Fusion General Ledger. To generate XBRL tags and complete SEC filings using your existing financial applications with Oracle Hyperion Disclosure Management, here are the steps:
    1. Download the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management to create a company taxonomy.
    2. Publish financial statements from the general ledger to Microsoft Excel or Microsoft Word.
    3. Create the SEC filing in the Microsoft programs and perform the XBRL tag mapping in Oracle Hyperion Disclosure Management.
    4. Ensure that the SEC filing meets XBRL and SEC EDGAR Filer Manual validation requirements.
    5. Validate and submit the company taxonomy and XBRL instance document to the SEC.
    Get more details about Oracle Hyperion Disclosure Management.

    Read the article

  • Having troubles with LibNoise.XNA and generating tileable maps

    - by Jon
    Following up on my previous post, I found a wonderful port of LibNoise for XNA. I've been working with it for about 8 hours straight and I'm tearing my hair out - I just can not get maps to tile, I can't figure out how to do this. Here's my attempt: Perlin perlin = new Perlin(1.2, 1.95, 0.56, 12, 2353, QualityMode.Medium); RiggedMultifractal rigged = new RiggedMultifractal(); Add add = new Add(perlin, rigged); // Initialize the noise map int mapSize = 64; this.m_noiseMap = new Noise2D(mapSize, perlin); //this.m_noiseMap.GeneratePlanar(0, 1, -1, 1); // Generate the textures this.m_noiseMap.GeneratePlanar(-1,1,-1,1); this.m_textures[0] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale); this.m_noiseMap.GeneratePlanar(mapSize, mapSize * 2, mapSize, mapSize * 2); this.m_textures[1] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale); this.m_noiseMap.GeneratePlanar(-1, 1, -1, 1); this.m_textures[2] = this.m_noiseMap.GetTexture(this.graphics.GraphicsDevice, Gradient.Grayscale); The first and third ones generate fine, they create a perlin noise map - however the middle one, which I wanted to be a continuation of the first (As per my original post), is just a bunch of static. How exactly do I get this to generate maps that connect to each other, by entering in the mapsize * tile, using the same seed, settings, etc.?

    Read the article

  • On Screen Coin Animation

    - by Siddharth
    I am working on a side-scrolling skater game. I want to perform a coin animation so that when the player collects a coin it moves upwards and attaches to the currency sprite. My main character and the coins are in the game scene, while the currency sprite is in the HUD layer, and this is what creates the problem. I cannot apply a modifier directly to the collected coin, because it is a side-scrolling game and, depending on the main character's speed, the coin is collected at a different position each time; I have already verified this. So instead I have to spawn another coin in the HUD layer at the same position the game-layer coin has, and move it upwards from there. I am able to get its x position correctly, but I have not been able to get its y position right. The main character often moves downwards, so I frequently end up with negative values. I also tried the following code: float[] position = GameHUD.this.convertSceneCoordinatesToLocalCoordinates(GameManager.getInstance().getCoinX(), GameManager.getInstance().getCoinY()); but I get back the same coordinates I passed in, with no difference. Please can someone give me some guidance on this, because I am close to completing my game. EDIT: The game layer and the HUD layer are completely separate. The actual coin that the player collects is in the game layer, and at that same position I want to spawn another coin in the HUD layer to perform the animation. It has to be created in the HUD layer, because only there can I achieve the effect I am after.

    Read the article

  • Help me with this logic (newbie) [migrated]

    - by Surendra
    I need to generate a half-pyramid number series in an HTML page using JavaScript, given an entered starting number and a number of lines, and show the result in the page. I have the JavaScript and the page wiring done; what I don't get is the logic. Take a look at this and you may get an idea of what I'm talking about. Here is my function, triggered on a button click:
    function doFunction(){
      var enteredNumber = document.getElementById("start");
      var lines = document.getElementById("lines");
      var result;
      for (i = 0; i <= lines.value; i++) {
        for (j = enteredNumber.value; j <= i; j++) {
          document.write(j + "&nbsp;" + "&nbsp;");
        }
        document.write("<br />");
      }
    }
    Help me with the logic to print the following series:
    1
    1 2
    1 2 3
    1 2 3 4
    1 2 3 4 5
    There is a condition: I will specify $start and $lines. If $start = 5 and $lines = 3 then the output should be:
    5
    5 6
    5 6 7
    I had used a for loop, but that doesn't work if the start number I give is higher than the number of lines. I need it done in JavaScript; I have done the rest, but I'm confused about the logic for generating such a series from the user-given values. I had used two nested for loops to generate the regular number series (1 / 1 2 / 1 2 3 and so on).
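    (A sketch of the loop logic being asked for, written in Java rather than the question's JavaScript; translating it back is mechanical. The only idea it adds is to let the inner loop count offsets 0..i and print start + offset, so the series works for any starting number. Class and method names are illustrative.)
    public class HalfPyramid {
        public static String build(int start, int lines) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < lines; i++) {          // one row per requested line
                for (int j = 0; j <= i; j++) {         // row i holds i + 1 numbers
                    sb.append(start + j).append(' ');  // start, start+1, ..., start+i
                }
                sb.append('\n');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.print(build(5, 3));   // prints: 5 / 5 6 / 5 6 7
        }
    }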

    Read the article

  • Generating random tunnels

    - by IVlad
    What methods could we use to generate a random tunnel, similar to the one in this classic helicopter game? Other than that it should be smooth and allow you to navigate through it, while looking as natural as possible (not too symmetric but not overly distorted either), it should also: Most importantly - be infinite and allow me to control its thickness in time - make it narrower or wider as I see fit, when I see fit; Ideally, it should be possible to efficiently generate it with smooth curves, not rectangles as in the above game; I should be able to know in advance what its bounds are, so I can detect collisions and generate powerups inside the tunnel; Any other properties that let you have more control over it or offer optimization possibilities are welcome. Note: I'm not asking for which is best or what that game uses, which could spark extended discussion and would be subjective, I'm just asking for some methods that others know about or have used before or even think they might work. That is all, I can take it from there. Also asked on stackoverflow, where someone suggested I should ask here too. I think it fits in both places, since it's as much an algorithm question as it is a gamedev question, IMO.
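    (One possible approach, sketched below under stated assumptions: treat the tunnel as an endless stream of (centre, half-width) slices produced by a bounded random walk. That keeps it infinite, lets the caller retarget the width at any time, and always exposes the floor/ceiling bounds for collision tests and power-up placement. All class and field names are made up for illustration; smoothing, e.g. fitting a curve through every Nth slice, would be layered on top.)
    import java.util.Random;

    // Each call to nextSlice() nudges the centre line and eases the half-width
    // towards a caller-controlled target, so the shape stays smooth and the
    // bounds (centre - halfWidth, centre + halfWidth) are always known.
    class TunnelGenerator {
        private final Random rng = new Random(42);
        private double center = 0.0;
        private double halfWidth = 3.0;
        private double targetHalfWidth = 3.0;

        void setTargetHalfWidth(double w) { targetHalfWidth = w; }   // narrow or widen on demand

        double[] nextSlice() {
            center += (rng.nextDouble() - 0.5) * 0.4;            // small random drift of the centre
            halfWidth += (targetHalfWidth - halfWidth) * 0.05;    // ease towards the requested width
            halfWidth += (rng.nextDouble() - 0.5) * 0.1;          // a little roughness
            return new double[] { center - halfWidth, center + halfWidth };  // floor, ceiling
        }
    }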

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up to my original question. I'm thinking of going with generating diffs and storing those diffs in a 'History' table in the database. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new versions and generate this patch. The patches can then be used to reconstruct the document at a specific point in time. My dilemma is how to store this data. Should I: (a) insert a new database record for every patch, or (b) store the patches in a JavaScript array and save that array in the History table, so there is only one History record per document holding an array of all its patches? My concerns are: with (a), too many database records are generated, which will be slow and CPU-intensive to query; with (b), there is only one record, and if that record is somehow corrupted or deleted the entire revision history is gone. I'm looking for suggestions and concerns with either approach.
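    (For what it's worth, a minimal sketch of option (a) using the Java port of google-diff-match-patch: one History row per save holding the patch text, and version N is rebuilt by applying the stored patches in order. The names diff_match_patch, patch_fromText and patch_apply come from that port, but check the exact signatures against the version you use; the String[] standing in for the History rows is illustrative only.)
    import java.util.LinkedList;
    import java.util.List;
    import name.fraser.neil.plaintext.diff_match_patch;

    public class RevisionReplay {
        // Rebuild a document version by replaying stored patch texts in save order.
        public static String reconstruct(String original, String[] patchTextsInOrder) {
            diff_match_patch dmp = new diff_match_patch();
            String current = original;
            for (String patchText : patchTextsInOrder) {
                List<diff_match_patch.Patch> patches = dmp.patch_fromText(patchText);
                Object[] result = dmp.patch_apply(
                        new LinkedList<diff_match_patch.Patch>(patches), current);
                current = (String) result[0];   // result[1] holds per-patch success flags
            }
            return current;
        }
    }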

    Read the article

  • PDF Report generation

    - by IniTech
    EDIT: I completed this project using ABCpdf. For anyone interested, I love this product and their support is A+. Everything I listed as a 'con' for the HTML-to-PDF solution was easily doable in ABCpdf. I've been charged with creating a data-driven PDF report. After reviewing the plethora of options, I have narrowed it down to 2. I need you all to help me decide, or offer alternatives I haven't considered. Here are the requirements:
    - 100% data driven
    - Eventually PDF (a stop in HTML is fine, so long as it is converted)
    - Can be run with multiple sets of data (the layout is always the same, the data is variable)
    - Contains normal analysis-style copy (saved in the DB with HTML markup)
    - Contains tables (data for tables is generated at run-time)
    - Header/page # on each page
    - Table of contents
    - .NET (VB or C#)
    - Done quickly
    Now, because the report is going to be generated with multiple sets of data, I don't think a stamped PDF template will work, since I won't know how long or how many pages a certain piece of the report could require. So, I think my best options are: (1) programmatic creation using an iText-like solution, or (2) generate in HTML and convert to PDF using a third-party application (ABCpdf is the tool I have played with so far). Both solutions have their pros and cons. Programmatic solution - Pros: flexible; easy page numbering/page headers/table of contents; free. Cons: time consuming (to write a layer on top of iText to do what I need and keep it maintainable); since the copy is already stored in the DB with HTML markup, I would have to parse through the data before placing it into the PDF, breaking each paragraph into chunks so I can apply bold, italic, underline, etc. to specific phrases. This seems like a huge PITA, and I hope I am wrong about that assumption. HTML to PDF - Pros: easy to generate from the DB (no parsing necessary); many tools for conversion; uses technology I am already familiar with; built-in "print preview" - not a requirement, but nice. Cons (edited after project completion; all of my assumptions were incorrect and ABCpdf is awesome): 1. Almost impossible to generate page headers - Not true. 2. Very difficult to generate page numbers - Not true. 3. Nearly impossible to generate a table of contents - Not true. 4. (Cross-browser support isn't a con; since it's internal, I can dictate what browser to use.) 5. Conversion tool quirks - may not convert exactly as rendered in the browser - Not true. 6. Overall, I thought it would be very hard to format the HTML exactly as I would want it to appear/convert to PDF - Not true. That's it - I need the community's help in deciding which way I should go. I might be wrong about some of my pro/con assumptions. If I am, please tell me. All thoughts and suggestions are welcome and appreciated. Thanks
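    (For the programmatic option, a minimal sketch of what the report skeleton looks like with an iText 5-style API; Document, Paragraph, PdfPTable and PdfWriter are class names from that library, but verify them against the version you evaluate. The analysis copy stored with HTML markup would still need to be parsed into styled chunks, which is exactly the PITA described above.)
    import java.io.FileOutputStream;
    import com.itextpdf.text.Document;
    import com.itextpdf.text.Paragraph;
    import com.itextpdf.text.pdf.PdfPTable;
    import com.itextpdf.text.pdf.PdfWriter;

    public class ReportSketch {
        public static void main(String[] args) throws Exception {
            Document doc = new Document();
            PdfWriter.getInstance(doc, new FileOutputStream("report.pdf"));
            doc.open();
            doc.add(new Paragraph("Quarterly analysis"));    // analysis-style copy
            PdfPTable table = new PdfPTable(3);              // run-time, data-driven table
            for (String cell : new String[] { "Region", "Revenue", "Growth",
                                              "North", "1.2m", "4%" }) {
                table.addCell(cell);
            }
            doc.add(table);
            doc.close();
        }
    }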

    Read the article

  • nMoq basic questions

    - by devoured elysium
    I made the following test for my class:
    var mock = new Mock<IRandomNumberGenerator>();
    mock.Setup(framework => framework.Generate(0, 50))
        .Returns(7.0);
    var rnac = new RandomNumberAverageCounter(mock.Object, 1, 100);
    rnac.Run();
    double result = rnac.GetAverage();
    Assert.AreEqual(result, 7.0, 0.1);
    The problem here was that I changed my mind about what range of values Generate(int min, int max) would use. I ran the test and it failed. I know that is what is supposed to happen, but I was left wondering whether there isn't a way to raise an exception or emit a message when the method is called with unexpected parameters. Also, if I want this Generate() method to be called 10 times returning different values (let's say, from 1 to 10), will I have to make 10 mock setups, or is there a special method for it? The best I could think of is this (which isn't bad, I'm just asking if there is a better way):
    for (int i = 1; i < 10; ++i)
    {
        mock.Setup(framework => framework.Generate(1, 100))
            .Returns((double)i);
    }
    Thanks

    Read the article

  • JPA/JDO entity to XML XSD generator.

    - by h2g2java
    I am using JDO or JPA with the GAE plugin in Eclipse. I am using a SmartGWT DataSource, which accepts an XSD. I would like to learn how to generate an XSD from my JDO/JPA entities, and vice versa. Is there a tool to do it? While DataNucleus does all its enhancement magic in the Eclipse background, is there a mode I could operate in that would generate XSDs for me? Can Hibernate operate in an offline mode, solely to help me generate XSDs that I could use in GWT without deploying Hibernate with my web app? Is Hibernate even capable of generating XSDs from entities, and vice versa? Currently I am about to write a utility to generate an XSD given an entity class, but I am hoping I don't have to reinvent the wheel if one already exists, so I would appreciate being pointed at any available tools that ease XSD generation. By the way, I am very wary of anything that uses Maven, because most people (like Spring) who write the Maven scripts and POMs don't have the expertise to write them in a way that spews out messages and verbosity appropriately, making it easy for me to locate model errors.
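    (One option that avoids writing a custom generator, sketched under the assumption that your entities can also carry JAXB annotations: JAXBContext.generateSchema can emit an XSD straight from annotated classes, no Hibernate required. The Order class here is a stand-in for one of your JDO/JPA entities.)
    import java.io.File;
    import java.io.IOException;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.SchemaOutputResolver;
    import javax.xml.bind.annotation.XmlRootElement;
    import javax.xml.transform.Result;
    import javax.xml.transform.stream.StreamResult;

    public class XsdFromEntity {
        @XmlRootElement
        public static class Order {        // stand-in for a dual-annotated JPA/JDO entity
            public String id;
            public double total;
        }

        public static void main(String[] args) throws Exception {
            JAXBContext ctx = JAXBContext.newInstance(Order.class);
            ctx.generateSchema(new SchemaOutputResolver() {
                @Override
                public Result createOutput(String namespaceUri, String suggestedFileName)
                        throws IOException {
                    File file = new File(".", suggestedFileName);   // e.g. schema1.xsd
                    StreamResult result = new StreamResult(file);
                    result.setSystemId(file.toURI().toURL().toString());
                    return result;
                }
            });
        }
    }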

    Read the article

  • [Apache Camel] Generating multiple files based on DB query in a nice way

    - by chantingwolf
    Dear all, I have the following question. I have to generate many files based on an SQL query. Let's say, for example, that I fetch from the database a list of orders made today, generate a file for each order, and later store each file on FTP. Ideally I would like to end up with the following, though I'm not quite sure how to get there: from(MyBean).to(Ftp). The problem, and main question, is how to generate multiple messages from a custom bean (for example). I am not sure whether the splitter EIP is OK in this case, because I don't have just one message to split; I have to generate and send many messages. http://camel.apache.org/splitter.html I hope someone has met this problem before. If the task were to generate just one file, everything would be quite simple: you just fill Exchange.OutMessage (or something like that). But with multiple files I really can't work out how to manage the situation. P.S. Sorry if this question is stupid. I am a novice in Camel (I have been working with it for just a couple of weeks). It's a great tool; actually, that's why I want to use it in the best way. Thanks a lot.
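    (A sketch of how the splitter EIP can still fit: a bean returns a collection, and split(body()) turns that single exchange into one message per element, each written to FTP as its own file. The endpoint URIs, bean name and file-name expression below are illustrative assumptions, not your actual configuration.)
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.camel.Exchange;
    import org.apache.camel.builder.RouteBuilder;

    public class OrderFilesRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("timer:orders?period=3600000")                    // or whatever triggers the run
                .bean(OrderFileBuilder.class, "buildOrderFiles")   // returns a List<String>
                .split(body())                                     // one exchange per order
                .setHeader(Exchange.FILE_NAME, simple("order-${exchangeId}.txt"))
                .to("ftp://user@ftphost/orders?password=secret");
        }

        public static class OrderFileBuilder {
            public List<String> buildOrderFiles() {
                List<String> files = new ArrayList<String>();
                // ... run the SQL query and render one file body per order ...
                files.add("order 1 contents");
                files.add("order 2 contents");
                return files;
            }
        }
    }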

    Read the article

  • Generating All Permutations of Character Combinations when # of arrays and length of each array are

    - by Jay
    Hi everyone, I'm not sure how to ask my question in a succinct way, so I'll start with examples and expand from there. I am working with VBA, but I think this problem is non language specific and would only require a bright mind that can provide a pseudo code framework. Thanks in advance for the help! Example: I have 3 Character Arrays Like So: Arr_1 = [X,Y,Z] Arr_2 = [A,B] Arr_3 = [1,2,3,4] I would like to generate ALL possible permutations of the character arrays like so: XA1 XA2 XA3 XA4 XB1 XB2 XB3 XB4 YA1 YA2 . . . ZB3 ZB4 This can be easily solved using 3 while loops or for loops. My question is how do I solve for this if the # of arrays is unknown and the length of each array is unknown? So as an example with 4 character arrays: Arr_1 = [X,Y,Z] Arr_2 = [A,B] Arr_3 = [1,2,3,4] Arr_4 = [a,b] I would need to generate: XA1a XA1b XA2a XA2b XA3a XA3b XA4a XA4b . . . ZB4a ZB4b So the Generalized Example would be: Arr_1 = [...] Arr_2 = [...] Arr_3 = [...] . . . Arr_x = [...] Is there a way to structure a function that will generate an unknown number of loops and loop through the length of each array to generate the permutations? Or maybe there's a better way to think about the problem? Thanks Everyone!
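    (A pseudo-code framework for the general case, sketched in Java: recursion stands in for the "unknown number of loops", with each call handling one array and delegating the rest. The same structure translates directly to VBA, or an iterative "odometer" of index counters, incrementing the last index and carrying left on overflow, avoids recursion entirely.)
    import java.util.ArrayList;
    import java.util.List;

    public class Permutations {
        public static List<String> combine(char[][] arrays) {
            List<String> out = new ArrayList<String>();
            build(arrays, 0, new StringBuilder(), out);
            return out;
        }

        private static void build(char[][] arrays, int depth, StringBuilder prefix, List<String> out) {
            if (depth == arrays.length) {        // one character chosen from every array
                out.add(prefix.toString());
                return;
            }
            for (char c : arrays[depth]) {
                prefix.append(c);
                build(arrays, depth + 1, prefix, out);
                prefix.deleteCharAt(prefix.length() - 1);   // backtrack
            }
        }

        public static void main(String[] args) {
            char[][] arrays = { { 'X', 'Y', 'Z' }, { 'A', 'B' }, { '1', '2', '3', '4' } };
            System.out.println(combine(arrays));   // [XA1, XA2, XA3, XA4, XB1, ... ZB4]
        }
    }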

    Read the article

  • Cucumber-rails on jruby installs gem into my apps root directory with bundler

    - by brad
    Just installed cucumber 0.7.2 and cucumber-rails 0.3.1 with jruby-1.4.0 on OSX. When I run a bundle install, it places a cucumber-rails directory in my main app with all of the gem code/dependencies. First off, this is definitely not what I want and I'm not sure why this happens for cucumber-rails only. Second, if I delete this folder and just manually install cucumber-rails, when I run script/generate feature blah I get /Users/bradrobertson/.rvm/rubies/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:344:in `refresh!': source index not created from disk (RuntimeError) from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:34:in `refresh!' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:29:in `initialize' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `new' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `add_frozen_gem_path' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:298:in `add_gem_load_paths' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:132:in `process' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:113:in `run' from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:13 from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:1:in `require' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:1 from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:3:in `require' from script/generate:3 Similarly running rake cucumber I get rake aborted! source index not created from disk So something obviously doesn't work. If I add that cucumber-rails directory back in, then my rake cucumber actually runs. Can someone tell me why it would need to install the gem right in my rails app? I've never seen this before. setup jruby-1.4.0 cucumber-0.7.2 cucumber-rails 0.3.1 bundler 0.9.23 webrat 0.7.1

    Read the article

  • Creating a smart text generator

    - by royrules22
    I'm doing this for fun (or, as 4chan says, "for teh lolz"), and if I learn something on the way, all the better. I took an AI course almost 2 years ago now and I really enjoyed it, but I managed to forget everything, so this is a way to refresh that. Anyway, I want to be able to generate text given a set of inputs. Basically this will read forum posts (or maybe Twitter tweets) and then generate a comment based on the learning. Now, the simplest way would be to use a Markov chain text generator, but I want something a little more complex than that, as a Markov chain basically only learns by word order (which word is more likely to appear after word x given the input text). I'm trying to see if there's something I can do to make it a little smarter. For example, I want it to do something like this:
    - Learn from a large selection of posts in a message board, but don't weight them too much.
    - For each post: learn from the other comments in that post and weight these inputs higher.
    - Generate a comment and post it.
    - See what the other users' reaction to your post was. If it was good, weight it positively so you make more posts similar to the one made, and vice versa if it was negative.
    It's the weighting and learning-from-mistakes part that I'm not sure how to implement. I thought about artificial neural networks (mainly because I remember enjoying that chapter), but as far as I can tell they are mainly used to classify things (i.e. given a finite set of choices [x1...xn], which x is this given input?), not really to generate anything. I'm not even sure if this is possible or, if it is, what I should go about learning/figuring out. What algorithm is best suited for this? To those worried that I will use this as a bot to spam or provide bad answers to SO: I promise that I will not use this to provide (bad) advice or to spam for profit, and I definitely will not post its nonsensical thoughts on SO. I plan to use it for my own amusement. Thanks!
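    (For reference, a minimal sketch of the word-order baseline described above. The "weigh posts by feedback" idea would replace the plain list of successors with per-transition counts or scores that get nudged up or down after a generated comment is rated; the class below is illustrative only.)
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    public class MarkovBaseline {
        private final Map<String, List<String>> next = new HashMap<String, List<String>>();
        private final Random rng = new Random();

        // Record which word follows which in the training text.
        public void learn(String text) {
            String[] words = text.toLowerCase().split("\\s+");
            for (int i = 0; i + 1 < words.length; i++) {
                next.computeIfAbsent(words[i], k -> new ArrayList<String>()).add(words[i + 1]);
            }
        }

        // Walk the chain from a seed word, picking a random recorded successor each step.
        public String generate(String seed, int length) {
            StringBuilder sb = new StringBuilder(seed);
            String word = seed;
            for (int i = 0; i < length; i++) {
                List<String> options = next.get(word);
                if (options == null || options.isEmpty()) break;
                word = options.get(rng.nextInt(options.size()));
                sb.append(' ').append(word);
            }
            return sb.toString();
        }
    }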

    Read the article

  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also added the ternary operator to help generate better code. As time passed, the compilers matured. They became very smart in that their flow analysis allowed them to make better decisions about what values to hold in registers than you could possibly make. The register keyword became unimportant. FORTRAN can be faster than C for some sorts of operations, due to aliasing issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What refactoring can you do that will remove these prohibitions and allow the optimizer to generate even faster code? [Edit] Offset related link

    Read the article

  • How do you verify that 2 copies of a VB 6 executable came from the same code base?

    - by Tim Visher
    I have a program under version control that has gone through multiple releases. A situation came up today where someone had somehow managed to point to an old copy of the program and thus was encountering bugs that have since been fixed. I'd like to go back and just delete all the old copies of the program (keeping them around is a company policy that dates from before version control was common and should no longer be necessary) but I need a way of verifying that I can generate the exact same executable that is better than saying "The old one came out of this commit so this one should be the same." My initial thought was to simply MD5 hash the executable, store the hash file in source control, and be done with it but I've come up against a problem which I can't even parse. It seems that every time the executable is generated (method: Open Project. File Make X.exe) it hashes differently. I've noticed that Visual Basic messes with files every time the project is opened in seemingly random ways but I didn't think that would make it into the executable, nor do I have any evidence that that is indeed what's happening. To try to guard against that I tried generating the executable multiple times within the same IDE session and checking the hashes but they continued to be different every time. So that's:
    1. Generate executable
    2. Generate MD5 checksum: md5sum X.exe > X.md5
    3. Verify MD5 for current executable: md5sum -c X.md5
    4. Generate new executable
    5. Verify MD5 for new executable: md5sum -c X.md5
    6. Fail verification because the computed checksum doesn't match.
    I'm not understanding something about either MD5 or the way VB 6 is generating the executable but I'm also not married to the idea of using MD5. If there is a better way to verify that two executables are indeed the same then I'm all ears. Thanks in advance for your help!

    Read the article

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy off of them. However, in my second class library (or even in a console test app), when I add the same service reference, it only exposes the types that are involved in the contract operations and not the client types for me to generate a proxy against. e.g. The endpoint has 2 services exposed, ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library I get ISvc1Client and ISvc2Client to generate proxies off of, in order to use the operations exposed via those 2 contracts. In addition to these clients, the service reference also exposes the types involved in the operations, like Type 1, Type 2, etc., which is what I need. However, when I try to add a service reference to the same endpoint in another console application or class library, only Type 1, Type 2, etc. are exposed and not ISvc1Client and ISvc2Client, so I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets properly generated in one class library but not in the other or in the test console app.

    Read the article

  • Generating python wrapper for 3rd party c++ dll using swig with

    - by MuraliK
    I am a newbie to SWIG. I have a third-party C++ DLL with the functions exported below. I want to call these DLL functions from Python, so I thought of using SWIG to generate the wrapper. I am not sure what sort of wrapper I need to generate (do I need to generate a .lib or a .dll to use it in Python?). If I need to generate a .dll, how do I do that with Visual Studio 2010? There are some callback functions, like SetNotifyHandler(void (__stdcall * nf)(int wp, void *lp)), in the list below. How do I define such a function in the interface file? Can someone please help me?
    #ifndef DLL_H
    #define DLL_H
    #ifdef DLL_BUILD
    #define DLLFUNC __declspec(dllexport)
    #else
    #define DLLFUNC __declspec(dllimport)
    #endif
    #pragma pack(push)
    #pragma pack(1)
    #pragma pack(pop)
    extern "C" {
    DLLFUNC int __stdcall StartServer(void);
    DLLFUNC int __stdcall GetConnectionInfo(int connIndex, Info *buf);
    DLLFUNC void __stdcall SetNotifyWindow(HWND nw);
    DLLFUNC void __stdcall SetNotifyHandler(void (__stdcall * nf)(int wp, void *lp));
    DLLFUNC int __stdcall SendCommand(int connIndex, Command *cmd);
    };

    Read the article

  • Share and Deliver BI Publisher Reports in Multiple Languages

    - by kanichiro.nishida
    When you share your reports with someone who speaks and reads a different language, you want your reports to be shown in their language, right? Well, translating reports with BI Publisher is not only easy but also greatly reduces the maintenance cost. Many of us in the BI Publisher product development team used to work in Globalization and Multilingual support, which enables Oracle products and applications to be used in many different languages, countries and territories, and we have a lot of experience in this area. In fact, with BI Publisher being a strategic reporting platform for Oracle EBS, PeopleSoft, JD Edwards, Siebel, and many other Oracle application products, our customers from all over the world generate thousands upon thousands of reports, including out-of-the-box pre-developed reports from Oracle and customer-created or customized reports, in their own local language every day as they operate and manage their business. Today, I'm going to talk about this very topic: how to translate your reports with BI Publisher 11G.
    Translation Grows, Not the Number of Reports. Most reporting tools, whether traditional or new, put translation on the back burner. They require their users to copy an original report and translate the whole thing. So when you want to support an additional 10 languages you will need to have 10 copies of the original. Imagine when you have 50 reports: you will end up with 500 reports (50 x 10)! Now you need to maintain these 500 reports, and whenever you need to make a change in a report you need to apply the same change to the other 10 copies. And as you can imagine, this is not only a nightmare for IT management but simply not acceptable, especially for applications like Oracle EBS that support over 30 languages. So the first thing we did was very simple: we separated the translation out of the report and marry it to the report only at report generation time. This means that, regardless of how many languages you need to support, you need only one report plus translation files for the 10 languages, which contain the translated letters and words. So let's say you have 50 reports and need to support 10 languages for those reports: you still have only 50 reports, and each report now has 10 language translation files. Yes, the translation is the thing that should grow as you add more languages to support, not the report itself! Second, we provide the translation files in XLIFF format, which is an international standard XML-based format for exchanging and maintaining translation strings. So once you generate the XLIFF files for your reports with BI Publisher, you can work with any translation vendor in the world to do a mass translation, or you can translate the XML files yourself by manually updating the translatable strings presented in this text file. Lastly, we made it easier to manage the translation process, from generating the XLIFF files to uploading the translated XLIFF files back to the BI Publisher server. You can generate, download and upload the XLIFF files from BI Publisher's web interface with your browser, and you can see the translated reports right away without needing to shut down or restart your server. While the translated reports are displayed based on your language preference setting, you can also specify a different language when you schedule or deliver the reports, so that they can be generated in your customer's preferred language.
    What Can I Translate? When it comes to translation there are three things.
    First, report content translation. When you receive a report, you want to see the content (the report title, section titles, comments, annotations, table column headers, and anything else that is static and embedded in the report) in your preferred language. We call this Report Content translation. Second, when you open a report online, you might want not only the report content but also the report UI (the report name, parameter names, layout names, and anything that helps you navigate around the reports) to be translated into your language. We call this Report UI translation. This separation of Report Content and Report UI translation is very useful, especially when you want to navigate through the reports in your own preferred-language UI but generate the reports in your customer's preferred language. Imagine you are a native English speaker and need to generate and send a report to your customers in China. You want to see the report name and parameter names in English so that you can comfortably navigate to the report and generate the output, but you want the report generated in Chinese so that your customers in China can understand it when they receive it. And lastly, you might even want the data presented in the report to be translated. For example, you might want product names in an Order Status report to be translated based on the report viewer's language preference. We call this Reporting Data translation. Since this Reporting Data translation is maintained at the data source level, such as in database tables along with the main data, you need to prepare the translation at the data source level first. Then you want to make sure that your query is switched accordingly, based on the language preference setting, so that the translated data will be retrieved.
    How to Translate BI Publisher Reports? Now, when it comes to how to translate BI Publisher reports, the main focus here is the translation of the Report Content and Report UI. I just created this video to show you how to create and manage the translation with BI Publisher 11G. Please take a look at the clip below.
    In today's business world, customers and suppliers are from all over the world, regardless of the size of the company or organization. Supporting multiple languages for your reports is no longer something 'nice to have'; it's mandatory. BI Publisher was designed from the beginning to support multilingual reports, without the extra hidden license or configuration costs of other reporting tools such as Crystal Reports. You can add translations for additional languages at any time with the very simple steps shown in the video above. Happy translation! Please share your translation experience with us!

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback of running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment, should also be happen during an automated deployment. 
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)

    Read the article

  • Centreon setup (web part) config not writeable

    - by Evgeniy
    I have a problem with my Centreon installation. At the end of the install process, I stopped at the 10th step: Writable Centreon Configuration File (centreon.conf.php) Critical: Not Writeable /etc/centreon/ asterisk:asterisk (755) Should be : (755) Generate Centreon configuration file OK /etc/centreon/centreon.conf.php asterisk:asterisk (755) Should be : (755) Generate Centstorage configuration file OK /etc/centreon//conf.pm asterisk:asterisk (755) Should be : (755) But the permissions and owner are OK, and I even tried chmod -R 777 /etc/centreon. Please help me understand what's wrong. UPD: Screenshot: http://gyazo.com/82701d119a5e0d98947ad35406a5b8f3

    Read the article

  • Certificate Authentication

    - by steve.mccall1
    Hi, I am currently working on deploying a website for staff to use remotely and would like to make sure it is secure. I was wondering whether it would be possible to set up some kind of certificate authentication, where I would generate a certificate and install it on their laptops so they could access the website. I don't really want them to generate the certificates themselves, though, as that could easily go wrong. How easy / possible is this, and how do I go about doing it? Thanks, Steve

    Read the article

  • What is the difference between the BIN file generated by ImgBurn and UltraISO

    - by user275517
    I have a CD that I would like to generate a BIN file from (with a CUE file to accompany it). I used ImgBurn and UltraISO to generate two BIN files. However, I have found that the BIN files generated by these programs are not identical (they have different file sizes). So, what is the difference between the BIN file formats, and which one should I use to back up the CD? The same applies to ISO file generation by these two programs: the file sizes do not match.

    Read the article

  • How to check database has not been unmounted/ marked for overwrite

    - by RPS
    Hi, when restoring Exchange using the VSS API I am trying to catch errors in the following cases: 1) When the database being restored has not been unmounted: Exchange 2010 generates an error on the PreRestore call and writes an error to the Windows Application log, which is all fine; but with Exchange 2007, PreRestore succeeds and the error is only written to the Windows Application log. 2) When the restored database has been unmounted but has not been marked for overwrite: Exchange 2007/2010 write an error to the Windows Application log, but the PreRestore call succeeds. How can I know from my application (via the VSS API, not from the Windows Application log) that an error has happened (the database has not been unmounted, or has not been marked for overwrite)? Thanks

    Read the article
