Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

  • PHP database selection issue

    - by Citroenfris
    I'm in a bit of a pickle with freshening up my PHP a bit, it's been about 3 years since I last coded in PHP. Any insights are welcomed! I'll give you as much information as I possibly can to resolve this error so here goes! Files config.php database.php news.php BLnews.php index.php Includes config.php - news.php database.php - news.php news.php - BLnews.php BLnews.php - index.php Now the problem with my current code is that the database connection is being made but my database refuses to be selected. The query I have should work but due to my database not getting selected it's kind of annoying to get any data exchange going! database.php <?php class Database { //------------------------------------------- // Connects to the database //------------------------------------------- function connect() { if (isset($dbhost) && isset($dbuser) && isset($dbpass)) { $con = mysql_connect($dbhost, $dbuser, $dbpass) or die("Could not connect: " . mysql_error()); } }// end function connect function selectDB() { if (isset($dbname) && isset($con)) { $selected_db = mysql_select_db($dbname, $con) or die("Could not select test DB"); } } } // end class Database ?> News.php <?php // include the config file and database class include 'config.php'; include 'database.php'; ... ?> BLnews.php <?php // include the news class include 'news.php'; // create an instance of the Database class and call it $db $db = new Database; $db -> connect(); $db->selectDB(); class BLnews { function getNews() { $sql = "SELECT * FROM news"; if (isset($sql)) { $result = mysql_query($sql) or die("Could not execute query. Reason: " .mysql_error()); } return $result; } ?> index.php <?php ... include 'includes/BLnews.php'; $blNews = new BLnews(); $news = $blNews->getNews(); ?> ... <?php while($row = mysql_fetch_array($news)) { echo '<div class="post">'; echo '<h2><a href="#"> ' . $row["title"] .'</a></h2>'; echo '<p class="post-info">Posted by <a href="#"> </a> | <span class="date"> Posted on <a href="#">' . $row["date"] . '</a></span></p>'; echo $row["content"]; echo '</div>'; } ?> Well this is pretty much everything that should get the information going however due to the mysql_error in $result = mysql_query($sql) or die("Could not execute query. Reason: " .mysql_error()); I can see the error and it says: Could not execute query. Reason: No database selected I honestly have no idea why it would not work and I've been fiddling with it for quite some time now. Help is most welcomed and I thank you in advance! Greets Lemon
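    The symptoms here point at PHP variable scope: $dbhost, $dbuser, $dbpass and $dbname are presumably defined at file scope in config.php, so inside connect() and selectDB() the isset() checks fail silently, and $con is a local variable that never survives past connect() - which is exactly what would produce "No database selected". (Note also that the posted BLnews class appears to be missing its closing brace, likely a paste artifact.) A minimal sketch of one possible fix, keeping the question's legacy mysql_* API and assuming config.php defines those variables as globals:

    <?php
    // Sketch of one way to restructure the class: keep the connection in a
    // property so selectDB() can see it, and pull the config in explicitly.
    class Database {
        private $con;

        function connect() {
            global $dbhost, $dbuser, $dbpass;   // file-scope vars are NOT visible here by default
            $this->con = mysql_connect($dbhost, $dbuser, $dbpass)
                or die("Could not connect: " . mysql_error());
        }

        function selectDB() {
            global $dbname;
            mysql_select_db($dbname, $this->con)
                or die("Could not select DB: " . mysql_error());
        }
    }
    ?>

    Passing the settings into a constructor instead of using global would be cleaner still.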

  • Deploying Data Mining Models using Model Export and Import

    - by [email protected]
    In this post, we'll take a look at how Oracle Data Mining facilitates model deployment. After building and testing models, a next step is often putting your data mining model into a production system -- referred to as model deployment. The ability to move data mining model(s) easily into a production system can greatly speed model deployment, and reduce the overall cost. Since Oracle Data Mining provides models as first class database objects, models can be manipulated using familiar database techniques and technology. For example, one or more models can be exported to a flat file, similar to a database table dump file (.dmp). This file can be moved to a different instance of Oracle Database EE, and then imported. All methods for exporting and importing models are based on Oracle Data Pump technology and found in the DBMS_DATA_MINING package. Before performing the actual export or import, a directory object must be created. A directory object is a logical name in the database for a physical directory on the host computer. Read/write access to a directory object is necessary to access the host computer file system from within Oracle Database. For our example, we'll work in the DMUSER schema. First, DMUSER requires the privilege to create any directory. This is often granted through the sysdba account. grant create any directory to dmuser; Now, DMUSER can create the directory object specifying the path where the exported model file (.dmp) should be placed. In this case, on a linux machine, we have the directory /scratch/oracle. CREATE OR REPLACE DIRECTORY dmdir AS '/scratch/oracle'; If you aren't sure of the exact name of the model or models to export, you can find the list of models using the following query: select model_name from user_mining_models; There are several options when exporting models. We can export a single model, multiple models, or all models in a schema using the following procedure calls: BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODEL.dmp','dmdir','name =''MY_DT_MODEL'''); END; BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('MY_MODELS.dmp','dmdir',              'name IN (''MY_DT_MODEL'',''MY_KM_MODEL'')'); END; BEGIN   DBMS_DATA_MINING.EXPORT_MODEL ('ALL_DMUSER_MODELS.dmp','dmdir'); END; A .dmp file can be imported into another schema or database using the following procedure call, for example: BEGIN   DBMS_DATA_MINING.IMPORT_MODEL('MY_MODELS.dmp', 'dmdir'); END; As with models from any data mining tool, when moving a model from one environment to another, care needs to be taken to ensure the transformations that prepare the data for model building are matched (with appropriate parameters and statistics) in the system where the model is deployed. Oracle Data Mining provides automatic data preparation (ADP) and embedded data preparation (EDP) to reduce, or possibly eliminate, the need to explicitly transport transformations with the model. In the case of ADP, ODM automatically prepares the data and includes the necessary transformations in the model itself. In the case of EDP, users can associate their own transformations with attributes of a model. These transformations are automatically applied when applying the model to data, i.e., scoring. Exporting and importing a model with ADP or EDP results in these transformations being immediately available with the model in the production system.
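    As a sketch of what a cross-schema import might look like: in 11g, IMPORT_MODEL also accepts remap arguments. The named parameters below follow my reading of the 11g DBMS_DATA_MINING documentation and are an assumption - verify the exact signature in your release before relying on it:

    BEGIN
      DBMS_DATA_MINING.IMPORT_MODEL(
        filename     => 'MY_MODELS.dmp',
        directory    => 'dmdir',
        model_filter => 'name = ''MY_DT_MODEL''',
        operation    => 'IMPORT',
        schema_remap => 'DMUSER:DMUSER2');  -- assumed parameter names; check your release's docs
    END;
    /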

  • How to check if a cdrom is in the tray remotely (via ssh)?

    - by adempewolff
    I have a server running Ubuntu 10.04 (it's on the other side of the world and I haven't built up the wherewithal to upgrade it remotely yet) and I have been told that there is a CD in one of it's two CD drives. I want to rip an image of the cd and then download it to my local computer (I don't need help with either of these steps). However, I cannot seem to confirm whether or not there actually is a CD in the drive as I was told. It did not automatically mount anywhere (which I'm thinking might just be a result of it being a headless server not running X, nautilus, or any of the other nice user friendly things). There are two CD drives connected via SCSI: austin@austinvpn:/proc/scsi$ cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: ATA Model: WDC WD400EB-75CP Rev: 06.0 Type: Direct-Access ANSI SCSI revision: 05 Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: Lite-On Model: LTN486S 48x Max Rev: YDS6 Type: CD-ROM ANSI SCSI revision: 05 Host: scsi1 Channel: 00 Id: 01 Lun: 00 Vendor: SAMSUNG Model: CD-R/RW SW-248F Rev: R602 Type: CD-ROM ANSI SCSI revision: 05 However when I try mounting either of these devices (and every other device that could possibly be the cd-drive), it says no medium found: austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd1 /cdrom mount: no medium found on /dev/sr1 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd0 /cdrom mount: no medium found on /dev/sr0 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom /cdrom mount: no medium found on /dev/sr1 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom1 /cdrom mount: no medium found on /dev/sr0 austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrw /cdrom mount: no medium found on /dev/sr1 Here are the contents of my /dev folder: austin@austinvpn:/proc/scsi$ ls /dev agpgart loop6 ram6 tty10 tty38 tty8 austinvpn loop7 ram7 tty11 tty39 tty9 block lp0 ram8 tty12 tty4 ttyS0 bsg mapper ram9 tty13 tty40 ttyS1 btrfs-control mcelog random tty14 tty41 ttyS2 bus mem rfkill tty15 tty42 ttyS3 cdrom net root tty16 tty43 urandom cdrom1 network_latency rtc tty17 tty44 usbmon0 cdrw network_throughput rtc0 tty18 tty45 usbmon1 char null scd0 tty19 tty46 usbmon2 console oldmem scd1 tty2 tty47 usbmon3 core parport0 sda tty20 tty48 usbmon4 cpu_dma_latency pktcdvd sda1 tty21 tty49 vcs disk port sda2 tty22 tty5 vcs1 dri ppp sda5 tty23 tty50 vcs2 ecryptfs psaux sg0 tty24 tty51 vcs3 fb0 ptmx sg1 tty25 tty52 vcs4 fd pts sg2 tty26 tty53 vcs5 full ram0 shm tty27 tty54 vcs6 fuse ram1 snapshot tty28 tty55 vcs7 hpet ram10 snd tty29 tty56 vcsa input ram11 sndstat tty3 tty57 vcsa1 kmsg ram12 sr0 tty30 tty58 vcsa2 log ram13 sr1 tty31 tty59 vcsa3 loop0 ram14 stderr tty32 tty6 vcsa4 loop1 ram15 stdin tty33 tty60 vcsa5 loop2 ram2 stdout tty34 tty61 vcsa6 loop3 ram3 tty tty35 tty62 vcsa7 loop4 ram4 tty0 tty36 tty63 vga_arbiter loop5 ram5 tty1 tty37 tty7 zero And here is my fstab file: austin@austinvpn:/proc/scsi$ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/mapper/austinvpn-root / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda1 during installation UUID=ed5520ae-c690-4ce6-881e-3598f299be06 /boot ext2 defaults 0 2 /dev/mapper/austinvpn-swap_1 none swap sw 0 0 Am I missing something/doing something wrong, or is there just no CD in the drive or is the drive possibly broken? Is there any nice command to list devices with mountable media? Thanks in advance for any help!
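    For what it's worth, a couple of non-destructive probes can answer the "is there a disc?" question without mounting anything. This is a sketch using the device names from the question; blkid and dd are standard tools, though their exact output varies by distribution:

    # Prints a TYPE=... line only when readable media is present; silent otherwise
    sudo blkid /dev/sr0 /dev/sr1

    # Tries to read the first four 512-byte sectors; "No medium found" means an empty tray
    sudo dd if=/dev/sr0 of=/dev/null count=4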

  • JDeveloper 11g R1 (11.1.1.4.0) - New Features on ADF Desktop Integration Explained

    - by juan.ruiz
    One of the areas that introduced many new features in the latest release (11.1.1.4.0) of JDeveloper 11g R1 is ADF Desktop Integration - in this article I'll provide an overview of these new features.

    New ADF Desktop Integration Ribbon in Excel - After installing the ADF Desktop Integration add-in, and depending on the mode in which you open the desktop integration workbook, the ADF Desktop Integration ribbons for design time and runtime are displayed as a separate tab within Excel. In previous versions the ADF Desktop Integration environment used to be placed inside the Add-ins tab. Above you can see both the design time ribbon and the runtime ribbon. On the design time ribbon you can manage the workbook and worksheet properties, worksheet component properties, diagnostics, execution and publication of the workbook. The runtime version of the ribbon is totally customizable and represents what used to be the runtime menu on the spreadsheet; in this ribbon you can include all the operations and actions that can be executed by the end user while working with the spreadsheet data.

    Diagnostics - A very important aspect for developers is how to debug or verify the interactions of the client with the server, and for that ADF Desktop Integration has provided a series of diagnostics tools since day one. In this release the diagnostics tools are more visible and are really easy to configure. You can access the client console while testing the workbook, or you can simply dump all the messages to a log file - with the ability to set the output level for both.

    Security - There are a number of enhancements around security, but the one with the most impact for developers is that security is now optional when using ADF Desktop Integration. Until this version, every time you wanted to work with ADFdi it was a must that the application was previously secured. In this release security is optional, which means that if you have previously defined security on your application, then you must secure the ADFdi servlet as explained in one of my previous (ADD LINK) posts. On the other hand, if by the time you start working with ADFdi you have not defined security, you can test and publish your workbooks without adding security.

    Support for Continuous Integration - In this release we have added tooling for continuous integration builds. In the ADF Desktop Integration space, the concept translates to functionality that developers can use to publish ADFdi workbooks as part of their entire application build. For that purpose, we have a publish tool that can be easily invoked from an ANT task, so that all the design time workbooks are re-published into the latest version during the application build process.

    Key Column - At runtime, on any worksheet containing editable tables you will notice a new additional column called the key column. The purpose of this column is to make the end user aware that all rows on the table need to be selected at the time of sorting. Users cannot alter the value of this column. From the developer's point of view there are no steps required in order to have the key column included in the worksheets.

    Installation and Creation of New Workbooks - Both use cases can now be executed directly from JDeveloper. As part of the Tools menu options the developer can install the ADF Desktop Integration designer. Also, creating new workbooks, which previously was done through the convert tool shipped with JDeveloper, is now done automatically from the New Gallery. Creating a new ADFdi workbook adds metadata information to the Excel workbook so you can work in design time.

    Other Enhancements - Support for Excel 2010. Also, ADF components enabled as read-only no longer allow their value to be changed - the cell in Excel is automatically protected, something that could cause confusion among customers of previous releases.

  • Drawing transparent glyphs on the HTML canvas

    - by Bertrand Le Roy
    The HTML canvas has a set of methods, createImageData and putImageData, that look like they will enable you to draw transparent shapes pixel by pixel. The data structures that you manipulate with these methods are pseudo-arrays of pixels, with four bytes per pixel: one byte for red, one for green, one for blue and one for alpha. This alpha byte makes one believe that you are going to be able to manage transparency, but that's a lie. Here is a little script that attempts to overlay a simple generated pattern on top of a uniform background:

    var wrong = document.getElementById("wrong").getContext("2d");
    wrong.fillStyle = "#ffd42a";
    wrong.fillRect(0, 0, 64, 64);
    var overlay = wrong.createImageData(32, 32),
        data = overlay.data;
    fill(data);
    wrong.putImageData(overlay, 16, 16);

    where the fill method is setting the pixels in the lower-left half of the overlay to opaque red, and the rest to transparent black. And here's how it renders: As you can see, the transparency byte was completely ignored. Or was it? In fact, what happens is more subtle. What happens is that the pixels from the image data, including their alpha byte, replaced the existing pixels of the canvas. So the alpha byte is not lost, it's just that it wasn't used by putImageData to combine the new pixels with the existing ones. This is in fact a clue to how to write a putImageData that works: we can first dump that image data into an intermediary canvas, and then compose that temporary canvas onto our main canvas. The method that we can use for this composition is drawImage, which works not only with image objects, but also with canvas objects.

    var right = document.getElementById("right").getContext("2d");
    right.fillStyle = "#ffd42a";
    right.fillRect(0, 0, 64, 64);
    var overlay = right.createImageData(32, 32),
        data = overlay.data;
    fill(data);
    var overlayCanvas = document.createElement("canvas");
    overlayCanvas.width = overlayCanvas.height = 32;
    overlayCanvas.getContext("2d").putImageData(overlay, 0, 0);
    right.drawImage(overlayCanvas, 16, 16);

    And there it is, a version of putImageData that works like it should always have.
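    The fill method referenced above is never shown in the post; a hypothetical version matching its description (lower-left half opaque red, the rest transparent black, for the 32x32 overlay) could look like this:

    // Hypothetical fill(): canvas y grows downward, so x <= y is the lower-left triangle.
    function fill(data) {
        for (var y = 0; y < 32; y++) {
            for (var x = 0; x < 32; x++) {
                var i = (y * 32 + x) * 4;
                var opaque = x <= y;
                data[i]     = opaque ? 255 : 0;  // red
                data[i + 1] = 0;                 // green
                data[i + 2] = 0;                 // blue
                data[i + 3] = opaque ? 255 : 0;  // alpha
            }
        }
    }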

  • Oracle OpenWorld 2012: The Best Just Gets Better

    - by kellsey.ruppel
    For almost 30 years, Oracle OpenWorld has been the world's premier learning event for Oracle customers, developers, and partners. With more than 2,000 sessions providing best practices; demos; tips and tricks; and product insight from Oracle, customers, partners, and industry experts, Oracle OpenWorld provides more educational and networking opportunities than any other event in the world.

    2011 Facts: Attendees from 117 Countries | Used Filtered Tap Water to Eliminate 22 Tons of Plastic Bottles | Diverted Enough Trash to Fill 37 Dump Trucks | 45,000+ Total Registered Attendees

    Oracle OpenWorld 2012: The Best Just Gets Better - What's New? What's Different? This year Oracle OpenWorld will include the Executive Edge @ OpenWorld (replacing Leaders Circle), the Customer Experience Summit @ OpenWorld, JavaOne, MySQL Connect, and the expanded Oracle PartnerNetwork Exchange @ OpenWorld. More than 50,000 customers and partners will attend OpenWorld to see Oracle's newest hardware and software products at work, and learn more about our server and storage, database, middleware, industry, and applications solutions.

    New This Year: The Executive Edge @ Oracle OpenWorld (Oct 1 - 2) - New at Oracle OpenWorld this year, the Executive Edge @ OpenWorld (replacing Leaders Circle) will bring together customer, partner and Oracle executives for two days of keynote presentations, summits targeted to customer industries and organizational roles, roundtable discussions, and great new networking opportunities.

    The Customer Experience Revolution Is Here! Customer Experience Summit @ Oracle OpenWorld (Oct 3 - 5) - This dynamic new program offers more than 60 keynotes, roundtables and networking sessions exploring trends, innovations and best practices to help companies succeed with a customer experience-driven business strategy.

    All Things Java - JavaOne (Sep 30 - Oct 4) - JavaOne is the world's most important event for the Java developer community. Technical sessions cover topics that span the breadth of the Java universe, with keynotes from the foremost Java visionaries and expert-led hands-on learning opportunities.

    Are you innovating with Oracle Fusion Middleware? If you are, then you need to know that the Call for Nominations for the 2012 Oracle Fusion Middleware Innovation Awards is open now through July 17, 2012. Jointly sponsored by Oracle, AUSOUG, IOUG, OAUG, ODTUG, QUEST, and UKOUG, the Oracle Fusion Middleware Innovation Awards honor organizations creatively using Oracle Fusion Middleware to deliver unique value to their enterprise. Winning customers and partners will be hosted at Oracle OpenWorld 2012, where they can connect with Oracle executives, network with peers, and be featured in an upcoming edition of Oracle Magazine. Be sure to submit your WebCenter use case today!

    Oracle Music Festival - This year, the first-ever Oracle Music Festival will debut, running from September 30 to October 4. In the tradition of great live music events like Coachella and SXSW, the streets of San Francisco—from 7:00 p.m. to 1:00 a.m. for five nights-into-days—will vibrate with the music of some of today’s hottest name acts, emerging and local bands, and scratching DJs. Outdoor venues and clubs near Moscone Center and the Zone (including 111 Minna, DNA, Mezzanine, Roe, Ruby Skye, Slim’s, the Taylor Street Café, Temple, Union Square, and Yerba Buena Gardens) will showcase acts that range from reggae to rock, punk to ska, R&B to country, indie to honky-tonk.
After a full day of sessions and networking, you'll be primed for some late-night relaxation and rocking out at one or more of these sets.  Please note that with awesome acts, thousands of music devotees, and a limited number of venues each night, access to Festival events is on a first-come, first-served basis. Join us at the Oracle Music Festival--it's going to be epic! Save $500 on Registration with Early Bird Pricing Early Bird pricing ends July 13! Save up to $500 on registration fees by registering by Friday. Will you be attending Oracle OpenWorld 2012? We hope to see you there! Be sure to follow @oraclewebcenter on Twitter for more information and use hashtags #webcenter and #oow!

  • Copy New Files Only in .NET

    - by psheriff
    Recently I had a client that had a need to copy files from one folder to another. However, there was a process that was running that would dump new files into the original folder every minute or so. So, we needed to be able to copy over all the files one time, then also be able to go back a little later and grab just the new files. After looking into the System.IO namespace, none of the classes in there met my needs exactly. Of course I could build it out of the various File and Directory classes, but then I remembered back to my old DOS days (yes, I am that old!). The XCopy command in DOS (or the command prompt for you pure Windows people) is very powerful. One of the options you can pass to this command is to grab only newer files when copying from one folder to another. So instead of writing a ton of code I decided to simply call the XCopy command using the Process class in .NET. The command I needed to run at the command prompt looked like this:

    XCopy C:\Original\*.* D:\Backup\*.* /q /d /y

    What this command does is copy all files from the Original folder on the C drive to the Backup folder on the D drive. The /q option says to do it quietly without repeating all the file names as it copies them. The /d option says to get any newer files it finds in the Original folder that are not in the Backup folder, or any files that have a newer date/time stamp. The /y option will automatically overwrite any existing files without prompting the user to press the "Y" key to overwrite the file. To translate this into code that we can call from our .NET programs, you can write the CopyFiles method presented below.

    C#

    using System.Diagnostics;

    public void CopyFiles(string source, string destination)
    {
        ProcessStartInfo si = new ProcessStartInfo();
        string args = @"{0}\*.* {1}\*.* /q /d /y";
        args = string.Format(args, source, destination);
        si.FileName = "xcopy";
        si.Arguments = args;
        Process.Start(si);
    }

    VB.NET

    Imports System.Diagnostics

    Public Sub CopyFiles(source As String, destination As String)
        Dim si As New ProcessStartInfo()
        Dim args As String = "{0}\*.* {1}\*.* /q /d /y"
        args = String.Format(args, source, destination)
        si.FileName = "xcopy"
        si.Arguments = args
        Process.Start(si)
    End Sub

    The CopyFiles method first creates a ProcessStartInfo object. This object is where you fill in the name of the command you wish to run and also the arguments that you wish to pass to the command. I created a string with the arguments then filled in the source and destination folders using the string.Format() method. Finally you call the Start method of the Process class passing in the ProcessStartInfo object. That's all there is to calling any command in the operating system. Very simple, and much less code than it would have taken had I coded it using the various File and Directory classes. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
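    One caveat worth noting: Process.Start() returns as soon as XCopy launches, not when it finishes, so code that immediately reads the destination folder may see a partial copy. A sketch of a blocking variant (same idea, with a couple of optional flags to suppress the console window):

    // Variation that blocks until XCopy finishes and surfaces its exit code
    // (0 means success for XCopy).
    public int CopyFilesAndWait(string source, string destination)
    {
        var si = new ProcessStartInfo
        {
            FileName = "xcopy",
            Arguments = string.Format(@"{0}\*.* {1}\*.* /q /d /y", source, destination),
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(si))
        {
            p.WaitForExit();
            return p.ExitCode;
        }
    }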

  • NHibernate 3.0 and FluentNHibernate, how to get up and running&hellip;.

    - by DesigningCode
    First up: it's actually really easy. I'm not very religious about my DB tech, I don't really care, I just want something that works. So I'm happy to consider all options if they provide an advantage, and recently I was considering jumping from NHibernate to EF 4.0. However, before ditching NHibernate and jumping to EF 4.0, I thought I should try the head version of NHibernate's trunk and the head version of FluentNHibernate. I currently have a "Repository / Unit of Work" framework built up around these two techs. All up it makes my life pretty simple for dealing with databases. The problem is the current release of NHibernate + the Linq provider wasn't too hot for our purposes, especially trying to plug it into older VB.NET code. The Linq provider spat the dummy with VB.NET lambdas, mainly because in C#

    Query().Where(l => l.Name.Contains("x") || l.Name.Contains("y")).ToList();

    is not the same as the VB.NET

    Query().Where(Function(l) l.Name.Contains("x") Or l.Name.Contains("y")).ToList

    VB.NET seems to spit out … well…. something different :-) so anyways… Compiling your own version of NHibernate and FluentNHibernate. It's actually pretty easy!

    First you'll need to install TortoiseSVN, NAnt and Git if you don't already have them.

    NHibernate: first step, get the subversion trunk https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/ into a directory somewhere, e.g. \thirdparty\nhibernate. Then use NAnt to build it. (If you open the .sln it will show errors, in that AssemblyInfo.cs doesn't exist.) To build it, there is a .txt document with sample command line build instructions; I simply used:

    NAnt -D:project.config=release clean build >output-release-build.log

    *wait* *wait* *wait* and ta-da, you will have a bin directory with all the release dlls.

    FluentNHibernate: This was pretty simple. There are instructions here: http://wiki.fluentnhibernate.org/Getting_started#Installation. Basically, with git, create a directory, and issue the command

    git clone git://github.com/jagregory/fluent-nhibernate.git

    and wait, and soon enough you have the source. Now, from the bin directory that NHibernate spit out, take everything and dump it into the subdirectory "fluent-nhibernate\tools\NHibernate". Now, to build, you can use rake… which is a Ruby build system; however you can also just open the solution and build, which is what I did. I had a few problems with the references, which I simply re-added using the new ones. Once built, I just took all the NHibernate dlls, and the fluent ones, and replaced my existing NHibernate / Fluent and killed off the old linq project. All I had to change is the places that used .Linq<T> and replace them with .Query<T> (which was easy as I had wrapped it already to isolate my code from such changes) and hey presto, everything worked. Even the VB.NET linq calls. I need to do some more testing as I've only done basic smoke tests, but it's all looking pretty good, so for now, I will stick to NHibernate!

  • ArchBeat Link-o-Rama Top 10 - September 16-22, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of September 16-22, 2012.

    The Real Architects of LA: OTN Architect Day in Los Angeles - Oct 25
    No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    OIM-OAM-OAAM integration using TAP – Request Flow you must understand!! | Atul Kumar
    Atul Kumar's post addresses "key points and request flow that you must understand" when integrating three Oracle Identity Management products: Oracle Identity Management, Oracle Access Management, and Oracle Adaptive Access Manager.

    Cloud, automation drive new growth in SOA governance market | ZDNet
    "SOA governance tools and processes learned over the past decade are now underpinning cloud projects as they scale across enterprises," reports Joe McKendrick. But there remains a lack of understanding about SOA Governance.

    DevOps Basics: Track Down High CPU Thread with ps, top and the new JDK7 jcmd Tool | Frank Munz
    "The approach is very generic and works for WebLogic, Glassfish or any other Java application," says Frank Munz. "UNIX commands in the example are run on CentOS, so they will work without changes for Oracle Enterprise Linux or RedHat. Creating the thread dump at the end of the video is done with the jcmd tool from JDK7." Frank has captured the process in the posted video.

    Oracle OpenWorld 2012 Hands-on Lab: "Leading Your Everyday Application Integration Projects with Enterprise SOA"
    Yet another session to squeeze into your already-jammed Oracle OpenWorld schedule. This hands-on lab focuses on how "Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprisewide integration projects."

    Loving VirtualBox 4.2… | The ORACLE-BASE Blog
    Is it wrong for a man to love a technology? Oracle ACE Director Tim Hall has several very good reasons for his feelings…

    ADF Create and CreateInsert Operations for ADF Table | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis answers the question, "What operation is best to use to insert a new row into an ADF table, Create or CreateInsert?"

    Fault Handling Slides and Q&A | Ronald van Luttikhuizen
    Oracle ACE Director Ronald van Luttikhuizen shares the slides and a Q&A transcript from a presentation he and fellow ACE Director Guido Schmutz gave at the recent Oracle OpenWorld and JavaOne preview event organized by AMIS Technology.

    Why IT is a profession in 'flux' | ZDNet
    I usually don't post two items from the same person in one day, but this post from ZDNet blogger Joe McKendrick deals with some critical issues affecting those in IT. As McKendrick puts it: "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating."

    Running RichFaces on WebLogic 12c | Markus Eisele
    "With all the JMS magic and the different provider checks in the showcase this has become some kind of a challenge to simply build and deploy it," says Oracle ACE Director Markus Eisele. His detailed post will help you to meet that challenge.

    Thought for the Day
    "Less is more." — Ludwig Mies van der Rohe (March 27, 1886 – August 17, 1969) Source: BrainyQuote.com

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question on how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out the reason why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employs the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for the fellow developers. OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma delimited/CSV) needs to be processed (loaded into a database in most cases, really). Oftentimes, such an input file is produced by some sort of an out-of-control, third-party solution and will come in with some garbage characters and/or even malformed/mis-formatted rows. One such example could be this imaginary file: As you can see, several rows have no data and there is an occasional garbage character (1, in this example, on row #7). Our task is to produce a clean file that will only capture the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let's outline our course of action to start off:

    1. Use SSIS 2005 to create a DFT;
    2. The DFT will use a Flat File Source for our input [bad] flat file;
    3. We will use a Conditional Split to process the bad input file; and finally
    4. Dump the resulting data to a new [clean] file.

    Well, only four steps; let's see if it is too much work. 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). 2 and 3: I had added a data viewer just to see what I was getting; alas, surprisingly, the data issues were not visible in it. What really is the key to the approach is to properly set up the Conditional Split Transformation. Visually it is: and specifically its SSIS Expression LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the Output Name "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial – consuming the output of our Conditional Split. Last step - #4: Add a Flat File Destination or any other one you need. Click on the Conditional Split and choose the green arrow to drop onto the target. When you do so make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. At this point your package must look like: As the last step we will run our package to examine the produced output file. F5: and… it looks great!

  • Swap not available on System Monitor

    - by Zaki
    I had a swap partition of 1GB (RAM 1GB, Ubuntu 12.04 lts). Now swap is not shown on System Monitor neither can I hibernate my pc (sudo pm-hibernate). blkid output: /dev/sda1: UUID="B8B4FBB1B4FB706C" TYPE="ntfs" /dev/sda2: UUID="2ea7d608-2d89-4e41-9436-d05cb3ce8871" TYPE="swap" /dev/sda3: UUID="3219d03a-67e4-454b-8ce7-a27831846e35" TYPE="ext4" /dev/sda5: LABEL="Softwares" UUID="AC1CC3301CC2F47C" TYPE="ntfs" /dev/sda6: LABEL="Education" UUID="1E103E6C103E4B53" TYPE="ntfs" /dev/sda7: LABEL="Recreation" UUID="2CC8D181C8D149AA" TYPE="ntfs" /dev/sda8: LABEL="Miscellaneous" UUID="0274D6B174D6A727" TYPE="ntfs" /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=3219d03a-67e4-454b-8ce7-a27831846e35 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=2ea7d608-2d89-4e41-9436-d05cb3ce8871 none swap sw 0 0 free -m total used free shared buffers cached Mem: 991 867 123 0 27 418 -/+ buffers/cache: 421 569 Swap: 0 0 0 cat /proc/swaps Filename Type Size Used Priority fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x9f369f36 Device Boot Start End Blocks Id System /dev/sda1 * 63 31471334 15735636 7 HPFS/NTFS/exFAT /dev/sda2 31471616 33470447 999416 82 Linux swap / Solaris /dev/sda3 33472512 62539775 14533632 83 Linux /dev/sda4 62541045 312592769 125025862+ f W95 Ext'd (LBA) /dev/sda5 62541108 125066024 31262458+ 7 HPFS/NTFS/exFAT /dev/sda6 125066088 187591004 31262458+ 7 HPFS/NTFS/exFAT /dev/sda7 187591068 250115984 31262458+ 7 HPFS/NTFS/exFAT /dev/sda8 250116048 312576704 31230328+ 7 HPFS/NTFS/exFAT swapon --all swapon: /dev/sda2: swapon failed: Invalid argument dmesg | grep -A 5 -B 5 -i swap [ 9.487404] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131645 [ 9.487413] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131330 [ 9.487418] EXT4-fs (sda3): 16 orphan inodes deleted [ 9.487420] EXT4-fs (sda3): recovery complete [ 9.578600] EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null) [ 20.580539] Swap area shorter than signature indicates [ 20.588363] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 20.619443] udevd[330]: starting version 175 [ 20.649959] lp: driver loaded but no devices found [ 20.662972] [drm] Initialized drm 1.1.0 20060810 [ 20.675515] i915 0000:00:02.0: setting latency timer to 64 -- [ 72.288573] PM: thaw of drv:sr dev:3:0:0:0 complete after 178.143 msecs [ 72.288578] PM: thaw of drv:scsi_device dev:3:0:0:0 complete after 178.136 msecs [ 72.299677] PM: thaw of drv:scsi_device dev:2:0:0:0 complete after 189.270 msecs [ 72.309473] PM: thaw of devices complete after 202.763 msecs [ 72.309668] PM: writing image. [ 72.309670] PM: Cannot find swap device, try swapon -a. [ 72.309699] PM: Cannot get swap writer [ 72.329896] Restarting tasks ... done. 
[ 72.331777] PM: Basic memory bitmaps freed [ 72.331792] video LNXVIDEO:00: Restoring backlight state [ 72.420048] option1 ttyUSB0: option_instat_callback: error -84 [ 72.804047] option1 ttyUSB0: option_instat_callback: error -84 -- [ 145.960625] sd 7:0:0:0: Attached scsi generic sg2 type 0 [ 145.972036] sd 7:0:0:0: [sdb] Attached SCSI removable disk [ 172.430508] PPP BSD Compression module registered [ 172.455583] PPP Deflate Compression module registered [ 332.260789] type=1400 audit(1381814763.342:27): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 1913.030998] Swap area shorter than signature indicates [ 2022.530155] type=1400 audit(1381816453.610:28): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 4062.729509] Swap area shorter than signature indicates Please help. Thanks in advance. df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 14G 6.1G 7.0G 47% / udev 488M 4.0K 488M 1% /dev tmpfs 199M 868K 198M 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 496M 224K 496M 1% /run/shm
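    The repeated dmesg line "Swap area shorter than signature indicates" suggests the swap signature on /dev/sda2 describes a larger area than the partition now holds (a common aftermath of a partition resize). One common fix - offered here as a sketch, not taken from the question - is to rewrite the signature with mkswap and update fstab with the new UUID:

    sudo swapoff -a                    # stop using any active swap
    sudo mkswap /dev/sda2              # rewrites the swap signature and prints a new UUID
    # now replace the old swap UUID in /etc/fstab with the one mkswap just printed
    sudo swapon -a && cat /proc/swaps  # /dev/sda2 should now be listed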

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3

    The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

    Release Notes

    General

    blast support: The blast GUI has been removed and is no longer supported.

    Oracle Solaris 2.6 Support: As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read its crash dumps.

    New Commands

    Sanity Command: Though one can re-run the sanity checks that are run at tool start-up using the coreinfo command, many users were unaware that they were being run. Though these checks can still be run using that command, a new command, namely sanity, can now be used to re-run the checks at any time.

    Interface Changes

    scat_explore -r and -t options: The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

    scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.]N

    Where:

    -v Verbose Mode: The command will print messages highlighting what it's doing.
    -a Auto Mode: The command does not prompt for input from the user as it runs.
    -d dest Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If it is not specified using the -d option, scat_explore names its output file as "scat_explore_system_name_hostid_lbolt_value_corefile_name."
    -t Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.

    Tag Name Definition
    FATAL An extremely important message which should be investigated.
    WARNING A warning that may or may not have anything to do with the crash.
    ERROR An error, usually printed with a suggested command.
    ALERT Used to indicate something the tool discovered.
    INFO Purely informational message.
    INFO2 A follow-up to an INFO tagged message.
    REDZONE Usually used when printing memory info showing something is in the kernel's REDZONE.

    N The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

    Example:

    $ scat --scat_explore -a -v -r /tmp vmcore.0
    #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
    #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
    #Extracting crash data...
    #Gathering standard crash data collections...
    #Panic string indicates a possible hang...
    #Gathering Hang Related data...
    #Creating tar file...
    #Compressing tar file...
    #Successful extraction
    SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

    Sending scat_explore results: The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.

    Known Issues

    There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon:

    Display of timestamps in threads and clock information is incorrect in some cases.
    There are alignment issues with some of the tables produced by the tool.

  • Designing status management for a file processing module

    - by bot
    The background: One piece of functionality in a product that I am currently working on is to process a set of compressed files (containing XML files) that will be made available at a fixed location periodically (local or remote location - doesn't really matter for now) and dump the contents of each XML file into a database. I have taken care of the design for a generic parsing module that should be able to accommodate the parsing of any file type, as I have explained in my question linked below. There is no need to read the following link to answer my question, but it would definitely provide a better context to the problem: Generic file parser design in Java using the Strategy pattern

    The goal: I want to be able to keep track of the status of each XML file and the status of each compressed file containing the XML files. I can probably have different statuses defined for the XML files, such as NEW, PROCESSING, LOADING, COMPLETE or FAILED. I can derive the status of a compressed file based on the status of the XML files within it, e.g. the status of the compressed file is COMPLETE if no XML file inside the compressed file is in a FAILED state, or the status of the compressed file is FAILED if the status of at least one XML file inside the compressed file is FAILED.

    A possible solution - the model: I need to maintain the status of each XML file and the compressed file. I will have to define some POJOs for holding the information about an XML file, as shown below. Note that there is no need to store the status of a compressed file, as the status of a compressed file can be derived from the status of its XML files.

    public class FileInformation {
        private String compressedFileName;
        private String xmlFileName;
        private long lastModifiedDate;
        private int status;

        public FileInformation(final String compressedFileName, final String xmlFileName,
                               final long lastModified, final int status) {
            this.compressedFileName = compressedFileName;
            this.xmlFileName = xmlFileName;
            this.lastModifiedDate = lastModified;
            this.status = status;
        }
    }

    I can then have a class called StatusManager that aggregates a Map of FileInformation instances and provides me the status of a given file at any given time in the lifetime of the application, as shown below:

    public class StatusManager {
        private Map<String,FileInformation> processingMap = new HashMap<String,FileInformation>();

        public void add(FileInformation fileInformation) {
            fileInformation.setStatus(0); // 0 indicates that the file is in the NEW state, 1 indicates that the file is in process, and so on.
            processingMap.put(fileInformation.getXmlFileName(), fileInformation);
        }

        public void update(String filename, int status) {
            FileInformation fileInformation = processingMap.get(filename);
            fileInformation.setStatus(status);
        }
    }

    That takes care of the model for the sake of explanation. So what's my question? Edited after comments from Loki and answer from Eric: I would like to know if there are any existing design patterns that I can refer to while coming up with a design. I would also like to know how I should go about designing the status management classes. I am more interested in understanding how I can model the status management classes. I am not interested in how other components are going to be updated about a change in status at the moment, as suggested by Eric.
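    One incremental improvement on the model above, before reaching for a heavier pattern: replace the bare int status with an enum that also encodes the legal transitions, so StatusManager.update() can reject impossible moves. A sketch:

    // Sketch: statuses as an enum with allowed transitions, instead of magic ints.
    public enum FileStatus {
        NEW, PROCESSING, LOADING, COMPLETE, FAILED;

        public boolean canTransitionTo(FileStatus next) {
            switch (this) {
                case NEW:        return next == PROCESSING;
                case PROCESSING: return next == LOADING || next == FAILED;
                case LOADING:    return next == COMPLETE || next == FAILED;
                default:         return false; // COMPLETE and FAILED are terminal
            }
        }
    }

    The compressed file's status then stays derived, exactly as described: FAILED if any member XML file is FAILED, COMPLETE only when every member is COMPLETE.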

  • disks not ready in array causes mdadm to force initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot only three of the 10 devices in the raid are available, the others become available later in the boot process. Currently, unless I go through initramfs I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs the system boots fine and my array is mounted exactly where I intend it to. Here are the pertinent files as near as I can tell. Ask me if you want to see anything else. # mdadm.conf # # Please refer to mdadm.conf(5) for information about this file. # # by default (built-in), scan all partitions (/proc/partitions) and all # containers for MD superblocks. alternatively, specify devices to scan, using # wildcards if desired. #DEVICE partitions containers # auto-create devices with Debian standard permissions # CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR root # definitions of existing MD arrays # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700 # by mkconf $Id$ ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae Here is fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sdc2 during installation UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 / ext4 errors=remount-ro 0 1 # swap was on /dev/sdc3 during installation UUID=c4988662-67f3-4069-a16e-db740e054727 none swap sw 0 0 # mount large raid device on /data /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0 output of cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1] 23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU] unused devices: <none> Here is the output of mdadm --detail --scan --verbose ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online. edit: changed title to properly reflect situation
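    For completeness, one workaround that often helps when array members are slow to appear (a suggestion, not something taken from the question) is to give the kernel extra time before the root and array assembly step:

    # In /etc/default/grub, append rootdelay=60 inside GRUB_CMDLINE_LINUX_DEFAULT, then:
    sudo update-grub
    sudo update-initramfs -u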

  • When does a Tumbling Window Start in StreamInsight

    Whilst getting some courseware ready I was playing around writing some code and I decided to very simply show when a window starts and ends based on you asking for a TumblingWindow of n time units in StreamInsight.  I thought this was going to be a two second thing but what I found was something I haven’t yet found documented anywhere until now.   All this code is written in C# and will slot straight into my favourite quick-win dev tool LinqPad   Let’s first create a sample dataset   var EnumerableCollection = new [] { new {id = 1, StartTime = DateTime.Parse("2010-10-01 12:00:00 PM").ToLocalTime()}, new {id = 2, StartTime = DateTime.Parse("2010-10-01 12:20:00 PM").ToLocalTime()}, new {id = 3, StartTime = DateTime.Parse("2010-10-01 12:30:00 PM").ToLocalTime()}, new {id = 4, StartTime = DateTime.Parse("2010-10-01 12:40:00 PM").ToLocalTime()}, new {id = 5, StartTime = DateTime.Parse("2010-10-01 12:50:00 PM").ToLocalTime()}, new {id = 6, StartTime = DateTime.Parse("2010-10-01 01:00:00 PM").ToLocalTime()}, new {id = 7, StartTime = DateTime.Parse("2010-10-01 01:10:00 PM").ToLocalTime()}, new {id = 8, StartTime = DateTime.Parse("2010-10-01 02:00:00 PM").ToLocalTime()}, new {id = 9, StartTime = DateTime.Parse("2010-10-01 03:20:00 PM").ToLocalTime()}, new {id = 10, StartTime = DateTime.Parse("2010-10-01 03:30:00 PM").ToLocalTime()}, new {id = 11, StartTime = DateTime.Parse("2010-10-01 04:40:00 PM").ToLocalTime()}, new {id = 12, StartTime = DateTime.Parse("2010-10-01 04:50:00 PM").ToLocalTime()}, new {id = 13, StartTime = DateTime.Parse("2010-10-01 05:00:00 PM").ToLocalTime()}, new {id = 14, StartTime = DateTime.Parse("2010-10-01 05:10:00 PM").ToLocalTime()} };   Now let’s create a stream of point events   var inputStream = EnumerableCollection .ToPointStream(Application,evt=> PointEvent .CreateInsert(evt.StartTime,evt),AdvanceTimeSettings.StrictlyIncreasingStartTime);   Now we can create our windows over the stream.  The first window we will create is a one hour tumbling window.  We’'ll count the events in the window but what we do here is not the point, the point is our window edges.   var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(1),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()};   Now we can have a look at what we get.  I am only going to show the first non Cti event as that is enough to demonstrate what is going on   windowedStream.ToIntervalEnumerable().First(e=> e.EventKind == EventKind.Insert).Dump("First Row from Windowed Stream");   The results are below   EventKind Insert   StartTime 01/10/2010 12:00   EndTime 01/10/2010 13:00     { CountOfEntries = 5 }   Payload CountOfEntries 5   Now this makes sense and is quite often the width of window specified in examples.  So what happens if I change the windowing code now to var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(5),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()}; Now where does your window start?  What about   var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromMinutes(13),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()};   Well for the first example your window will start at 01/10/2010 10:00:00 , and for the second example it will start at  01/10/2010 11:55:00 Surprised?   Here is the reason why and thanks to the StreamInsight team for listening.   Windows start at TimeSpan.MinValue. 
Windows are then created from that point onwards of the size you specified in your code.  If a window contains no events they are not produced by the engine to the output.  This is why window start times can be before the first event is created.
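    The alignment rule is easy to sanity-check with plain DateTime arithmetic. A small C# sketch, assuming the grid is anchored at DateTime.MinValue (the epoch the post refers to as TimeSpan.MinValue):

    // Sketch: compute which tumbling window a timestamp falls into, assuming
    // windows tile forward from DateTime.MinValue in steps of windowSize.
    static DateTime WindowStart(DateTime t, TimeSpan windowSize)
    {
        long ticksIntoGrid = (t - DateTime.MinValue).Ticks % windowSize.Ticks;
        return t.AddTicks(-ticksIntoGrid);
    }

    // For an event at 2010-10-01 12:00 PM this yields 10:00:00 with 5-hour windows
    // and 11:55:00 with 13-minute windows, matching the results in the post.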

  • Not enough free disk space

    - by carmatt95
    I'm new to Ubuntu and I'm getting an error in software updater. When I try and do my daily updates, it says: The upgrade needs a total of 25.3 M free space on disk /boot. Please free at least an additional 25.3 M of disk space on /boot. Empty your trash and remove temporary packages of former installations using sudo apt-get clean. I tried typing in sudo apt-get clean into the terminal but I still get the message. All of the pages I read seem to be for experianced Ubuntuers. Any help would be appreciated. I'm running Ubuntu 12.10. I want to upgrade to 13.04 but understand I have to finish these first. EDIT: @Alaa, This is the output from typing in cat /etc/fstab into the terminal: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> /dev/mapper/ubuntu-root / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda1 during installation UUID=fa55c082-112d-4b10-bcf3-e7ffec6cebbc /boot ext2 defaults 0 2 /dev/mapper/ubuntu-swap_1 none swap sw 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 matty@matty-G41M-ES2L:~$ df -h: Filesystem Size Used Avail Use% Mounted on /dev/mapper/ubuntu-root 915G 27G 842G 4% / udev 984M 4.0K 984M 1% /dev tmpfs 397M 1.1M 396M 1% /run none 5.0M 0 5.0M 0% /run/lock none 992M 1.8M 990M 1% /run/shm none 100M 52K 100M 1% /run/user /dev/sda1 228M 222M 0 100% /boot matty@matty-G41M-ES2L:~$ dpkg -l | grep linux-image: ii linux-image-3.5.0-17-generic 3.5.0-17.28 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-18-generic 3.5.0-18.29 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-19-generic 3.5.0-19.30 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-21-generic 3.5.0-21.32 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-22-generic 3.5.0-22.34 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-23-generic 3.5.0-23.35 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-24-generic 3.5.0-24.37 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-25-generic 3.5.0-25.39 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-26-generic 3.5.0-26.42 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP iF linux-image-3.5.0-28-generic 3.5.0-28.48 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP
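    The df output shows the real problem: /dev/sda1 (/boot) is 100% full of old kernel images, so apt has no room to unpack the new one. The usual remedy, sketched below, is to purge the older images; keep whichever kernel uname -r reports you are booted into (likely 3.5.0-26 here), and remove the half-installed (iF) package first:

    uname -r                                          # confirm the running kernel; do NOT purge it
    sudo dpkg --purge linux-image-3.5.0-28-generic    # the half-installed package
    sudo apt-get purge linux-image-3.5.0-{17,18,19,21,22,23,24}-generic
    sudo apt-get -f install && sudo apt-get autoremove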

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive? Me too, and Windows Explorer hates it. Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer, and my answer used to be to write a program in something like C# to do the job. These programs typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp. The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration, because the standard .NET calls appear to use the same method of enumerating directories that Windows Explorer chokes on when dealing with a large number of entries, so a simple task was accomplished with a lot of code.

Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died. So, getting ready to re-create it, I thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it. The resulting script was amazingly succinct; even with the flexibility of parameterization and line breaks added for readability, it was only about 25 lines long. Here's the code, with discussion following:

param(
    [Parameter(
        Mandatory = $false,
        Position = 0,
        HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"
    )]
    [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",

    [Parameter(
        Mandatory = $false,
        Position = 1
    )]
    [int] $days = 1
)

dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{
    [string]$year=$([string]$_.lastwritetime.year)
    [string]$month=$_.lastwritetime.month
    [string]$day=$_.lastwritetime.day
    $dir=$folderRoot+$year+"\"+$month+"\"+$day
    if(!(test-path $dir)){
        new-item -type container $dir
    }
    Write-output $_
    move-item $_.fullname $dir
}

The script starts by declaring two parameters. The first parameter holds the path to the folder that I am going to be sorting into subdirectories. The path separator is intended to be included in this argument because I didn't want to mess with determining whether the path was local or UNC and picking the right separator in code, though this could easily be improved upon using Path.Combine, since PowerShell has access to the full framework libraries. The second parameter holds a minimum age in days for files to be removed from the root folder. The script then pipes the dir command through a filter that keeps only files (by excluding containers) and, of those, only entries that meet the age requirement based on the last-modified datestamp. For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files needs further dividing, you could make this more granular), the folder is created if it does not yet exist, and the file is moved into the directory.

One of the things that is really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call. In previous code I have written to do this kind of thing, I had to test the entire tree leading down to the target subfolder, which produced a lot of branching code that made it hard to look at the program and quickly understand the job it performs. Overall, I have to say I'm really pleased with what has been done to make PowerShell powerful and useful.
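Incidentally, the same date-bucketing move is about as short in Python. The following is a minimal, illustrative sketch only, not part of the original post; the share path and age threshold are placeholders standing in for $folderRoot and $days above, and os.makedirs plays the role of new-item by creating the whole year/month/day tree in one call:

import os
import shutil
import time

ROOT = r'\\fileServer\pathToFolderWithLotsOfFiles'  # placeholder share
MIN_AGE_DAYS = 1  # counterpart of the $days parameter

for name in os.listdir(ROOT):
    src = os.path.join(ROOT, name)
    if not os.path.isfile(src):
        continue  # skip subdirectories, like the !$_.PsIsContainer filter
    mtime = os.path.getmtime(src)
    if (time.time() - mtime) / 86400.0 <= MIN_AGE_DAYS:
        continue  # not old enough to archive yet
    t = time.localtime(mtime)
    dest_dir = os.path.join(ROOT, str(t.tm_year), str(t.tm_mon), str(t.tm_mday))
    if not os.path.isdir(dest_dir):
        os.makedirs(dest_dir)  # creates the whole year/month/day tree at once
    shutil.move(src, os.path.join(dest_dir, name))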

    Read the article

  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions:

    - ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll(). ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    - ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    - SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    - A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios.
    - No need for a custom service host factory or the federation behavior anymore. WCF can be switched into "WIF mode" with the useIdentityConfiguration switch (odd name though).
    - Tooling has become better, and the new test STS makes it very easy to get started.

On the other hand – and that was kind of expected – to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:

    - Assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    - No IClaimsPrincipal and IClaimsIdentity anymore.
    - The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    - Claim.ClaimType is now Claim.Type.
    - ClaimCollection is now IEnumerable<Claim>.
    - IsSessionMode is now IsReferenceMode.
    - Bootstrap token handling is different now.
    - ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here).
    - Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    - SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    - Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    - The WCF WS-Trust bindings are gone. I think this is a pity; they were *really* useful when doing work with WSTrustChannelFactory.

Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you want to make the move.

    Read the article

  • Using AMDU to extract datafiles from a diskgroup that cannot be mounted

    - by Liu Maclean
    AMDU is Oracle's ASM Metadata Dump Utility. It can do three things: 1. dump the metadata of ASM disks; 2. extract files from an ASM disk group onto the OS file system even when the disk group cannot be mounted; 3. print data blocks as C-style hex dumps. The most valuable of these is the second: when corruption or an ORA-600 leaves a disk group unable to mount, AMDU can still read the database's SPFILE, controlfile and datafiles directly from the ASM disks, with no dependency on the RDBMS instance. AMDU ships with 11g, but it can also be used against 10g ASM.

Example 1: extracting the SPFILE, CONTROLFILE and DATAFILEs.

If the SPFILE is still available (or you have a PFILE built from it), look at the control_files parameter:

SQL> show parameter control_files

NAME           TYPE    VALUE
-------------- ------- ------------------------------
control_files  string  +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955

In +DATA/prodb/controlfile/current.260.794687955, 260 is the controlfile's file number within the +DATA disk group. You also need the discovery path for the ASM disks, which normally comes from the asm_diskstring parameter in the ASM SPFILE.

[oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
Archive: amdu_X86-64.zip
 inflating: libskgxp11.so
 inflating: amdu
 inflating: libnnz11.so
 inflating: libclntsh.so.11.1

[oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
[oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
amdu_2009_10_10_20_19_17/
AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
AMDU-00201: Disk N0006: '/dev/asm-disk10'
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
AMDU-00201: Disk N0003: '/dev/asm-disk5'
AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
AMDU-00201: Disk N0002: '/dev/asm-disk6'

[oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
[oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
DATA_260.f report.txt
[oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
total 9548
-rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
-rw-r--r-- 1 oracle oinstall    9441 Oct 10 20:19 report.txt

The extracted DATA_260.f is a usable copy of the controlfile. Point the instance at it and mount:

SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;

System altered.

SQL> startup force mount;
ORACLE instance started.

Total System Global Area 1870647296 bytes
Fixed Size                  2229424 bytes
Variable Size             452987728 bytes
Database Buffers         1409286144 bytes
Redo Buffers                6144000 bytes
Database mounted.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/prodb/datafile/system.256.794687873
+DATA/prodb/datafile/sysaux.257.794687875
+DATA/prodb/datafile/undotbs1.258.794687875
+DATA/prodb/datafile/users.259.794687875
+DATA/prodb/datafile/example.265.794687995
+DATA/prodb/datafile/mactbs.267.794688457

6 rows selected.

With the database mounted against the extracted controlfile, v$datafile lists every datafile together with its file number in the disk group, so each one can be pulled out with the same ./amdu -diskstring '/dev/asm*' -extract call and then verified:

[oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
amdu_2009_10_10_20_22_21/
AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
AMDU-00201: Disk N0006: '/dev/asm-disk10'
AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
AMDU-00201: Disk N0003: '/dev/asm-disk5'
AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
AMDU-00201: Disk N0002: '/dev/asm-disk6'

[oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
[oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
DATA_256.f report.txt
[oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f

DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f

DBVERIFY - Verification complete

Total Pages Examined         : 90880
Total Pages Processed (Data) : 59817
Total Pages Failing (Data)   : 0
Total Pages Processed (Index): 12609
Total Pages Failing (Index)  : 0
Total Pages Processed (Other): 3637
Total Pages Processed (Seg)  : 1
Total Pages Failing (Seg)    : 0
Total Pages Empty            : 14817
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 1125305 (0.1125305)

    Read the article

  • NServiceBus and NHibernate - Message Handler and Transactions

    - by mattcodes
    From my understanding, NServiceBus executes the Handle method of an IMessageHandler within a transaction; if an exception propagates out of this method, NServiceBus will ensure the message is put back on the message queue (up to X times before it goes to the error queue), so we have an atomic operation, so to speak. Now suppose that inside my NServiceBus Handle method I do something like this:

using(var trans1 = session.BeginTransaction())
{
    person.Age = 10;
    session.Update<Person>(person);
    trans1.Commit();
}

using(var trans2 = session.BeginTransaction())
{
    person.Age = 20;
    session.Update<Person>(person);
    // throw new ApplicationException("Oh no");
    trans2.Commit();
}

What is the effect of this on the transaction scope? Is trans1 now counted as a nested transaction in terms of its relationship with the NServiceBus transaction, even though we have done nothing to marry them up? (If not, how would one link onto the transaction of NServiceBus?) Looking at the second block (trans2), if I uncomment the throw statement, will the NServiceBus transaction then roll back trans1 as well? In basic scenarios, say I dump the above into a console app, trans1 is independent: committed, flushed, and it won't roll back. I'm trying to clarify what happens now that we sit inside someone else's transaction like NServiceBus's. The above is just example code; I wouldn't be working directly with the session, but rather through a UoW pattern.

    Read the article

  • cPickle ImportError: No module named multiarray

    - by Rafal
    Hello, I'm using cPickle to save my database to a file. The code looks like this:

def Save_DataBase():
    import cPickle
    from scipy import *
    from numpy import *
    a=Results.VersionName
    #filename='D:/results/'+a[a.find('/')+1:-a.find('/')-2]+Results.AssType[:3]+str(random.randint(0,100))+Results.Distribution+".lft"
    filename='D:/results/pppp.lft'
    plik=open(filename,'w')
    DataOutput=[[[DataBase.Arrays.Nodes,DataBase.Arrays.Links,DataBase.Arrays.Turns,DataBase.Arrays.Connectors,DataBase.Arrays.Zones],
                 [DataBase.Nodes.Data,DataBase.Links.Data,DataBase.Turns.Data,DataBase.OrigConnectors.Data,DataBase.DestConnectors.Data,DataBase.Zones.Data],
                 [DataBase.Nodes.DictionaryPy2Vis,DataBase.Links.DictionaryPy2Vis,DataBase.Turns.DictionaryPy2Vis,DataBase.OrigConnectors.DictionaryPy2Vis,DataBase.DestConnectors.DictionaryPy2Vis,DataBase.Zones.DictionaryPy2Vis],
                 [DataBase.Nodes.DictionaryVis2Py,DataBase.Links.DictionaryVis2Py,DataBase.Turns.DictionaryVis2Py,DataBase.OrigConnectors.DictionaryVis2Py,DataBase.DestConnectors.DictionaryVis2Py,DataBase.Zones.DictionaryVis2Py],
                 [DataBase.Paths.List]],
                [Results.VersionName,Results.noZones,Results.noNodes,Results.noLinks,Results.noTurns,Results.noTrips,
                 Results.Times.VersionLoad,Results.Times.GetData,Results.Times.GetCoords,Results.Times.CrossTheTime,Results.Times.Plot_Cylinder,
                 Results.AssType,Results.AssParam,Results.tStart,Results.tEnd,Results.Distribution,Results.tVector]]
    cPickle.dump(DataOutput, plik, protocol=0)
    plik.close()

And it works fine. Most of my database rows are lists of lists, vector-like or array-like data sets. But now when I load the data back in, an error occurs:

def Load_DataBase():
    import cPickle
    from scipy import *
    from numpy import *
    filename='D:/results/pppp.lft'
    plik= open(filename, 'rb')
    """ first cPickle load approach """
    A= cPickle.load(plik)
    """ fail """
    """ Another approach - data format exactly as in the output step above - also fails """
    [[[DataBase.Arrays.Nodes,DataBase.Arrays.Links,DataBase.Arrays.Turns,DataBase.Arrays.Connectors,DataBase.Arrays.Zones],
      [DataBase.Nodes.Data,DataBase.Links.Data,DataBase.Turns.Data,DataBase.OrigConnectors.Data,DataBase.DestConnectors.Data,DataBase.Zones.Data],
      [DataBase.Nodes.DictionaryPy2Vis,DataBase.Links.DictionaryPy2Vis,DataBase.Turns.DictionaryPy2Vis,DataBase.OrigConnectors.DictionaryPy2Vis,DataBase.DestConnectors.DictionaryPy2Vis,DataBase.Zones.DictionaryPy2Vis],
      [DataBase.Nodes.DictionaryVis2Py,DataBase.Links.DictionaryVis2Py,DataBase.Turns.DictionaryVis2Py,DataBase.OrigConnectors.DictionaryVis2Py,DataBase.DestConnectors.DictionaryVis2Py,DataBase.Zones.DictionaryVis2Py],
      [DataBase.Paths.List]],
     [Results.VersionName,Results.noZones,Results.noNodes,Results.noLinks,Results.noTurns,Results.noTrips,
      Results.Times.VersionLoad,Results.Times.GetData,Results.Times.GetCoords,Results.Times.CrossTheTime,Results.Times.Plot_Cylinder,
      Results.AssType,Results.AssParam,Results.tStart,Results.tEnd,Results.Distribution,Results.tVector]]= cPickle.load(plik)

The error is (in both cases):

A= cPickle.load(plik)
ImportError: No module named multiarray

Any Ideas? PS.
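A note on the likely cause, offered as a guess from the snippets alone: the dump opens the file in text mode ('w') while the load opens it in binary mode ('rb'). On Windows, text mode translates line endings and can corrupt the pickled byte stream of numpy arrays, which then surfaces at load time as the misleading "ImportError: No module named multiarray" (multiarray is the numpy C extension that the array pickle refers to). A minimal sketch of the symmetric, binary-mode version; the function names here are made up:

import cPickle

def save_database(data, filename):
    plik = open(filename, 'wb')  # binary write must match the binary read below
    try:
        cPickle.dump(data, plik, protocol=2)  # protocol 2 is binary and compact
    finally:
        plik.close()

def load_database(filename):
    plik = open(filename, 'rb')
    try:
        return cPickle.load(plik)  # numpy must also be importable at load time
    finally:
        plik.close()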

    Read the article

  • Wicket, Spring and Hibernate - Testing with Unitils - Error: Table not found in statement [select relname from pg_class]

    - by John
    Hi there. I've been following a tutorial and a sample application, namely 5 Days of Wicket - Writing the tests: http://www.mysticcoders.com/blog/2009/03/10/5-days-of-wicket-writing-the-tests/

I've set up my own little project with a simple shoutbox that saves messages to a database. I then wanted to set up a couple of tests that would make sure that if a message is stored in the database, the retrieved object would contain the exact same data. Upon running mvn test all my tests fail. The exception has been pasted in the first code box underneath. I've noticed that even though my unitils.properties says to use the 'hsqldb' dialect, this message is still output in the console window when starting the tests: INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect. I've added the entire dump from the console as well at the bottom of this post (which goes on for miles and miles :-)).

Upon running mvn test all my tests fail, and the exception is:

Caused by: java.sql.SQLException: Table not found in statement [select relname from pg_class]
    at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
    at org.hsqldb.jdbc.jdbcStatement.fetchResult(Unknown Source)
    at org.hsqldb.jdbc.jdbcStatement.executeQuery(Unknown Source)
    at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:188)
    at org.hibernate.tool.hbm2ddl.DatabaseMetadata.initSequences(DatabaseMetadata.java:151)
    at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:69)
    at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:62)
    at org.springframework.orm.hibernate3.LocalSessionFactoryBean$3.doInHibernate(LocalSessionFactoryBean.java:958)
    at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:419)
    ... 49 more

I've set up my unitils.properties file like so:

database.driverClassName=org.hsqldb.jdbcDriver
database.url=jdbc:hsqldb:mem:PUBLIC
database.userName=sa
database.password=
database.dialect=hsqldb
database.schemaNames=PUBLIC

My abstract IntegrationTest class:

@SpringApplicationContext({"/com/upbeat/shoutbox/spring/applicationContext.xml", "applicationContext-test.xml"})
public abstract class AbstractIntegrationTest extends UnitilsJUnit4 {
    private ApplicationContext applicationContext;
}

applicationContext-test.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">
    <bean id="dataSource" class="org.unitils.database.UnitilsDataSourceFactoryBean"/>
</beans>

and finally, one of the test classes:

package com.upbeat.shoutbox.web;

import org.apache.wicket.spring.injection.annot.test.AnnotApplicationContextMock;
import org.apache.wicket.util.tester.WicketTester;
import org.junit.Before;
import org.junit.Test;
import org.unitils.spring.annotation.SpringBeanByType;

import com.upbeat.shoutbox.HomePage;
import com.upbeat.shoutbox.integrations.AbstractIntegrationTest;
import com.upbeat.shoutbox.persistence.ShoutItemDao;
import com.upbeat.shoutbox.services.ShoutService;

public class TestHomePage extends AbstractIntegrationTest {

    @SpringBeanByType
    private ShoutService svc;

    @SpringBeanByType
    private ShoutItemDao dao;

    protected WicketTester tester;

    @Before
    public void setUp() {
        AnnotApplicationContextMock appctx = new AnnotApplicationContextMock();
        appctx.putBean("shoutItemDao", dao);
        appctx.putBean("shoutService", svc);
        tester = new WicketTester();
    }

    @Test
    public void testRenderMyPage() {
        //start and render the test page
        tester.startPage(HomePage.class);
        //assert rendered page class
        tester.assertRenderedPage(HomePage.class);
        //assert rendered label component
        tester.assertLabel("message", "If you see this message wicket is properly configured and running");
    }
}

Dump from console when running mvn test:

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building shoutbox
[INFO] task-segment: [test]
[INFO] ------------------------------------------------------------------------
[INFO] [resources:resources {execution: default-resources}]
[WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 3 resources
[INFO] Copying 4 resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] Nothing to compile - all classes are up to date
[INFO] [resources:testResources {execution: default-testResources}]
[WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 2 resources
[INFO] [compiler:testCompile {execution: default-testCompile}]
[INFO] Nothing to compile - all classes are up to date
[INFO] [surefire:test {execution: default-test}]
[INFO] Surefire report directory: F:\Projects\shoutbox\target\surefire-reports
INFO - ConfigurationLoader - Loaded main configuration file unitils-default.properties from classpath.
INFO - ConfigurationLoader - Loaded custom configuration file unitils.properties from classpath.
INFO - ConfigurationLoader - No local configuration file unitils-local.properties found.

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.upbeat.shoutbox.web.TestViewShoutsPage
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.02 sec
INFO - Version - Hibernate Annotations 3.4.0.GA
INFO - Environment - Hibernate 3.3.0.SP1
INFO - Environment - hibernate.properties not found
INFO - Environment - Bytecode provider name : javassist
INFO - Environment - using JDK 1.4 java.sql.Timestamp handling
INFO - Version - Hibernate Commons Annotations 3.1.0.GA
INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem
INFO - QueryBinder - Binding Named query: item.getById = from ShoutItem item where item.id = :id
INFO - QueryBinder - Binding Named query: item.find = from ShoutItem item order by item.timestamp desc
INFO - QueryBinder - Binding Named query: item.count = select count(item) from ShoutItem item
INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS
INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring
INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory
INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0
INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0
INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory
INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled
INFO - SettingsFactory - Automatic session close at end of transaction: disabled
INFO - SettingsFactory - JDBC batch size: 1000
INFO - SettingsFactory - JDBC batch updates for versioned data: disabled
INFO - SettingsFactory - Scrollable result sets: enabled
INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled
INFO - SettingsFactory - Connection release mode: auto
INFO - SettingsFactory - Default batch fetch size: 1
INFO - SettingsFactory - Generate SQL with comments: disabled
INFO - SettingsFactory - Order SQL updates by primary key: disabled
INFO - SettingsFactory - Order SQL inserts for batching: disabled
INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory
INFO - SettingsFactory - Query language substitutions: {}
INFO - SettingsFactory - JPA-QL strict compliance: disabled
INFO - SettingsFactory - Second-level cache: enabled
INFO - SettingsFactory - Query cache: enabled
INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider
INFO - SettingsFactory - Optimize cache for minimal puts: disabled
INFO - SettingsFactory - Structured second-level cache entries: disabled
INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
INFO - SettingsFactory - Echoing all SQL to stdout
INFO - SettingsFactory - Statistics: disabled
INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled
INFO - SettingsFactory - Default entity-mode: pojo
INFO - SettingsFactory - Named query checking : enabled
INFO - SessionFactoryImpl - building session factory
INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured
INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache
INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory
INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
INFO - SQLErrorCodesFactory - SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase]
INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3e0ebb: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
INFO - sPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@a8e586: display name [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]; startup date [Tue May 04 18:19:58 CEST 2010]; root of context hierarchy
INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [com/upbeat/shoutbox/spring/applicationContext.xml]
INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [applicationContext-test.xml]
INFO - DefaultListableBeanFactory - Overriding bean definition for bean 'dataSource': replacing [Generic bean: class [org.apache.commons.dbcp.BasicDataSource]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=close; defined in class path resource [com/upbeat/shoutbox/spring/applicationContext.xml]] with [Generic bean: class [org.unitils.database.UnitilsDataSourceFactoryBean]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [applicationContext-test.xml]]
INFO - sPathXmlApplicationContext - Bean factory for application context [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]: org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1
INFO - pertyPlaceholderConfigurer - Loading properties file from class path resource [application.properties]
INFO - DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem
INFO - QueryBinder - Binding Named query: item.getById = from ShoutItem item where item.id = :id
INFO - QueryBinder - Binding Named query: item.find = from ShoutItem item order by item.timestamp desc
INFO - QueryBinder - Binding Named query: item.count = select count(item) from ShoutItem item
INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS
INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring
INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory
INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0
INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0
INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory
INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled
INFO - SettingsFactory - Automatic session close at end of transaction: disabled
INFO - SettingsFactory - JDBC batch size: 1000
INFO - SettingsFactory - JDBC batch updates for versioned data: disabled
INFO - SettingsFactory - Scrollable result sets: enabled
INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled
INFO - SettingsFactory - Connection release mode: auto
INFO - SettingsFactory - Default batch fetch size: 1
INFO - SettingsFactory - Generate SQL with comments: disabled
INFO - SettingsFactory - Order SQL updates by primary key: disabled
INFO - SettingsFactory - Order SQL inserts for batching: disabled
INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory
INFO - SettingsFactory - Query language substitutions: {}
INFO - SettingsFactory - JPA-QL strict compliance: disabled
INFO - SettingsFactory - Second-level cache: enabled
INFO - SettingsFactory - Query cache: enabled
INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider
INFO - SettingsFactory - Optimize cache for minimal puts: disabled
INFO - SettingsFactory - Structured second-level cache entries: disabled
INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
INFO - SettingsFactory - Echoing all SQL to stdout
INFO - SettingsFactory - Statistics: disabled
INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled
INFO - SettingsFactory - Default entity-mode: pojo
INFO - SettingsFactory - Named query checking : enabled
INFO - SessionFactoryImpl - building session factory
INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured
INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache
INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory
INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.34 sec <<< FAILURE!
Running com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec <<< FAILURE!
Running com.upbeat.shoutbox.mocks.ShoutServiceTest
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.01 sec <<< FAILURE!

Results :

Tests in error:
  initializationError(com.upbeat.shoutbox.web.TestViewShoutsPage)
  testRenderMyPage(com.upbeat.shoutbox.web.TestHomePage)
  initializationError(com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest)
  initializationError(com.upbeat.shoutbox.mocks.ShoutServiceTest)

Tests run: 4, Failures: 0, Errors: 4, Skipped: 0

[INFO] ------------------------------------------------------------------------
[ERROR] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] There are test failures.

Please refer to F:\Projects\shoutbox\target\surefire-reports for the individual test results.
[INFO] ------------------------------------------------------------------------
[INFO] For more information, run Maven with the -e switch
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3 seconds
[INFO] Finished at: Tue May 04 18:19:58 CEST 2010
[INFO] Final Memory: 13M/31M
[INFO] ------------------------------------------------------------------------

Any help is greatly appreciated.
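One observation on the trace above, offered as a pointer rather than a confirmed fix: Unitils' database.dialect=hsqldb only tells Unitils itself which SQL flavor to use, while Hibernate's dialect still comes from the application's sessionFactory configuration. That is why the log repeatedly shows "Using dialect: org.hibernate.dialect.PostgreSQLDialect" and why the PostgreSQL-only query select relname from pg_class is fired at the in-memory HSQLDB during the schema update. The usual remedy is to override the Hibernate dialect for the test context with a property along these lines (exactly where it goes depends on how sessionFactory reads its Hibernate properties in this project):

hibernate.dialect=org.hibernate.dialect.HSQLDialect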

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored.

In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write an update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki.

Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, auditable record is maintained?
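Given that the server itself only offers the three access levels mentioned above, the service-layer idea is the workable one. A minimal sketch in Python with pymongo, purely illustrative; the class, database, and collection names are invented:

from pymongo import MongoClient

class AppendOnlyLog(object):
    """Expose only insert and query on a log collection. Callers never see
    the underlying collection object, so update/remove are unreachable."""

    def __init__(self, uri='mongodb://localhost:27017', db='logs', coll='app_log'):
        self._coll = MongoClient(uri)[db][coll]

    def append(self, record):
        # insert_one never modifies an existing document
        return self._coll.insert_one(dict(record)).inserted_id

    def find(self, query=None):
        return self._coll.find(query or {})

Note that this only guards against poorly-written application code; truly malicious code holding the connection string can still connect directly, so the credentials themselves have to stay inside the service layer.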

    Read the article

  • Streaming to the Android MediaPlayer

    - by Rob Szumlakowski
    Hi. I'm trying to write a light-weight HTTP server in my app to feed dynamically generated MP3 data to the built-in Android MediaPlayer. I am not permitted to store my content on the SD card, and my input data is essentially of infinite length. I tell MediaPlayer that its data source should basically be something like "http://localhost/myfile.mp3".

I have a simple server set up that waits for MediaPlayer to make this request. However, MediaPlayer isn't very cooperative. At first, it makes an HTTP GET and tries to grab the whole file. It times out if we try to simply dump data into the socket, so we tried using the HTTP Range header to write data in chunks. MediaPlayer doesn't like this and doesn't keep requesting the subsequent chunks.

Has anyone had any success streaming data directly into MediaPlayer? Do I need to implement an RTSP or Shoutcast server instead? Am I simply missing a critical HTTP header? What strategy should I use here?

Rob Szumlakowski
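For what it's worth, the Range mechanics themselves are easy to get wrong, and MediaPlayer is known to be picky about them. As a protocol illustration only (not a drop-in fix, and the in-app server here would be Java rather than Python), a handler that honors Range with a 206 and a correct Content-Range looks roughly like this; the file name and port are placeholders:

import re
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

MP3_BYTES = open('myfile.mp3', 'rb').read()  # stand-in for the dynamically generated stream

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        total = len(MP3_BYTES)
        m = re.match(r'bytes=(\d+)-(\d*)', self.headers.get('Range', ''))
        if m:
            start = int(m.group(1))
            end = int(m.group(2)) if m.group(2) else total - 1
            body = MP3_BYTES[start:end + 1]
            self.send_response(206)  # Partial Content
            self.send_header('Content-Range', 'bytes %d-%d/%d' % (start, end, total))
        else:
            body = MP3_BYTES
            self.send_response(200)
        self.send_header('Accept-Ranges', 'bytes')
        self.send_header('Content-Type', 'audio/mpeg')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('127.0.0.1', 8080), RangeHandler).serve_forever()

The catch for a truly unbounded source is that there is no total size to put in Content-Range or Content-Length, which is why shoutcast-style servers answer 200 and just keep writing; if MediaPlayer rejects both approaches, the RTSP or Shoutcast route the poster mentions may indeed be the pragmatic one.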

    Read the article

  • How do I fix: InvalidOperationException upon Session timeout in Ajax WebService call

    - by Ngm
    Hi All, We are invoking an ASP.NET AJAX web service from the client side, so the JavaScript functions have calls like:

// The function to alter the server side state object and set the selected node for the case tree.
function JSMethod(caseId, url) {
    Sample.XYZ.Method(param1, param2, OnMethodReturn);
}

function OnMethodReturn(result) {
    var sessionExpiry = CheckForSessionExpiry(result);
    var error = CheckForErrors(result);
    ... // process result
}

And on the server side, in the ".asmx.cs" file:

namespace Sample
{
    [ScriptService]
    class XYZ : WebService
    {
        [WebMethod(EnableSession = true)]
        public string Method(string param1, string param2)
        {
            if (SessionExpired())
            {
                return sessionExpiredMessage;
            }
            . . .
        }
    }
}

The website is set up to use forms-based authentication. Now if the session has expired and the JavaScript function "JSMethod" is invoked, the following error is raised:

Microsoft JScript runtime error: Sys.Net.WebServiceFailedException: The server method 'Method' failed with the following error: System.InvalidOperationException -- Authentication failed.

This exception is raised by the method "function Sys$Net$WebServiceProxy$invoke" in the file "ScriptResource.axd":

function Sys$Net$WebServiceProxy$invoke {
    . . .
    {
        // In debug mode, if no error was registered, display some trace information
        var error;
        if (result && errorObj) {
            // If we got a result, we're likely dealing with an error in the method itself
            error = result.get_exceptionType() + "-- " + result.get_message();
        }
        else {
            // Otherwise, it's probably a 'top-level' error, in which case we dump the
            // whole response in the trace
            error = response.get_responseData();
        }
        // DevDiv 89485: throw, not alert()
        throw Sys.Net.WebServiceProxy._createFailedError(methodName, String.format(Sys.Res.webServiceFailed, methodName, error));
    }
}

So the problem is that the exception is raised even before "Method" is invoked; the exception occurs during the creation of the web service proxy. Any ideas on how to resolve this problem?

    Read the article
