Search Results

Search found 2768 results on 111 pages for 'heap dump'.


  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive? Me too, and Windows Explorer hates it. Very often I find myself needing to organize these files into subfolders so that I can get at files without locking up Windows Explorer, and my answer used to be to write a program in something like C# to do the job. These programs typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp. The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration, because the standard .NET calls appear to use the same method of enumerating directories that Windows Explorer chokes on when dealing with a large number of entries in a single directory - so a simple task was accomplished with a lot of code. Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not in source control when my hard drive died. So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet had a requirement that made me do it. The resulting script was amazingly succinct: even with the flexibility of parameterization and line breaks added for readability, it was only about 25 lines long. Here's the code, with discussion following:

        param(
            [Parameter(
                Mandatory = $false,
                Position = 0,
                HelpMessage = "Root of the folders or share to archive. Be sure to end with appropriate path separator"
            )]
            [String] $folderRoot = "\\fileServer\pathToFolderWithLotsOfFiles\",

            [Parameter(
                Mandatory = $false,
                Position = 1
            )]
            [int] $days = 1
        )

        dir $folderRoot | ?{ (!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days } | %{
            [string]$year = $_.lastwritetime.year
            [string]$month = $_.lastwritetime.month
            [string]$day = $_.lastwritetime.day
            $dir = $folderRoot + $year + "\" + $month + "\" + $day
            if (!(test-path $dir)) {
                new-item -type container $dir
            }
            write-output $_
            move-item $_.fullname $dir
        }

    The script starts by declaring two parameters. The first parameter holds the path to the folder that I am going to be sorting into subdirectories. The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could easily be improved upon using Path.Combine, since PowerShell has access to the full framework libraries. The second parameter holds a minimum age in days for files to be removed from the root folder. The script then pipes the dir command through a filter to include only files (by excluding containers) and, of those, only entries that meet the age requirement based on the last-modified datestamp. For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files needs to be further divided, you could make this more granular) and the folder is created if it does not yet exist. Finally, the file is moved into the directory. One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.
    In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I wanted, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done to make PowerShell powerful and useful.
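
    For comparison, here is a rough Python sketch of the same idea (my own illustration, not from the original post; the share path is a placeholder). Like new-item, os.makedirs creates the whole YYYY/MM/DD tree in one call:

        import os
        import shutil
        import time

        FOLDER_ROOT = r'\\fileServer\pathToFolderWithLotsOfFiles'  # placeholder path
        MIN_AGE_DAYS = 1

        for name in os.listdir(FOLDER_ROOT):
            path = os.path.join(FOLDER_ROOT, name)
            if not os.path.isfile(path):
                continue  # skip subfolders, like the PsIsContainer check
            mtime = os.path.getmtime(path)
            if (time.time() - mtime) / 86400.0 <= MIN_AGE_DAYS:
                continue  # too young to archive
            t = time.localtime(mtime)
            subdir = os.path.join(FOLDER_ROOT, str(t.tm_year), str(t.tm_mon), str(t.tm_mday))
            if not os.path.isdir(subdir):
                os.makedirs(subdir)  # creates the entire tree in one call
            shutil.move(path, subdir)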

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations, and other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. Now, a number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and it dedups internally, it may collapse your redundant metadata into a single block on the non-volatile storage. When that one block is corrupted, you essentially have three corrupted copies. Three hits with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication as it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. A metadata block is no different to its inner mechanisms than a normal data block, because there is no way to tell the device that this block is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this with regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The point is just that for most implementations you have to activate it explicitly, whereas certain devices do it by default or by design, without you knowing about it. I'm not perfectly sure about that last part, though: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep. The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of it in the pool to increase redundancy even when your pool consists of just one disk or a striped set of disks. But when your device does dedup internally, it may remove your redundancy before the data hits the non-volatile storage. You've won nothing - you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    I do have one problem with the article's specific mention of ZFS, though: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used for the pool and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt - you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in hybrid storage pool implementations is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and whose LUNs you use for your pool - but as mentioned before, on those devices it's a user-made decision to enable it, so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. In the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies - dispersing it across several disks (by mirroring or parity RAID) - is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
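
    To make the failure mode concrete, here is a tiny illustration (my own sketch, not from the article; the DedupingDevice class is hypothetical) of why content-addressed, block-level dedup collapses redundant metadata copies into a single point of failure:

        import hashlib

        class DedupingDevice(object):
            """Stores each unique block exactly once, keyed by its content hash."""
            def __init__(self):
                self.store = {}

            def write(self, block):
                key = hashlib.sha256(block).hexdigest()
                self.store[key] = block          # identical blocks collapse here
                return key

        dev = DedupingDevice()
        rootblock = b'important filesystem metadata'
        keys = [dev.write(rootblock) for _ in range(3)]   # three "redundant" copies

        assert len(set(keys)) == 1   # all three map to one stored block
        # Corrupt that single block and all three logical copies are gone:
        dev.store[keys[0]] = b'garbage'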

    Read the article

  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions:

    - ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll().
    - ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    - ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    - SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    - A new session security token handler that uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios.
    - No need for a custom service host factory or the federation behavior anymore. WCF can be switched into “WIF mode” with the useIdentityConfiguration switch (odd name though).
    - Tooling has become better, and the new test STS makes it very easy to get started.

    On the other hand – and that was kind of expected – to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:

    - Assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    - No IClaimsPrincipal and IClaimsIdentity anymore.
    - The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    - Claim.ClaimType is now Claim.Type.
    - ClaimCollection is now IEnumerable<Claim>.
    - IsSessionMode is now IsReferenceMode.
    - Bootstrap token handling is different now.
    - ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here).
    - Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    - SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    - Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    - The WCF WS-Trust bindings are gone. I think this is a pity - they were *really* useful when doing work with WSTrustChannelFactory.

    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you want to make the move.

    Read the article

  • Using AMDU to extract datafiles from a diskgroup that cannot be mounted

    - by Liu Maclean
    AMDU is Oracle's ASM Metadata Dump Utility. Among other things, it can: 1. dump the metadata of an ASM disk; 2. extract files stored in an ASM diskgroup to the OS filesystem even when the diskgroup is not mounted; 3. print metadata blocks as formatted hex dumps. The key point for recovery purposes is that AMDU can pull database files out of a diskgroup that cannot be mounted at all, without going through the RDBMS layer. Although AMDU ships with 11g, it can also be used against 10g ASM. A common scenario: the database's SPFILE, controlfile and datafiles live in an ASM diskgroup, and the diskgroup cannot be mounted because of an ASM ORA-600 error. In that case AMDU can extract the files directly from the ASM disks.

    Step 1: Extract the SPFILE, controlfile and datafiles.

    If you still have the SPFILE (or a PFILE built from it), get the control_files parameter first:

        SQL> show parameter control_files

        NAME           TYPE    VALUE
        -------------- ------- ------------------------------
        control_files  string  +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955

    In +DATA/prodb/controlfile/current.260.794687955, the 260 is the file number of the controlfile within the +DATA diskgroup. You also need the ASM disk discovery path, which you can take from the asm_diskstring parameter of the ASM instance's SPFILE:

        [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
        Archive: amdu_X86-64.zip
        inflating: libskgxp11.so
        inflating: amdu
        inflating: libnnz11.so
        inflating: libclntsh.so.11.1

        [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
        amdu_2009_10_10_20_19_17/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
        DATA_260.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
        total 9548
        -rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
        -rw-r--r-- 1 oracle oinstall    9441 Oct 10 20:19 report.txt

    The extracted controlfile is DATA_260.f. Point the instance at it and bring the database to mount:

        SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;

        System altered.

        SQL> startup force mount;
        ORACLE instance started.

        Total System Global Area 1870647296 bytes
        Fixed Size                  2229424 bytes
        Variable Size             452987728 bytes
        Database Buffers         1409286144 bytes
        Redo Buffers                6144000 bytes
        Database mounted.

        SQL> select name from v$datafile;

        NAME
        --------------------------------------------------------------------------------
        +DATA/prodb/datafile/system.256.794687873
        +DATA/prodb/datafile/sysaux.257.794687875
        +DATA/prodb/datafile/undotbs1.258.794687875
        +DATA/prodb/datafile/users.259.794687875
        +DATA/prodb/datafile/example.265.794687995
        +DATA/prodb/datafile/mactbs.267.794688457

        6 rows selected.

    With the database mounted from the extracted controlfile, v$datafile lists each datafile together with its file number in the diskgroup, so every datafile can now be extracted with ./amdu -diskstring '/dev/asm*' -extract in the same way:

        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
        amdu_2009_10_10_20_22_21/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
        DATA_256.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f

        DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009

        Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

        DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f

        DBVERIFY - Verification complete

        Total Pages Examined         : 90880
        Total Pages Processed (Data) : 59817
        Total Pages Failing (Data)   : 0
        Total Pages Processed (Index): 12609
        Total Pages Failing (Index)  : 0
        Total Pages Processed (Other): 3637
        Total Pages Processed (Seg)  : 1
        Total Pages Failing (Seg)    : 0
        Total Pages Empty            : 14817
        Total Pages Marked Corrupt   : 0
        Total Pages Influx           : 0
        Total Pages Encrypted        : 0
        Highest block SCN            : 1125305 (0.1125305)

    Read the article

  • NServiceBus and NHibernate - Message Handler and Transactions

    - by mattcodes
    From my understanding, NServiceBus executes the Handle method of an IMessageHandler within a transaction; if an exception propagates out of this method, NServiceBus ensures the message is put back on the message queue (up to X times before it goes to the error queue), so we have an atomic operation, so to speak. Now, suppose that inside my NServiceBus Handle method I do something like this:

        using (var trans1 = session.BeginTransaction())
        {
            person.Age = 10;
            session.Update(person);
            trans1.Commit();
        }

        using (var trans2 = session.BeginTransaction())
        {
            person.Age = 20;
            session.Update(person);
            // throw new ApplicationException("Oh no");
            trans2.Commit();
        }

    What is the effect of this on the transaction scope? Is trans1 now counted as a nested transaction in terms of its relationship with the NServiceBus transaction, even though we have done nothing to marry them up? (If not, how would one link onto the transaction of NServiceBus?) Looking at the second block (trans2): if I uncomment the throw statement, will the NServiceBus transaction then roll back trans1 as well? In basic scenarios - say I dump the above into a console app - trans1 is independent: it commits, flushes, and won't roll back. I'm trying to clarify what happens now that we sit inside someone else's transaction, like NServiceBus's. The above is just example code; I wouldn't be working directly with the session, but rather through a unit-of-work pattern.

    Read the article

  • cPickle ImportError: No module named multiarray

    - by Rafal
    Hello, I'm using cPickle to save my database to a file. The code looks like this:

        def Save_DataBase():
            import cPickle
            from scipy import *
            from numpy import *
            a = Results.VersionName
            #filename = 'D:/results/'+a[a.find('/')+1:-a.find('/')-2]+Results.AssType[:3]+str(random.randint(0,100))+Results.Distribution+".lft"
            filename = 'D:/results/pppp.lft'
            plik = open(filename, 'w')
            DataOutput = [[[DataBase.Arrays.Nodes, DataBase.Arrays.Links, DataBase.Arrays.Turns, DataBase.Arrays.Connectors, DataBase.Arrays.Zones],
                           [DataBase.Nodes.Data, DataBase.Links.Data, DataBase.Turns.Data, DataBase.OrigConnectors.Data, DataBase.DestConnectors.Data, DataBase.Zones.Data],
                           [DataBase.Nodes.DictionaryPy2Vis, DataBase.Links.DictionaryPy2Vis, DataBase.Turns.DictionaryPy2Vis, DataBase.OrigConnectors.DictionaryPy2Vis, DataBase.DestConnectors.DictionaryPy2Vis, DataBase.Zones.DictionaryPy2Vis],
                           [DataBase.Nodes.DictionaryVis2Py, DataBase.Links.DictionaryVis2Py, DataBase.Turns.DictionaryVis2Py, DataBase.OrigConnectors.DictionaryVis2Py, DataBase.DestConnectors.DictionaryVis2Py, DataBase.Zones.DictionaryVis2Py],
                           [DataBase.Paths.List]],
                          [Results.VersionName, Results.noZones, Results.noNodes, Results.noLinks, Results.noTurns, Results.noTrips,
                           Results.Times.VersionLoad, Results.Times.GetData, Results.Times.GetCoords, Results.Times.CrossTheTime, Results.Times.Plot_Cylinder,
                           Results.AssType, Results.AssParam, Results.tStart, Results.tEnd, Results.Distribution, Results.tVector]]
            cPickle.dump(DataOutput, plik, protocol=0)
            plik.close()

    And it works fine. Most of my database rows are lists of lists, vector-like or array-like data sets. But now when I load the data back, an error occurs:

        def Load_DataBase():
            import cPickle
            from scipy import *
            from numpy import *
            filename = 'D:/results/pppp.lft'
            plik = open(filename, 'rb')
            """ first cPickle load approach """
            A = cPickle.load(plik)
            """ fail """
            """ Another approach - data format exactly as in the output step above - also fails """
            [[[DataBase.Arrays.Nodes, DataBase.Arrays.Links, DataBase.Arrays.Turns, DataBase.Arrays.Connectors, DataBase.Arrays.Zones],
              [DataBase.Nodes.Data, DataBase.Links.Data, DataBase.Turns.Data, DataBase.OrigConnectors.Data, DataBase.DestConnectors.Data, DataBase.Zones.Data],
              [DataBase.Nodes.DictionaryPy2Vis, DataBase.Links.DictionaryPy2Vis, DataBase.Turns.DictionaryPy2Vis, DataBase.OrigConnectors.DictionaryPy2Vis, DataBase.DestConnectors.DictionaryPy2Vis, DataBase.Zones.DictionaryPy2Vis],
              [DataBase.Nodes.DictionaryVis2Py, DataBase.Links.DictionaryVis2Py, DataBase.Turns.DictionaryVis2Py, DataBase.OrigConnectors.DictionaryVis2Py, DataBase.DestConnectors.DictionaryVis2Py, DataBase.Zones.DictionaryVis2Py],
              [DataBase.Paths.List]],
             [Results.VersionName, Results.noZones, Results.noNodes, Results.noLinks, Results.noTurns, Results.noTrips,
              Results.Times.VersionLoad, Results.Times.GetData, Results.Times.GetCoords, Results.Times.CrossTheTime, Results.Times.Plot_Cylinder,
              Results.AssType, Results.AssParam, Results.tStart, Results.tEnd, Results.Distribution, Results.tVector]] = cPickle.load(plik)

    The error is (in both cases):

        A = cPickle.load(plik)
        ImportError: No module named multiarray

    Any ideas? PS.
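
    For reference, a minimal sketch of the round trip (my own illustration, not from the question; the path is a placeholder). The pickle of a numpy array records a reference to numpy.core.multiarray, so numpy must be importable in the loading environment - otherwise cPickle.load() raises exactly this "No module named multiarray" error:

        import cPickle
        import numpy

        data = {'nodes': numpy.arange(5), 'links': [[1, 2], [3, 4]]}

        # Use binary mode for both dump and load; mixing a text-mode dump
        # with a binary-mode load is another classic source of load failures.
        with open('D:/results/pppp.lft', 'wb') as f:
            cPickle.dump(data, f, protocol=2)

        # Unpickling re-imports numpy.core.multiarray behind the scenes,
        # so numpy must be importable (and compatible) here.
        with open('D:/results/pppp.lft', 'rb') as f:
            restored = cPickle.load(f)

        assert (restored['nodes'] == numpy.arange(5)).all()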

    Read the article

  • Wicket, Spring and Hibernate - Testing with Unitils - Error: Table not found in statement [select re

    - by John
    Hi there. I've been following a tutorial and a sample application, namely 5 Days of Wicket - Writing the tests: http://www.mysticcoders.com/blog/2009/03/10/5-days-of-wicket-writing-the-tests/ I've set up my own little project with a simple shoutbox that saves messages to a database. I then wanted to set up a couple of tests that would make sure that if a message is stored in the database, the retrieved object contains the exact same data. Upon running mvn test, all my tests fail. I've noticed that even though my unitils.properties says to use the 'hsqldb' dialect, this message is still output in the console window when starting the tests: INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect. I've added the entire dump from the console at the bottom of this post (which goes on for miles and miles :-)). The exception is:

        Caused by: java.sql.SQLException: Table not found in statement [select relname from pg_class]
            at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
            at org.hsqldb.jdbc.jdbcStatement.fetchResult(Unknown Source)
            at org.hsqldb.jdbc.jdbcStatement.executeQuery(Unknown Source)
            at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:188)
            at org.hibernate.tool.hbm2ddl.DatabaseMetadata.initSequences(DatabaseMetadata.java:151)
            at org.hibernate.tool.hbm2ddl.DatabaseMetadata.(DatabaseMetadata.java:69)
            at org.hibernate.tool.hbm2ddl.DatabaseMetadata.(DatabaseMetadata.java:62)
            at org.springframework.orm.hibernate3.LocalSessionFactoryBean$3.doInHibernate(LocalSessionFactoryBean.java:958)
            at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:419)
            ... 49 more

    I've set up my unitils.properties file like so:

        database.driverClassName=org.hsqldb.jdbcDriver
        database.url=jdbc:hsqldb:mem:PUBLIC
        database.userName=sa
        database.password=
        database.dialect=hsqldb
        database.schemaNames=PUBLIC

    My abstract IntegrationTest class:

        @SpringApplicationContext({"/com/upbeat/shoutbox/spring/applicationContext.xml", "applicationContext-test.xml"})
        public abstract class AbstractIntegrationTest extends UnitilsJUnit4 {
            private ApplicationContext applicationContext;
        }

    applicationContext-test.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">

            <bean id="dataSource" class="org.unitils.database.UnitilsDataSourceFactoryBean"/>

        </beans>

    and finally, one of the test classes:

        package com.upbeat.shoutbox.web;

        import org.apache.wicket.spring.injection.annot.test.AnnotApplicationContextMock;
        import org.apache.wicket.util.tester.WicketTester;
        import org.junit.Before;
        import org.junit.Test;
        import org.unitils.spring.annotation.SpringBeanByType;

        import com.upbeat.shoutbox.HomePage;
        import com.upbeat.shoutbox.integrations.AbstractIntegrationTest;
        import com.upbeat.shoutbox.persistence.ShoutItemDao;
        import com.upbeat.shoutbox.services.ShoutService;

        public class TestHomePage extends AbstractIntegrationTest {
            @SpringBeanByType
            private ShoutService svc;

            @SpringBeanByType
            private ShoutItemDao dao;

            protected WicketTester tester;

            @Before
            public void setUp() {
                AnnotApplicationContextMock appctx = new AnnotApplicationContextMock();
                appctx.putBean("shoutItemDao", dao);
                appctx.putBean("shoutService", svc);
                tester = new WicketTester();
            }

            @Test
            public void testRenderMyPage() {
                // start and render the test page
                tester.startPage(HomePage.class);
                // assert rendered page class
                tester.assertRenderedPage(HomePage.class);
                // assert rendered label component
                tester.assertLabel("message", "If you see this message wicket is properly configured and running");
            }
        }

    Dump from console when running mvn test:

        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building shoutbox
        [INFO] task-segment: [test]
        [INFO] ------------------------------------------------------------------------
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 3 resources
        [INFO] Copying 4 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [resources:testResources {execution: default-testResources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 2 resources
        [INFO] [compiler:testCompile {execution: default-testCompile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [surefire:test {execution: default-test}]
        [INFO] Surefire report directory: F:\Projects\shoutbox\target\surefire-reports
        INFO - ConfigurationLoader - Loaded main configuration file unitils-default.properties from classpath.
        INFO - ConfigurationLoader - Loaded custom configuration file unitils.properties from classpath.
        INFO - ConfigurationLoader - No local configuration file unitils-local.properties found.
        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running com.upbeat.shoutbox.web.TestViewShoutsPage
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.02 sec
        INFO - Version - Hibernate Annotations 3.4.0.GA
        INFO - Environment - Hibernate 3.3.0.SP1
        INFO - Environment - hibernate.properties not found
        INFO - Environment - Bytecode provider name : javassist
        INFO - Environment - using JDK 1.4 java.sql.Timestamp handling
        INFO - Version - Hibernate Commons Annotations 3.1.0.GA
        INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem
        INFO - QueryBinder - Binding Named query: item.getById = from ShoutItem item where item.id = :id
        INFO - QueryBinder - Binding Named query: item.find = from ShoutItem item order by item.timestamp desc
        INFO - QueryBinder - Binding Named query: item.count = select count(item) from ShoutItem item
        INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS
        INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring
        INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory
        INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
        INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
        INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0
        INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory
        INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled
        INFO - SettingsFactory - Automatic session close at end of transaction: disabled
        INFO - SettingsFactory - JDBC batch size: 1000
        INFO - SettingsFactory - JDBC batch updates for versioned data: disabled
        INFO - SettingsFactory - Scrollable result sets: enabled
        INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled
        INFO - SettingsFactory - Connection release mode: auto
        INFO - SettingsFactory - Default batch fetch size: 1
        INFO - SettingsFactory - Generate SQL with comments: disabled
        INFO - SettingsFactory - Order SQL updates by primary key: disabled
        INFO - SettingsFactory - Order SQL inserts for batching: disabled
        INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory
        INFO - SettingsFactory - Query language substitutions: {}
        INFO - SettingsFactory - JPA-QL strict compliance: disabled
        INFO - SettingsFactory - Second-level cache: enabled
        INFO - SettingsFactory - Query cache: enabled
        INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider
        INFO - SettingsFactory - Optimize cache for minimal puts: disabled
        INFO - SettingsFactory - Structured second-level cache entries: disabled
        INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
        INFO - SettingsFactory - Echoing all SQL to stdout
        INFO - SettingsFactory - Statistics: disabled
        INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled
        INFO - SettingsFactory - Default entity-mode: pojo
        INFO - SettingsFactory - Named query checking : enabled
        INFO - SessionFactoryImpl - building session factory
        INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured
        INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
        INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache
        INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
        INFO - SQLErrorCodesFactory - SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase]
        INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3e0ebb: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
        INFO - sPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@a8e586: display name [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]; startup date [Tue May 04 18:19:58 CEST 2010]; root of context hierarchy
        INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [com/upbeat/shoutbox/spring/applicationContext.xml]
        INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [applicationContext-test.xml]
        INFO - DefaultListableBeanFactory - Overriding bean definition for bean 'dataSource': replacing [Generic bean: class [org.apache.commons.dbcp.BasicDataSource]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=close; defined in class path resource [com/upbeat/shoutbox/spring/applicationContext.xml]] with [Generic bean: class [org.unitils.database.UnitilsDataSourceFactoryBean]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [applicationContext-test.xml]]
        INFO - sPathXmlApplicationContext - Bean factory for application context [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]: org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1
        INFO - pertyPlaceholderConfigurer - Loading properties file from class path resource [application.properties]
        INFO - DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
        INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem
        INFO - QueryBinder - Binding Named query: item.getById = from ShoutItem item where item.id = :id
        INFO - QueryBinder - Binding Named query: item.find = from ShoutItem item order by item.timestamp desc
        INFO - QueryBinder - Binding Named query: item.count = select count(item) from ShoutItem item
        INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS
        INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring
        INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory
        INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
        INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
        INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0
        INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory
        INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled
        INFO - SettingsFactory - Automatic session close at end of transaction: disabled
        INFO - SettingsFactory - JDBC batch size: 1000
        INFO - SettingsFactory - JDBC batch updates for versioned data: disabled
        INFO - SettingsFactory - Scrollable result sets: enabled
        INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled
        INFO - SettingsFactory - Connection release mode: auto
        INFO - SettingsFactory - Default batch fetch size: 1
        INFO - SettingsFactory - Generate SQL with comments: disabled
        INFO - SettingsFactory - Order SQL updates by primary key: disabled
        INFO - SettingsFactory - Order SQL inserts for batching: disabled
        INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory
        INFO - SettingsFactory - Query language substitutions: {}
        INFO - SettingsFactory - JPA-QL strict compliance: disabled
        INFO - SettingsFactory - Second-level cache: enabled
        INFO - SettingsFactory - Query cache: enabled
        INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider
        INFO - SettingsFactory - Optimize cache for minimal puts: disabled
        INFO - SettingsFactory - Structured second-level cache entries: disabled
        INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
        INFO - SettingsFactory - Echoing all SQL to stdout
        INFO - SettingsFactory - Statistics: disabled
        INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled
        INFO - SettingsFactory - Default entity-mode: pojo
        INFO - SettingsFactory - Named query checking : enabled
        INFO - SessionFactoryImpl - building session factory
        INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured
        INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
        INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache
        INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.34 sec <<< FAILURE!
        Running com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec <<< FAILURE!
        Running com.upbeat.shoutbox.mocks.ShoutServiceTest
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.01 sec <<< FAILURE!

        Results :

        Tests in error:
          initializationError(com.upbeat.shoutbox.web.TestViewShoutsPage)
          testRenderMyPage(com.upbeat.shoutbox.web.TestHomePage)
          initializationError(com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest)
          initializationError(com.upbeat.shoutbox.mocks.ShoutServiceTest)

        Tests run: 4, Failures: 0, Errors: 4, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] There are test failures.
        Please refer to F:\Projects\shoutbox\target\surefire-reports for the individual test results.
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 3 seconds
        [INFO] Finished at: Tue May 04 18:19:58 CEST 2010
        [INFO] Final Memory: 13M/31M
        [INFO] ------------------------------------------------------------------------

    Any help is greatly appreciated.

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write an update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, auditable record is maintained?
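
    For what it's worth, a minimal sketch of the service-layer option (my own illustration, not from the question; the class and the database/collection names are hypothetical, and it uses modern pymongo, which postdates the original post):

        from pymongo import MongoClient

        class WriteOnlyLog(object):
            """Exposes insert and query, but no update/delete operations."""

            def __init__(self, uri='mongodb://localhost:27017'):
                self._coll = MongoClient(uri).logging.records

            def append(self, record):
                # insert_one never touches existing documents
                return self._coll.insert_one(record).inserted_id

            def find(self, query):
                return self._coll.find(query)

        log = WriteOnlyLog()
        log.append({'level': 'INFO', 'msg': 'user logged in'})

    This only guards against poorly-written application code, of course; malicious code could still open its own connection, so it's a complement to database-level access control, not a substitute.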

    Read the article

  • Streaming to the Android MediaPlayer

    - by Rob Szumlakowski
    Hi. I'm trying to write a lightweight HTTP server in my app to feed dynamically generated MP3 data to the built-in Android MediaPlayer. I am not permitted to store my content on the SD card, and my input data is essentially of infinite length. I tell MediaPlayer that its data source is basically something like "http://localhost/myfile.mp3", and I have a simple server set up that waits for MediaPlayer to make this request. However, MediaPlayer isn't very cooperative. At first, it makes an HTTP GET and tries to grab the whole file. It times out if we try to simply dump data into the socket, so we tried using the HTTP Range header to write data in chunks. MediaPlayer doesn't like this and doesn't keep requesting the subsequent chunks. Has anyone had any success streaming data directly into MediaPlayer? Do I need to implement an RTSP or Shoutcast server instead? Am I simply missing a critical HTTP header? What strategy should I use here? Rob Szumlakowski
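
    As a rough illustration of the "lightweight server" part (my own sketch, not from the question: get_mp3_chunk() is a hypothetical stand-in for the app's dynamic MP3 generator, and whether MediaPlayer accepts chunked transfer encoding is exactly what is in question here):

        from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

        def get_mp3_chunk():
            """Hypothetical placeholder for the dynamically generated MP3 data."""
            return '\x00' * 4096

        class StreamHandler(BaseHTTPRequestHandler):
            protocol_version = 'HTTP/1.1'  # chunked encoding requires HTTP/1.1

            def do_GET(self):
                self.send_response(200)
                self.send_header('Content-Type', 'audio/mpeg')
                self.send_header('Transfer-Encoding', 'chunked')
                self.end_headers()
                # An endless stream: each chunk is framed as <hex length>\r\n<data>\r\n;
                # no terminating zero-length chunk is ever sent.
                while True:
                    chunk = get_mp3_chunk()
                    self.wfile.write('%x\r\n%s\r\n' % (len(chunk), chunk))

        HTTPServer(('localhost', 8000), StreamHandler).serve_forever()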

    Read the article

  • Help catching AV with WinDbg and ADPlus 7.0

    - by Stoune
    I want to catch a memory access violation in SQL Server Compact Edition, as described at http://debuggingblog.com/wp/2009/02/18/memory-access-violation-in-sql-server-compact-editionce/ The suggested config there is the old-style adplus invocation:

        CRASH Quiet MyApp.exe NoDumpOnFirstChance clr;av FullDump gn

    I downloaded the latest Debugging Tools and noticed that Microsoft has rewritten the adplus tool in managed code and changed the syntax of the config file. I rewrote the config file like this:

        <ADPlus Version="2">
          <Settings>
            <RunMode>Crash</RunMode>
            <Option>Quiet</Option>
            <Option>NoDumpOnFirst</Option>
            <Sympath>c:\symbols\</Sympath>
            <OutputDir>c:\work\output\</OutputDir>
            <ProcessName>c:\work\app\output\MyApp.exe</ProcessName>
          </Settings>
          <Exceptions><!-- to get the full dump on CLR access violation -->
            <Exception Code="clr;av">
              <Actions1>FullDump</Actions1>
              <ReturnAction1>gn</ReturnAction1>
            </Exception>
          </Exceptions>
        </ADPlus>

    Now I get the error "Couldn't find exception with code: clr;av". If I understand correctly, it didn't load the sos extension, but I can't find the right section and syntax to use to load it. adplus_old.vbs, for some reason, didn't launch the process at all on Windows 7. My versions:

        WinDBG 6.12.0002.633 X86
        ADPlus Engine Version: 7.01.002 02/27/2009

    Does anyone have a working example of a config for debugging a .NET app with the latest adplus.exe?

    Read the article

  • How do I fix: InvalidOperationException upon Session timeout in Ajax WebService call

    - by Ngm
    Hi all, we are invoking an ASP.NET AJAX web service from the client side, so the JavaScript functions have calls like:

        // The function to alter the server side state object and set the selected node for the case tree.
        function JSMethod(caseId, url) {
            Sample.XYZ.Method(param1, param2, OnMethodReturn);
        }

        function OnMethodReturn(result) {
            var sessionExpiry = CheckForSessionExpiry(result);
            var error = CheckForErrors(result);
            // ... process result
        }

    And on the server side, in the ".asmx.cs" file:

        namespace Sample
        {
            [ScriptService]
            public class XYZ : WebService
            {
                [WebMethod(EnableSession = true)]
                public string Method(string param1, string param2)
                {
                    if (SessionExpired())
                    {
                        return sessionExpiredMessage;
                    }
                    . . .
                }
            }
        }

    The website is set up to use forms-based authentication. Now, if the session has expired and the JavaScript function JSMethod is invoked, the following error is obtained:

        Microsoft JScript runtime error: Sys.Net.WebServiceFailedException: The server method 'Method' failed with the following error: System.InvalidOperationException-- Authentication failed.

    This exception is raised by the method Sys$Net$WebServiceProxy$invoke in the file "ScriptResource.axd":

        function Sys$Net$WebServiceProxy$invoke {
            . . .
            {
                // In debug mode, if no error was registered, display some trace information
                var error;
                if (result && errorObj) {
                    // If we got a result, we're likely dealing with an error in the method itself
                    error = result.get_exceptionType() + "-- " + result.get_message();
                }
                else {
                    // Otherwise, it's probably a 'top-level' error, in which case we dump the
                    // whole response in the trace
                    error = response.get_responseData();
                }
                // DevDiv 89485: throw, not alert()
                throw Sys.Net.WebServiceProxy._createFailedError(methodName, String.format(Sys.Res.webServiceFailed, methodName, error));
            }

    So the problem is that the exception is raised even before Method is invoked; the exception occurs during the creation of the web proxy. Any ideas on how to resolve this problem?

    Read the article

  • GWT Best Practices - MVP

    - by GWTNewbie
    A question for all the GWT gurus out there. I'm a newbie to GWT and am trying to understand the best practices for coding a GWT application. I have gone through "Large scale application development and MVP", based on Ray Ryan's talk at Google I/O 2009, and it has given me a good starting point; I also downloaded the sample source code for the Contacts application based on the best practices listed there. The application I'm trying to develop using GWT is a bit bigger (in terms of the modules involved) than the sample Contacts application, so I want to split it up into multiple functional areas. I have been reading that having a single entry point in a GWT application is a good idea, but I don't want to dump all the code into one single AppController class and one single RpcService. What would be the best approach in this situation? How would I go about dispatching control to multiple controllers? Is there a way to achieve this using some classes in the GWT framework?

    Read the article

  • Why can't Perl's DBD::DB2 find dbivport.h during installation?

    - by Liju Mathew
    We are using a Perl utility to dump data from a DB2 database. We installed the DBI package, and it requires the DBD package as well. We don't have root access, and when we try to install the DBD package we get the following error:

        ERROR BUILDING DB2.pm

        [lijumathew@intblade03 DBD-DB2-1.78]$ make
        make[1]: Entering directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants'
        gcc -c -I"/db2/db2tf1/sqllib/include" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" Constants.c
        Running Mkbootstrap for DBD::DB2::Constants ()
        chmod 644 Constants.bs
        rm -f ../blib/arch/auto/DBD/DB2/Constants/Constants.so
        gcc -shared -L/usr/local/lib Constants.o -o ../blib/arch/auto/DBD/DB2/Constants/Constants.so
        chmod 755 ../blib/arch/auto/DBD/DB2/Constants/Constants.so
        cp Constants.bs ../blib/arch/auto/DBD/DB2/Constants/Constants.bs
        chmod 644 ../blib/arch/auto/DBD/DB2/Constants/Constants.bs
        make[1]: Leaving directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants'
        gcc -c -I"/db2/db2tf1/sqllib/include" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" DB2.c
        In file included from DB2.h:22,
                         from DB2.xs:7:
        dbdimp.h:10:22: dbivport.h: No such file or directory
        make: *** [DB2.o] Error 1

    How do we fix this? Do we need root access to resolve this?

    Read the article

  • WCF net.tcp bindings, message formats and security questions

    - by RemotecUk
    Hi, sorry for the stupid questions, but there are just some things about WCF I can't get my head around, and I would be grateful for some advice on the following. At a very basic level, is it correct that WCF uses either binary (net.tcp), HTTP, or MSMQ to transfer my message on the wire? And is it true that in all cases, regardless of how the data is transferred, the message itself is in the SOAP format with headers and a body? So it's a sort of XML message that is transmitted either over HTTP/S or in a binary format. Is net.tcp a good choice for my client-server app? It's similar to a messenger app in that the clients are all remote users on the other side of the firewall from my server, yet most things I am reading tell me to use WS* and HTTP. Is net.tcp secured by default and without certificates, i.e. so that people cannot listen on the wire and decode the data that's going to and from? Is it possible to send a username and password over net.tcp without an installed certificate? If so, I presume I can hook this up to my membership provider and authenticate access to each method on my service contract implementation. I presume that with username and password security, the proxy is initialised with the username and password and that this information is sent with every request; then my membership provider will be invoked for each method call and do whatever it needs to do to get authorisation for the method. Sorry for the dump of questions, but it would be great to know if I'm thinking the right way about how WCF works. Thanks.

    Read the article

  • People not respecting good practices at workplace

    - by VexXtreme
    Hi. There are some major issues in my company regarding practices, procedures and methodologies. First of all, we're a small firm and there are only 3-4 developers, one of whom is our boss, who isn't really a programmer; he just chimes in now and then and tries to code some simple things. The biggest problems are:

    - Major cowboy coding and lack of methodologies. I've tried explaining to everyone the benefits of TDD and unit testing, but I only got weird looks, as if I were talking nonsense. Even the boss gave me a reaction along the lines of "why do we need that? it's just unnecessary overhead and a waste of time".
    - Nobody uses design patterns. I have to tell people not to write business logic in code-behind, I have to remind them not to hardcode concrete implementations and dependencies into classes, et cetera. I often feel like a nazi because of this, and people think I'm enforcing unnecessary policies and use of design patterns.
    - The biggest problem of all is that people don't even respect common-sense security policies. I've noticed that the college students who work on tech support use our continuous integration and source control server as a dump to store music, videos, and TV series they download from torrents. You can imagine my horror when I realized that most of the partition reserved for source control backups was taken up by entire seasons of TV series and movies.
    - Our development server isn't even connected to a UPS or surge protection; it's just plugged straight into the wall outlet. I asked the boss to buy surge protection, but he said it's unnecessary.

    All in all, I like working here because the atmosphere is very relaxed, the money is good, and we're all like a family (so don't advise me to quit), but I simply don't know how to explain to people that they need to stick to some standards and good practices in the IT industry and that they can't behave so irresponsibly. Thanks for the advice.

    Read the article

  • How does one convert 16-bit RGB565 to 24-bit RGB888?

    - by jleedev
    I've got my hands on a 16-bit rgb565 image (specifically, an Android framebuffer dump), and I would like to convert it to 24-bit rgb888 for viewing on a normal monitor. The question is, how does one convert a 5- or 6-bit channel to 8 bits? The obvious answer is to shift it. I started out by writing this:

        #include <stdio.h>
        #include <stdint.h>
        #include <unistd.h>

        int main(void) {
            uint16_t buf;
            while (read(0, &buf, sizeof buf)) {
                /* rgb565: 5 bits red, 6 bits green, 5 bits blue */
                unsigned char red   = (buf & 0xf800) >> 11;
                unsigned char green = (buf & 0x07e0) >> 5;
                unsigned char blue  =  buf & 0x001f;
                putchar(red << 3);
                putchar(green << 2);
                putchar(blue << 3);
            }
            return 0;
        }

    However, this doesn't have one property I would like, which is for 0xffff to map to 0xffffff instead of 0xf8fcf8. I need to expand the value in some way, but I'm not sure how that should work. The Android SDK comes with a tool called ddms (Dalvik Debug Monitor) that takes screen captures. As far as I can tell from reading the code, it implements the same logic; yet its screenshots come out different, and white maps to white. Here's the raw framebuffer, the smart conversion by ddms, and the dumb conversion by the above algorithm. (By the way, this conversion is implemented in ffmpeg, but it's just performing the dumb conversion listed above, leaving the LSBs at zero.) I guess I have two questions: What's the most sensible way to convert rgb565 to rgb888? How is ddms converting its screenshots?
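
    One standard technique for the expansion (my own sketch, not from the original post) is bit replication: copy the high bits of each channel into the vacated low bits, so that the maximum 5- or 6-bit value maps exactly to 255. A quick illustration in Python:

        def rgb565_to_rgb888(pixel):
            r = (pixel >> 11) & 0x1f
            g = (pixel >> 5) & 0x3f
            b = pixel & 0x1f
            # (x << 3) | (x >> 2) maps 5-bit 0..31 onto 0..255 with 31 -> 255;
            # (x << 2) | (x >> 4) does the same for the 6-bit green channel.
            return ((r << 3) | (r >> 2),
                    (g << 2) | (g >> 4),
                    (b << 3) | (b >> 2))

        assert rgb565_to_rgb888(0xffff) == (255, 255, 255)
        assert rgb565_to_rgb888(0x0000) == (0, 0, 0)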

    Read the article

  • MSI wrapper for msdeploy?

    - by Nate
    Just moved to VS2010 and found that deployment is quite different. The old way I did things (VS2008) was as follows: 1) right-click on my web app and "Add web deployment project"; 2) start a web setup project, dump that web deployment project's output in it, and add any custom installer actions (connection strings or other user input) as necessary using the custom installer class. Easy! Now that I don't have the "Add web deployment project" option (or can't find it?!), msdeploy seems to be the thing to do. I have successfully got this to work via the command line, but I still need some custom actions. In this example, we have separated our web services deployment from our UI deployment (Silverlight) so that they can be hosted separately or together. So... during/after install I need to ask the user where the web services are located. So I tried this: 1) start a web setup project and include all of the deploy package files (.deploy.cmd, setparameters.xml, etc.); 2) gather user input in my custom installer class and shell-execute the .deploy.cmd. The problem is that the deploy fails on deleting the zip file, since it's in use by another process (I assume my installer). Anyone have any ideas on how to get around this? Or is there a better way to accomplish this task? Any input would be appreciated! Nate

    Read the article

  • PLT Scheme Memory

    - by Eric
    So I need some help with implementing a make-memory program using Scheme. I need two messages, 'write and 'read. So it would be like (mymem 'write 34 -116) and (mymem 'read 99), right? And (define mymem (make-memory 100))... How would I implement this in Scheme? Using an alist? I need some help coding it. I have this code, which makes make-memory a procedure; when you run mymem you get ((99 . 0)), and what I need to do is recur so I get an alist of dotted pairs down to ((0 . 0)). So, any suggestions on how to code this? Does anyone have any ideas what I could do to recur and handle the messages 'write and 'read?

        (define make-memory
          (lambda (n)
            (letrec ((mem '())
                     (dump (display mem)))
              (lambda ()
                (if (= n 0)
                    (cons (cons n 0) mem)
                    mem)
                (cons (cons (- n 1) 0) mem))
              (lambda (msg loc val)
                (cond ((equal? msg 'read)
                       (display (cons n val))
                       (set! n (- n 1)))
                      ((equal? msg 'write)
                       (set! mem (cons val loc))
                       (set! n (- n 1))
                       (display mem)))))))

        (define mymem (make-memory 100))

    Yes, this is an assignment, but I wrote this code. I just need some help or direction. And yes, I do know about variable-length argument lists.
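
    For direction rather than a finished answer: one way to structure this is a closure over an alist of (address . value) pairs, dispatching on the message symbol. A minimal sketch (the interface details are an assumption, not a prescribed solution):

        ;; Sketch: build ((n-1 . 0) ... (1 . 0) (0 . 0)) and return a dispatcher.
        (define (make-memory n)
          (let loop ((i 0) (mem '()))
            (if (< i n)
                (loop (+ i 1) (cons (cons i 0) mem))
                (lambda (msg loc . val)          ; val is a rest argument
                  (cond ((eq? msg 'read)
                         (assv loc mem))         ; returns the (address . value) pair
                        ((eq? msg 'write)
                         (set! mem (map (lambda (cell)
                                          (if (= (car cell) loc)
                                              (cons loc (car val))
                                              cell))
                                        mem))))))))

        (define mymem (make-memory 100))
        (mymem 'write 34 -116)
        (mymem 'read 34)   ; => (34 . -116)

    A vector would give constant-time access, but the alist matches the dotted-pair output described above.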

    Read the article

  • BCP task hangs while executing

    - by user350374
    Hey guys, we have an HPC node that runs some of our tasks. I have a task in my .NET project that kicks off the bcp utility on the HPC node, and the output of my query runs to about 9 MB. When the HPC node runs this task, the output of the query is dumped into a file, but after it dumps around 5 MB of data it suddenly stops dumping any more, and this happens every time. (Note this isn't a data issue, as it doesn't stop at the same row each time.) This may or may not be significant, but I dump the data onto a different server, which has adequate permissions set. I have run the command with the same query directly on the HPC node and on other computers, and it gives the correct output. I'm running the bcp command as follows:

        var processInfo = new ProcessStartInfo("bcp.exe", argument)
        {
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true,
            UseShellExecute = false
        };

        var proc = new Process { StartInfo = processInfo, EnableRaisingEvents = true };
        proc.Exited += new EventHandler(bcp_log);
        proc.Start();
        proc.WaitForExit();

    So my code waits for each bcp task to finish before going ahead, as I call it multiple times. To reiterate: it only fails when the output exceeds a certain number of bytes, in this case approximately 5 MB. Any help is much appreciated.
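
    One classic cause of exactly this symptom is redirecting standard output/error and never reading them: once the OS pipe buffer fills, the child process blocks on its next write, and WaitForExit() waits forever. A sketch of the same launch with both streams drained asynchronously (reusing processInfo and bcp_log from above):

        var proc = new Process { StartInfo = processInfo, EnableRaisingEvents = true };
        proc.Exited += new EventHandler(bcp_log);
        // Drain both redirected streams so bcp never blocks on a full pipe buffer.
        proc.OutputDataReceived += (s, e) => { if (e.Data != null) Console.WriteLine(e.Data); };
        proc.ErrorDataReceived += (s, e) => { if (e.Data != null) Console.Error.WriteLine(e.Data); };
        proc.Start();
        proc.BeginOutputReadLine();
        proc.BeginErrorReadLine();
        proc.WaitForExit();

    Alternatively, if the output isn't needed, dropping the two Redirect* flags avoids the problem entirely.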

    Read the article

  • dumping the source code for an anonymous function

    - by intuited
    I'm working with a lot of anonymous functions, i.e. functions declared as part of a dictionary, aka "methods". It's getting pretty painful to debug, because I can't tell which function the errors are happening in. Vim's backtraces look like this:

        Error detected while processing function NamedFunction..2111..2105:
        line 1:
        E730: using List as a String

    This trace shows that the error occurred three levels down the stack, on the first line of anonymous function #2105; that is, NamedFunction called anonymous function #2111, which called anonymous function #2105. NamedFunction was declared through the normal function NamedFunction() ... endfunction syntax; the others were declared using code like function dict.func() ... endfunction. So obviously I'd like to find out which function has number 2105. Assuming it's still in scope, it's possible to find out which dictionary entry references it by dumping all of the dictionary variables that might contain that reference. This is sort of awkward, and it's difficult to be systematic about it, though I guess I could code up a function to search through all of the loaded dictionaries for a reference to that function, watching out for circular references. Although to be really thorough, it would have to search not only script-local and global dictionaries, but buffer-local dictionaries as well; is there a way to access another buffer's local variables? Anyway, I'm wondering if it's possible to dump the source code of the anonymous function instead. That would be a lot easier and probably more reliable.

    Read the article

  • To GAC, or not to GAC?

    - by Jagd
    I have a data access layer (DAL) that is written in ASP.NET 3.5 and uses the Microsoft patterns & practices libraries (hereafter referred to as P&P) to accomplish its data access. I installed P&P and it resides in my GAC, so, logically, my DAL references it in the GAC. Therefore, the P&P libraries are never pulled down to the bin folder of my DAL. I use this DAL project in at least five (more than that, even, but I'm too lazy to count them all) different websites. And this has all worked just fine for me, because I'm the only developer who works on these websites. But now other developers are going to work on some of these websites. The problem: if a developer pulls the DAL project down from our code repository, it won't build if they don't have the P&P libraries installed. My question: should I expect the developers to install the P&P libraries, or should I just dump the libraries in the bin folder and be done with it? I realize that dumping them into the bin folder is probably the easiest way to deal with the problem, but I've never been a big fan of using the bin folder when I can reference the GAC instead.

    Read the article

  • How to dynamically set control IDs inside a repeater template?

    - by Bryan
    Here is a perplexing issue I have not seen a good answer to on StackOverflow, although there have been a couple of stabs at it... I have a situation where I'd like to do this:

        <asp:Repeater ID="MyRepeater" runat="server" OnItemDataBound="MyRepeater_ItemDataBound">
            <ItemTemplate>
                <li id="id?">
                    All the other stuff
                </li>
            </ItemTemplate>
        </asp:Repeater>

    The question... is how do I get the id of my <li> elements to be id1, id2, id3, etc., based on the ItemIndex they are bound to? So far the most... er... "elegant" solution I've come up with is to replace the <li> with an asp:Literal and dump the <li ...> markup as text. But that just feels... so wrong. And no, I'm not using ASP.NET 4.0, which I've read will provide this functionality.
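
    For what it's worth, one approach that works even before ASP.NET 4.0 is a databinding expression directly in the template, since Container inside an ItemTemplate is the RepeaterItem and exposes ItemIndex. A sketch:

        <asp:Repeater ID="MyRepeater" runat="server" OnItemDataBound="MyRepeater_ItemDataBound">
            <ItemTemplate>
                <%-- ItemIndex is zero-based, so add 1 to get id1, id2, id3, ... --%>
                <li id="id<%# Container.ItemIndex + 1 %>">
                    All the other stuff
                </li>
            </ItemTemplate>
        </asp:Repeater>

    The same index is available inside MyRepeater_ItemDataBound as e.Item.ItemIndex, if setting the id from code-behind is preferred.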

    Read the article

  • How to use iPhone SDK Private APIs

    - by eagle
    I haven't found a comprehensive list of the steps required to use a private API from the iPhone library. In particular, I would like to know how to get the header files, if they are even required; how to get the code to compile (when I simply add the header, it complains that the functions aren't defined); and what resources one can use to learn about private APIs (e.g. other users' experiences, such as http://iphonedevwiki.net/ which has a few). I've read in other places that people recommend using class-dump to get the headers. Are there any alternative methods? I've noticed that there are some repositories of iPhone private SDKs; what are the most up-to-date resources you would recommend? Most of the previous questions about documentation of private APIs have linked to Erica Sadun's website, which doesn't seem to have documentation anymore (all the links on the left are crossed out). Please save the comments about not using private APIs... I know the biggest risks: the app will get rejected by Apple, and the app will break in future updates to the OS. I'm talking about legitimate uses, such as private application use (e.g. for unit testing, or messing around to see what's possible).

    Read the article

  • Unsupported operand types when value is integer

    - by Adam Tester
    I'm getting this error when trying to add 2 to an integer. I am using the CodeIgniter framework.

        Fatal error: Unsupported operand types in D:\wamp\www\application\libraries\Gen_images.php on line 180

    Here is where it's called:

        // Now process the image
        var_dump($this->upload->data('image_width'));
        $this->gen_login->resize($file_name,
                                 $this->upload->data('image_width'),
                                 $this->upload->data('image_height'));

    I get the error on line 180, which is:

        $config['width'] = $width + 2;

    So I thought $width must be an array or string, so I wouldn't be able to add to it, but the var_dump shows this:

        array
          'file_name' => string 'genyx_1341414096.jpg' (length=20)
          'file_type' => string 'image/jpeg' (length=10)
          'file_path' => string 'D:/wamp/www/beer/uploads/' (length=25)
          'full_path' => string 'D:/wamp/www/beer/uploads/genyx_1341414096.jpg' (length=45)
          'raw_name' => string 'genyx_1341414096' (length=16)
          'orig_name' => string 'genyx_1341414096.jpg' (length=20)
          'client_name' => string '294207_177080222375077_100002193022560_361510_991268937_s.jpg' (length=61)
          'file_ext' => string '.jpg' (length=4)
          'file_size' => float 55.85
          'is_image' => boolean true
          'image_width' => int 720
          'image_height' => int 479
          'image_type' => string 'jpeg' (length=4)
          'image_size_str' => string 'width="720" height="479"' (length=24)

    Can anyone help me?
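
    Judging from that dump, data() in this CodeIgniter version ignores its argument and returns the whole metadata array (older CI releases don't take an $index parameter, which is my reading and worth verifying against the installed version), so $width really is an array. Pulling the fields out of the array first should fix it; a sketch:

        // data() returns the full metadata array here, so index into it first
        $data = $this->upload->data();

        $this->gen_login->resize(
            $file_name,
            $data['image_width'],   // int 720 in the dump above
            $data['image_height']   // int 479
        );

    With that, $width arrives in Gen_images.php as an int, and $config['width'] = $width + 2; works as intended.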

    Read the article
