Search Results

Search found 19408 results on 777 pages for 'output formats'.


  • Run a specific Windows application with different regional settings?

    - by robertc
    We have an old VB6 application, written in the UK, which we want to run on a client's server in Canada. The application must have some hard-coded date formats in it somewhere, as it's misinterpreting dates; e.g., 6th April is getting recorded as 4th June. I know I could change the user's date format, but this would affect all applications. Is it possible, perhaps with some sort of wrapper script, to set specific regional settings for this single application?

    Read the article

  • Video Player/Library for Ubuntu with ratings and thumbs

    - by greggannicott
    Hi. I've just made the switch to Ubuntu on my main PC and I've been looking for a media player that can: play all the usual video formats, rate (and ideally tag) each file, and display thumbnails for each file. Other than that there isn't much I'm after. Banshee comes close, but doesn't display thumbnails. I've Googled a lot but I'm running out of search terms to try. Does anyone have any suggestions? Cheers!

    Read the article

  • iptables rules to allow HTTP traffic to one domain only

    - by Zenet
    I need to configure my machine to allow HTTP traffic to/from serverfault.com only. All other websites, services and ports should not be accessible. I came up with these iptables rules:

        #drop everything
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        #Now, allow connection to website serverfault.com on port 80
        iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT
        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        #allow loopback
        iptables -I INPUT 1 -i lo -j ACCEPT

    It doesn't quite work. After I drop everything and move on to rule 3,

        iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT

    I get this error:

        iptables v1.4.4: host/network `serverfault.com' not found
        Try `iptables -h' or 'iptables --help' for more information.

    Do you think it is related to DNS? Should I allow it as well? Or should I just put IP addresses in the rules? Do you think what I'm trying to do could be achieved with simpler rules? How? I would appreciate any help or hints on this. Thanks a lot!
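    Since iptables resolves a host name only once, when the rule is added, DNS has to be reachable at that moment, and the resulting rule matches only the addresses returned at that time. As a hedged sketch of that angle (assuming the dig utility is installed; illustrative, not a verified answer):

        # Allow outbound DNS so host names can be resolved at all
        # (replies come back via the ESTABLISHED,RELATED rule already in the set)
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        # iptables stores IP addresses, not names, so resolve once and add a rule per address
        for ip in $(dig +short serverfault.com); do
            iptables -A OUTPUT -p tcp -d "$ip" --dport 80 -j ACCEPT
        done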

    Read the article

  • Is it possible to use ffmpeg to trim off X seconds from the beginning of a video with an unspecified length?

    - by marcelebrate
    I need to trim just the first 1 or 2 seconds off of a series of FLV recordings of varying, unspecified lengths. I've found plenty of resources for extracting a specified duration from a video (e.g. 30-second clips), but none for continuing to the end of a video. Both of these attempts just yield a copied version of the video, sans desired trimming:

        ffmpeg -ss 2 -vcodec copy -acodec copy -i input.flv output.flv
        ffmpeg -ss 2 -t 120 -vcodec copy -acodec copy -i input.flv output.flv

    The thought on the second one was: perhaps if I specified a length beyond what was possible, it'd just go to the end. No dice. I know it's not an issue with codecs or using seconds instead of timecode, since the following worked a charm:

        ffmpeg -ss 2 -t 5 -vcodec copy -acodec copy -i input.flv output.flv

    Any other ideas? I'm open to using other (Windows-based) command-line tools, but am strongly favoring ffmpeg since I'm already using it for thumbnail creation and am familiar with it. If it helps, my videos will all be under 2 minutes.
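    One hedged thing to try (behaviour differs between ffmpeg versions, so this is a sketch rather than a confirmed fix): put the seek and the copy options after the input and omit -t entirely, so the copy simply runs to the end of the file. Note that with stream copy the cut can only land on a keyframe, so the trim may not be frame-accurate:

        ffmpeg -i input.flv -ss 2 -vcodec copy -acodec copy output.flv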

    Read the article

  • Iptables - forward email ports?

    - by Emmet Brown
    I'm trying to open some local (LAN) ports and then redirect them to another server (WAN) using iptables. Here is my config:

        #WAN
        allow-hotplug eth1
        auto eth1
        iface eth1 inet static
        #WAN network card
        address 192.168.2.2
        gateway 192.168.2.1
        netmask 255.255.255.0

        #LAN
        allow-hotplug eth0
        auto eth0
        iface eth0 inet static
        address 192.168.16.6
        netmask 255.255.255.0
        network 192.168.16.0
        broadcast 192.168.16.255

    I tried this:

        iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 110 -j DNAT --to 200.40.30.218:110
        iptables -A FORWARD -p tcp -i eth0 -o eth1 -d 200.40.30.218 --dport 110 -j ACCEPT
        iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 25 -j DNAT --to 200.40.30.218:25
        iptables -A FORWARD -p tcp -i eth0 -o ethq -d 200.40.30.218 --dport 25 -j ACCEPT

    but it did not work. I also tried changing eth0 to eth1 (and eth1 to eth0) but nothing happened.

        Starting Nmap 5.00 ( http://nmap.org ) at 2011-10-03 14:44 UYST
        Interesting ports on 192.168.16.6:
        Not shown: 997 closed ports
        PORT     STATE SERVICE
        22/tcp   open  ssh
        111/tcp  open  rpcbind
        8080/tcp open  http-proxy

    I'm running Debian. Can you guys help me check what is happening?

    Edit: output of iptables-save:

        # Generated by iptables-save v1.4.8 on Mon Oct 3 15:43:14 2011
        *mangle
        :PREROUTING ACCEPT [139993:77867651]
        :INPUT ACCEPT [139385:77761761]
        :FORWARD ACCEPT [186:12071]
        :OUTPUT ACCEPT [173556:74341650]
        :POSTROUTING ACCEPT [173734:74352988]
        COMMIT
        # Completed on Mon Oct 3 15:43:14 2011
        # Generated by iptables-save v1.4.8 on Mon Oct 3 15:43:14 2011
        *nat
        :PREROUTING ACCEPT [1649:190626]
        :POSTROUTING ACCEPT [6729:339646]
        :OUTPUT ACCEPT [6697:337660]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 110 -j DNAT --to-destination 200.40.30.218:110
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 25 -j DNAT --to-destination 200.40.30.218:25
        COMMIT
        # Completed on Mon Oct 3 15:43:14 2011
        # Generated by iptables-save v1.4.8 on Mon Oct 3 15:43:14 2011
        *filter
        :INPUT ACCEPT [138307:77066136]
        :FORWARD ACCEPT [168:11207]
        :OUTPUT ACCEPT [172288:73655708]
        -A FORWARD -d 200.40.30.218/32 -i eth0 -o eth1 -p tcp -m tcp --dport 110 -j ACCEPT
        -A FORWARD -d 200.40.30.218/32 -i eth0 -o ethq -p tcp -m tcp --dport 25 -j ACCEPT
        COMMIT
        # Completed on Mon Oct 3 15:43:14 2011

    Regards
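    Two things worth checking beyond the rules themselves (a hedged sketch, not a confirmed diagnosis): one of the FORWARD rules above says "-o ethq" where "-o eth1" was presumably intended, and DNAT forwarding on Debian normally also needs kernel IP forwarding enabled plus a POSTROUTING rule so the remote server's replies come back through this box:

        # Enable packet forwarding (off by default)
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # Rewrite the source of the forwarded connections so 200.40.30.218 replies to this machine
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 200.40.30.218 --dport 110 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 200.40.30.218 --dport 25 -j MASQUERADE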

    Read the article

  • What video format works both on Windows and all Apple products out-of-the-box?

    - by Seppo Silaste
    What format and encoding should I use for video that would play out-of-the-box on both Windows and all Apple products (iPad, iPhone, etc.)? I know that the correct way to render an animation is to first render it as images, but this is overkill when rendering short animations of a work in progress for the client to comment on. I have one client that views the vids on Windows and another that views them on iPad and iPhone. I'm hoping I can render just one video instead of rendering in two different formats.
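    If the answer ends up being H.264 video with AAC audio in an MP4 container (the combination the Apple devices above play natively), a rough command-line sketch for producing a single preview file is below; the file names are placeholders, and older ffmpeg builds may additionally need "-strict experimental" for the built-in AAC encoder:

        ffmpeg -i render_output.avi -vcodec libx264 -acodec aac preview.mp4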

    Read the article

  • Custom metadata fields in Windows file manager (or other software)

    - by Dave Gaebler
    I'm trying to organize a collection of maybe 500 or so journal articles, stored in a combination of .pdf and .djvu formats. I'd like to be able to sort the collection by author(s), title, journal name, year, and subject keywords. Is there a way to create metadata fields for this information in the Windows file system (similar to how .mp3 files come with tags for album, title, track length, etc)? Or, if not, is there some software (preferably free) that can do something similar?

    Read the article

  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
    Note, for the below question: all assets are local on the device -- no network streaming is taking place. The videos contain audio tracks. I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know which specific video clip is next until we actually need to start it up. Specifically: when one video clip is playing, we will know what the next set of (roughly) 10 video clips is, but we don't know which one exactly until it comes time to 'immediately' play the next clip.

    What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond, to see when the video actually started to play, and I take the difference of that time stamp with the first place in the code that indicates which asset to start playing. From what I've seen thus far, I have found that using the combination of AVAsset loading, then creating an AVPlayerItem from that once it's ready, and then waiting for AVPlayerStatusReadyToPlay before I call play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay before playing. Roughly the same performance. One thing I'm observing is that the first AVPlayer item load is slower than the rest. It seems that pre-flighting the AVPlayer with a short / empty asset before trying to play the first video might be good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played] I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone that might be able to help.

    Update: idea 7, below, as implemented yields switching times of around 500 ms. This is an improvement, but it'd be nice to get this even faster.

    Idea 1: Use N AVPlayers (won't work). Use ~10 AVPlayer objects and start-and-pause all ~10 clips, and once we know which one we really need, switch to and un-pause the correct AVPlayer, and start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayers in iOS. There was someone asking about this on Stack Overflow here, and they found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation

    Idea 2: Use AVQueuePlayer (won't work). I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for a seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we want to play back until it's time to start that one. ios-avplayer-video-preloading

    Idea 3: Load, play, and retain AVPlayerItems in the background (not 100% sure yet -- but not looking good). I'm looking at whether there is any benefit to loading and playing the first second of each video clip in the background (suppressing video and audio output), keeping a reference to each AVPlayerItem, and, when we know which item needs to be played for real, swapping that one in and swapping the background AVPlayer with the active one. Rinse and repeat. The theory would be that recently played AVPlayer/AVPlayerItems may still hold some prepared resources which would make subsequent playback faster. So far, I have not seen benefits from this, but I might not have the AVPlayerLayer set up correctly for the background. I doubt this will really improve things from what I've seen.

    Idea 4: Use a different file format -- maybe one that is faster to load? I'm currently using .m4v (video-MPEG4) H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possibly still video-MPEG4 but with a different codec, or maybe QuickTime? Maybe a lossless video format where decoding / setup is faster?

    Idea 5: Combination of a lossless video format + AVQueuePlayer. If there is a video format that is fast to load, but maybe where the file size is insane, one idea might be to pre-prepare the first 10 seconds of each video clip with a version that is bloated but faster to load, and back that up with an asset that is encoded in H.264. Use an AVQueuePlayer, add the first 10 seconds in the uncompressed file format, and follow that up with one that is in H.264, which gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also the benefits of a more compact format.

    Idea 6: Use a non-standard AVPlayer / write my own / use someone else's. Given my needs, maybe I can't use AVPlayer but have to resort to AVAssetReader, decode the first few seconds (possibly writing a raw file to disk), and, when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking -- if we go with ~30 fps for the video, I'd end up with a ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native OpenGL ES compression formats via PVRTC)... but that's kind of crazy. Maybe there is a library out there that I can use?

    Idea 7: Combine everything into a single movie asset, and seekToTime. One idea that might be easier than some of the above is to combine everything into a single movie and use seekToTime. The thing is that we'd be jumping all around the place -- essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5

    Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.
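    For idea 7, a minimal Objective-C sketch of the seek-then-play step, assuming the clips have already been stitched into one asset and an AVPlayer named player already exists (the 30-second offset is a placeholder):

        #import <AVFoundation/AVFoundation.h>

        // Jump to the offset of the desired clip inside the combined movie, then start playback.
        // Zero tolerances force an exact seek; looser tolerances may start faster.
        CMTime clipStart = CMTimeMakeWithSeconds(30.0, 600);
        [player seekToTime:clipStart
           toleranceBefore:kCMTimeZero
            toleranceAfter:kCMTimeZero
         completionHandler:^(BOOL finished) {
             if (finished) {
                 [player play];
             }
         }];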

    Read the article

  • Three ways to upload/post/convert iMovie to YouTube

    - by user351686
    For Mac users, iMovie is probably a convenient tool for making and editing their own home movies to upload to YouTube and share with more people. However, uploading iMovie files to YouTube isn't always a smooth run; I did notice many people complaining about it. This article is meant to guide those who are troubled by the problem, by providing three common ways to upload iMovie files to YouTube.

    YouTube and iMovie. YouTube is the most popular video sharing website for users to upload, share and view videos. It gives anyone with an Internet connection the ability to upload video clips and share them with friends, family and the world. Users are invited to leave comments, pick favourites, send messages to each other and watch videos sorted into subjects and channels. YouTube accepts videos uploaded in most container formats, including WMV (Windows Media Video), 3GP (cell phones), AVI (Windows), MOV (Mac), MP4 (iPod/PSP), FLV (Adobe Flash) and MKV (H.264). These include video codecs such as MP4, MPEG and WMV. iMovie is a video editing application that comes with every Mac and lets users edit their own home movies. It imports video footage to the Mac using either the FireWire interface on most MiniDV-format digital video cameras or the USB port, or by importing the files from a hard drive, after which users can edit the video clips, add titles, and add music. Since 1999, eight versions of iMovie have been released by Apple, each with its own functions and characteristics, and each handling video in a somewhat different way. Setting version-specific details aside, the most common formats handled by iMovie, as far as my research goes, are MOV, DV, HDV and MPEG-4.

    Three ways to successfully upload iMovie files to YouTube. Solutions one and two are suitable for those who are 100% certain that their iMovie files are fully compatible with YouTube. For smooth uploading, you are required to get a YouTube account first.

    Solution 1: Export the movie from iMovie and upload it to YouTube.
    Step 1: Launch iMovie and select the project you want to upload to YouTube.
    Step 2: Go to the File menu, click Share, and select Export Movie.
    Step 3: Specify the output file name and directory, and then choose the video type and video size.

    Solution 2: Post to YouTube straight from iMovie.
    Step 1: Launch iMovie and choose the project you want to post to YouTube.
    Step 2: From the Share menu, choose YouTube.
    Step 3: In the pop-up YouTube window, specify the name of your YouTube account and the password, choose the Category, and fill in the description and tags of the project. Tick "Make this movie more private" at the bottom of the window, if desired, to limit who can view the project. Click Next, and then click Publish. iMovie will automatically export and upload the movie to YouTube.
    Step 4: Click Tell a Friend to email friends and family about your film. You can also copy the URL from the Tell a Friend window and paste it into an email created in your favourite email application. Anyone you send the email to will be able to follow the URL directly to your movie.
    Note: videos uploaded to YouTube are limited to ten minutes in length and a file size of 2 GB.

    Solution 3: Upload to YouTube after conversion. If neither of the above methods works, there is still a third way to turn to. Sometimes your iMovie files may not be recognized by YouTube because of the iMovie version (settings and functions may vary among versions), the video itself (differences in file extension, resolution, video size and length), or compatibility (videos that are simply incompatible with YouTube). In these circumstances, the most reliable method is to convert your iMovie files to files YouTube accepts, and an iMovie to YouTube converter is the ideal choice. Such a converter is designed to convert iMovie files to YouTube-workable WMV, 3GP, AVI, MOV, MP4, FLV or MKV for smooth uploading, with fast conversion speed and good output quality. It can also convert between almost all popular file formats, like AVI, WMV, MPG, MOV, VOB, DV, MP4, FLV, 3GP, RM, ASF, SWF, MP3, AAC, AC3, AIFF, AMR, WAV, WMA, etc., so the results can be put on various portable devices, imported into video editing software or played in a wide range of video players. An iMovie to YouTube converter can also serve as a video editing tool to meet specific requirements. For example, you can cut your video files to a certain length, or split your video files into smaller ones and select a resolution suitable for YouTube, via Clip or Settings respectively. Crop allows you to cut off unwanted black edges from your videos. You can also keep an eye on the whole process or snapshot your favourite pictures from the preview window. More can be discovered if you give it a try.

    Read the article

  • How to model a relationship that NHibernate (or Hibernate) doesn’t easily support

    - by MylesRip
    I have a situation in which the ideal relationship, I believe, would involve Value Object Inheritance. This is unfortunately not supported in NHibernate, so any solution I come up with will be less than perfect. Let's say that: "Item" entities have a "Location" that can be in one of multiple different formats. These formats are completely different with no overlapping fields. We will deal with each Location in the format that is provided in the data, with no attempt to convert from one format to another. Each Item has exactly one Location. "SpecialItem" is a subtype of Item; it is unique, however, in that it has exactly two Locations. "Group" entities aggregate Items. "LocationGroup" is a subtype of Group. LocationGroup also has a single Location that can be in any of the formats described above. Although I'm interested in Items by Group, I'm also interested in being able to find all Items with the same Location, regardless of which Group they are in. I apologize for the number of stipulations listed above, but I'm afraid that simplifying it any further wouldn't really reflect the difficulties of the situation. Here is how the above could be diagrammed: Mapping Dilemma Diagram: (http://www.freeimagehosting.net/uploads/592ad48b1a.jpg) (I tried placing the diagram inline, but Stack Overflow won't allow that until I have accumulated more points. I understand the reasoning behind it, but it is a bit inconvenient for now.) Hmmm... Apparently I can't have multiple links either. :-(

    Analyzing the above, I make the following observations: I treat Locations polymorphically, referring to the supertype rather than the subtype. Logically, Locations should be "value objects" rather than entities, since it is meaningless to differentiate between two Location objects that have all the same values. Thus equality between Locations should be based on field comparisons, not identifiers. Also, value objects should be immutable and shared references should not be allowed. Using NHibernate (or Hibernate), one would typically map value objects using the "component" keyword, which causes the fields of the class to be mapped directly into the database table that represents the containing class. Put another way, there would not be a separate "Locations" table in the database (and Locations would therefore have no identifiers). NHibernate (or Hibernate) does not currently support inheritance for value objects.

    My choices as I see them are:

    1. Ignore the fact that Locations should be value objects and map them as entities. This would take care of the inheritance mapping issues, since NHibernate supports entity inheritance. The downside is that I then have to deal with aliasing issues (meaning that if multiple objects share a reference to the same Location, then changing values for one object's Location would cause the location to change for other objects that share a reference to the same Location record). I want to avoid this if possible. Another downside is that entities are typically compared by their IDs. This would mean that two Location objects would be considered not equal even if the values of all their fields are the same. This would be invalid and unacceptable from the business perspective.

    2. Flatten Locations into a single class so that there are no longer inheritance relationships for Locations. This would allow Locations to be treated as value objects, which could easily be handled by using "component" mapping in NHibernate. The downside in this case would be that the domain model becomes weaker, more fragile and less maintainable.

    3. Do some "creative" mapping in the hbm files in order to force Location fields to be mapped into the containing entities' tables without using the "component" keyword. This approach is described by Colin Jack here. My situation is more complicated than the one he describes due to the fact that SpecialItem has a second Location and the fact that a different entity, LocationGroup, also has Locations. I could probably get it to work, but the mappings would be non-intuitive and therefore hard to understand and maintain by other developers in the future. Also, I suspect that these tricky mappings would likely not be possible using Fluent NHibernate, so I would lose the advantages of using that tool, at least in that situation.

    Surely others out there have run into similar situations. I'm hoping someone who has "been there, done that" can share some wisdom. :-) So here's the question: which approach should be preferred in this situation? Why?

    Read the article

  • yui compressor maven plugin doesn't compress the js files

    - by hanumant
    I am using yui compressor to compress the js files in my web app. I have configured the plugin as indicated on yui maven plugin site yui compressor maven plugin. This is the pom plugin conf <plugin> <groupId>net.sf.alchim</groupId> <artifactId>yuicompressor-maven-plugin</artifactId> <version>0.7.1</version> <executions> <execution> <phase>compile</phase> <goals> <goal>jslint</goal> <goal>compress</goal> </goals> </execution> </executions> <configuration> <failOnWarning>true</failOnWarning> <nosuffix>true</nosuffix> <force>true</force> <aggregations> <aggregation> <!-- remove files after aggregation (default: false) --> <removeIncluded>false</removeIncluded> <!-- insert new line after each concatenation (default: false) --> <insertNewLine>false</insertNewLine> <output>${project.basedir}/${webcontent.dir}/js/compressedAll.js</output> <!-- files to include, path relative to output's directory or absolute path--> <!--inputDir>base directory for non absolute includes, default to parent dir of output</inputDir--> <includes> <include>**/autocomplete.js</include> <include>**/calendar.js</include> <include>**/dialogs.js</include> <include>**/download.js</include> <include>**/folding.js</include> <include>**/jquery-1.4.2.min.js</include> <include>**/jquery.bgiframe.min.js</include> <include>**/jquery.loadmask.js</include> <include>**/jquery.printelement-1.1.js</include> <include>**/jquery.tablesorter.mod.js</include> <include>**/jquery.tablesorter.pager.js</include> <include>**/jquery.dialogs.plugin.js</include> <include>**/jquery.ui.autocomplete.js</include> <include>**/jquery.validate.js</include> <include>**/jquery-ui-1.8.custom.min.js</include> <include>**/languageDropdown.js</include> <include>**/messages.js</include> <include>**/print.js</include> <include>**/tables.js</include> <include>**/tabs.js</include> <include>**/uwTooltip.js</include> </includes> <!-- files to exclude, path relative to output's directory--> </aggregation> </aggregations> </configuration> <dependencies> <dependency> <groupId>rhino</groupId> <artifactId>js</artifactId> <scope>compile</scope> <version>1.6R5</version> </dependency> <dependency> <groupId>org.apache.maven</groupId> <artifactId>maven-plugin-api</artifactId> <version>2.0.7</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.apache.maven</groupId> <artifactId>maven-project</artifactId> <version>2.0.7</version> <scope>provided</scope> </dependency><dependency> <groupId>net.sf.retrotranslator</groupId> <artifactId>retrotranslator-runtime</artifactId> <version>1.2.9</version> <scope>runtime</scope> </dependency> </dependencies> </plugin> And here is the log at compress time These will use the artifact files already in the core ClassRealm instead, to allow them to be included in PluginDescriptor.getArtifacts(). 
[DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:jslint' [DEBUG] (f) failOnWarning = true [DEBUG] (f) jswarn = true [DEBUG] (f) outputDirectory = C:\test\target\classes [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}] [DEBUG] (f) sourceDirectory = C:\test\src\..\js [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT [DEBUG] -- end configuration -- [INFO] [yuicompressor:jslint {execution: default}] [INFO] nb warnings: 0, nb errors: 0 [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:compress' -- [DEBUG] (f) removeIncluded = false [DEBUG] (f) insertNewLine = false [DEBUG] (f) output = C:\test\WebContent\js\compressedAll.js [DEBUG] (f) includes = [**/autocomplete.js, **/calendar.js, **/dialogs.js, **/download.js, **/folding.js, **/jquery-1.4.2.min.js, **/jquery.bgifram e.min.js, **/jquery.loadmask.js, **/jquery.printelement-1.1.js, **/jquery.tablesorter.mod.js, **/jquery.tablesorter.pager.js, **/jquery.dialogs.p lugin.js, **/jquery.ui.autocomplete.js, **/jquery.validate.js, **/jquery-ui-1.8.custom.min.js, **/languageDropdown.js, **/messages.js, **/print.js, * */tables.js, **/tabs.js, **/uwTooltip.js] [DEBUG] (f) aggregations = [net.sf.alchim.mojo.yuicompressor.Aggregation@65646564] [DEBUG] (f) disableOptimizations = false [DEBUG] (f) encoding = Cp1252 [DEBUG] (f) failOnWarning = true [DEBUG] (f) force = true [DEBUG] (f) gzip = false [DEBUG] (f) jswarn = true [DEBUG] (f) linebreakpos = 0 [DEBUG] (f) nomunge = false [DEBUG] (f) nosuffix = true [DEBUG] (f) outputDirectory = C:\test\target\classes [DEBUG] (f) preserveAllSemiColons = false [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}] [DEBUG] (f) sourceDirectory = C:\test\src\..\js [DEBUG] (f) statistics = true [DEBUG] (f) suffix = -min [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT [DEBUG] -- end configuration -- [INFO] [yuicompressor:compress {execution: default}] [INFO] generate aggregation : C:\test\WebContent\js\compressedAll.js [INFO] compressedAll.js (407505b) [INFO] nb warnings: 0, nb errors: 0 [DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-resources-plugin:2.2:testResources' -- [DEBUG] (f) filters = [] [DEBUG] (f) outputDirectory = C:\test\target\test-classes [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\test , PatternSet [includes: {}, excludes: {**/*.class, **/*.java}]}}] [DEBUG] -- end configuration -- The problem is the files are getting aggregated into one file but without compressing. The link above uses version 1.1 and the plugin version I am using is 0.7.1. Is this going to make any diff. Can someone tell what is wrong here. PS: I have obfuscated some text in log to follow the compliance in my firm. So you may find it mismatching somewhere.

    Read the article

  • Replacement for Vern Buerg's list.com in 64 bit Windows 7

    - by Kevin
    I would like to find a replacement for list.com, specifically the ability to accept piped input. For example:

        p4 sync -n | list

    which accepts the output of the Perforce command and displays the results in the viewer/editor for manipulation or saving. I know that I could send the output to a file and then open the file in the viewer/editor, but I use it for temporary results. List.com doesn't work on 64-bit Windows 7.

    Read the article

  • Sharepoint xsl ddwrt FormatDateTime issue

    - by mahumba
    I use the ddwrt:FormatDateTime function to format the output like yyyyMMdd. Inputs like 01/01/2010 work fine, but when the day gets above 12 the output is an empty string.

        ddwrt:FormatDate(string(@myDate), 1033, 'yyyyMMdd')

    At first I thought it could be a language-specific problem, but none of these combinations work: 13/01/2010, 01/13/2010, 13.01.2010
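    A hedged guess, given that exactly the values with a day above 12 fail: LCID 1033 is US English, so the input is probably being parsed as MM/dd/yyyy, and "13/01/2010" has no thirteenth month. If the stored text is actually dd/MM/yyyy, passing a locale that matches the source format (for example 2057 for UK English) may be worth a try:

        ddwrt:FormatDateTime(string(@myDate), 2057, 'yyyyMMdd')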

    Read the article

  • How does the yin-yang puzzle work?

    - by Hrundik
    I'm trying to grasp the semantics of call/cc in Scheme, and the Wikipedia page on continuations shows the yin-yang puzzle as an example:

        (let* ((yin
                ((lambda (cc) (display #\@) cc)
                 (call-with-current-continuation (lambda (c) c))))
               (yang
                ((lambda (cc) (display #\*) cc)
                 (call-with-current-continuation (lambda (c) c)))))
          (yin yang))

    It should output @*@**@***@****@..., but I don't understand why; I'd expect it to output @*@*********... Can somebody explain in detail why the yin-yang puzzle works the way it works?

    Read the article

  • Automating telnet with powershell

    - by Guru Je
    OK, it's a little convoluted. I am trying to write a script to automate telnetting to a machine, execute a few commands, look at the output in the telnet window and, based on that output, send a few more commands. Thanks for any help
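    One commonly suggested direction, sketched below under the assumption that the target accepts a plain TCP session without real telnet option negotiation (the host, port, prompts and commands are all placeholders):

        # Open a raw TCP connection to the device's telnet port
        $client = New-Object System.Net.Sockets.TcpClient('192.168.0.10', 23)
        $stream = $client.GetStream()
        $writer = New-Object System.IO.StreamWriter($stream)
        $writer.AutoFlush = $true

        # Send a command, give the device a moment, then read back whatever it printed
        $writer.WriteLine('show status')
        Start-Sleep -Seconds 1
        $output = ''
        while ($stream.DataAvailable) { $output += [char]$stream.ReadByte() }

        # Branch on the output and send follow-up commands the same way
        if ($output -match 'ERROR') { $writer.WriteLine('reset') }

        $client.Close()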

    Read the article

  • Java Process "The pipe has been ended" problem

    - by Amit Kumar
    I am using the Java Process API to write a class that receives binary input from the network (say via TCP port A), processes it, and writes binary output to the network (say via TCP port B). I am using Windows XP. The code looks like this. There are two functions called run() and receive(): run is called once at the start, while receive is called whenever there is new input received via the network. run and receive are called from different threads. The run method starts an exe and gets the input and output streams of the exe. run also starts a new thread to write output from the exe to port B.

        public void run() {
            try {
                Process prc = // some exe is `start`ed using ProcessBuilder
                OutputStream procStdIn = new BufferedOutputStream(prc.getOutputStream());
                InputStream procStdOut = new BufferedInputStream(prc.getInputStream());
                Thread t = new Thread(new ProcStdOutputToPort(procStdOut));
                t.start();
                prc.waitFor();
                t.join();
                procStdIn.close();
                procStdOut.close();
            } catch (Exception e) {
                e.printStackTrace();
                printError("Error : " + e.getMessage());
            }
        }

    receive forwards the input received on port A to the exe:

        public void receive(byte[] b) throws Exception {
            procStdIn.write(b);
        }

        class ProcStdOutputToPort implements Runnable {
            private BufferedInputStream bis;

            public ProcStdOutputToPort(BufferedInputStream bis) {
                this.bis = bis;
            }

            public void run() {
                try {
                    int bytesRead;
                    int bufLen = 1024;
                    byte[] buffer = new byte[bufLen];
                    while ((bytesRead = bis.read(buffer)) != -1) {
                        // write output to the network
                    }
                } catch (IOException ex) {
                    Logger.getLogger().log(Level.SEVERE, null, ex);
                }
            }
        }

    The problem is that I am getting the following stack trace inside receive(), and prc.waitFor() returns immediately afterwards. The line number shows that the exception occurs while writing to the exe.

        The pipe has been ended
        java.io.IOException: The pipe has been ended
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:260)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
            at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
            at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
            at xxx.receive(xxx.java:86)

    Any advice about this will be appreciated.
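    "The pipe has been ended" on a write usually means the child process has already exited, so one hedged diagnostic step (a sketch, not the fix) is to merge the exe's stderr into its stdout when the process is built, so whatever the exe prints before dying becomes visible; the command line below is purely hypothetical:

        import java.io.IOException;

        public class PipeProbe {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Hypothetical command line; merging stderr into stdout makes the exe's
                // own error messages visible through getInputStream().
                ProcessBuilder pb = new ProcessBuilder("some.exe", "arg1");
                pb.redirectErrorStream(true);
                Process prc = pb.start();

                // Drain the combined output so the child cannot block on a full pipe.
                java.io.InputStream out = prc.getInputStream();
                int b;
                while ((b = out.read()) != -1) {
                    System.out.write(b);
                }
                System.out.flush();

                // A non-zero exit code here is usually why the write side saw
                // "The pipe has been ended".
                System.out.println("exe exited with " + prc.waitFor());
            }
        }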

    Read the article

  • WPF MediaElement source in C# 2010

    - by Alex Farber
    The sample from the book compiles and executes successfully in VS2008. The project contains the file Bear.wmv in the project directory. XAML: MediaElement Source="Bear.wmv"; Build Action = Content, Copy to Output Directory = Copy always. In C# 2008 this file is shown in the window. In C# 2010 Express the file is not shown. The output directory contains this file. How can this be fixed?

    Read the article

  • Force SSRS 2008 to use SSRS 2005 CSV rendering

    - by Kash
    We are upgrading our report server from SSRS 2005 to SSRS 2008 R2. I have an issue with CSV export rendering in SSRS 2008: the SUM of columns appears to the right of the detail values in 2008 instead of to the left as in 2005, as shown in the blocks below. 117 and 131 are the sums of Column2 and Column3 respectively.

        SSRS 2005 CSV Output
        Column2_1,Column3_1,Column2,Column3
        117,131,1,2
        117,131,1,2
        117,131,60,23
        117,131,30,15
        117,131,25,89

        SSRS 2008 CSV Output
        Column2,Column3,Column2_1,Column3_1
        1,2,117,131
        1,2,117,131
        60,23,117,131
        30,15,117,131
        25,89,117,131

    I understand that the CSV renderer has gone through major changes in SSRS 2008 R2, with support for charts and gauges and, more importantly, two modes: the default Excel mode and a Compliant mode. But neither mode helps fix this issue. The Compliant mode was supposed to be closest to that of 2005, but apparently it is not close enough for my case. My question: is there a way to force SSRS 2008 to fall back to a backward-compatibility mode for a report so that it exports into the 2005 CSV format?

    Solutions tried:

    a) Using 2005-based CRIs. Based on this article on ExecutionLog2, if SSRS 2008 R2 encounters a report whose auto-upgrade is not possible (e.g. reports that were built with 2005-based CustomReportItem controls), those particular reports will be processed with the old Yukon engine in a "transparent backwards-compatibility mode". It seems like it falls back to its previous version mode (2005) and attempts to render it. So I tried using a 2005-based barcode CustomReportItem and deployed it to a SSRS 2008 R2 report server, but it shows the same result as before, though it suppressed the barcode. This would be because SSRS 2008 R2 finds a way to suppress part of the report output and displays the rest. It would be great to find a 2005-based CRI that makes SSRS 2008 R2 process it with its old Yukon engine. Please note that quite possibly, even if it uses the "old Yukon processing engine", it might still use the new CSV renderer, hence it shows the same output. If that is true, then this option is moot.

    b) Using the XML renderer. We can use a custom XML renderer and then use XSLT to convert the XML to appropriate CSV, but this would mean that we need to convert all 200 of our reports. Hence this is not feasible.

    Please note that we do not have the option of having SSRS 2005 and SSRS 2008 R2 deployed side by side.

    Read the article

  • Problem with AquaTerm on Snow Leopard

    - by cheetah
    When i try to install AquaTerm on Snow leopard from MacPorts i got this: ---> Computing dependencies for aquaterm ---> Building aquaterm Error: Target org.macports.build returned: shell command "cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm" && /usr/bin/xcodebuild -target "AquaTerm" -configuration Deployment build OBJROOT=build/ SYMROOT=build/ MACOSX_DEPLOYMENT_TARGET=10.6 ARCHS=x86_64 SDKROOT= USER_APPS_DIR=/Applications/MacPorts FRAMEWORKS_DIR=/opt/local/Library/Frameworks" returned error 1 Command output: [WARN]Warning: The Copy Bundle Resources build phase contains this target's Info.plist file 'AquaTerm.framework-Info.plist'. CopyPlistFile build/Deployment/AquaTerm.framework/Versions/A/Resources/AquaTerm.framework-Info.plist AquaTerm.framework-Info.plist cd /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist AquaTerm.framework-Info.plist --outdir /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.framework/Versions/A/Resources error: can't exec '/Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist' (No such file or directory) Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist failed with exit code 71 Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copyplist failed with exit code 71 === BUILD NATIVE TARGET AquaTerm OF PROJECT AquaTerm WITH CONFIGURATION Deployment === Check dependencies Warning: Multiple build commands for output file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/help.html [WARN]Warning: Multiple build commands for output file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/help.html CopyTiffFile build/Deployment/AquaTerm.app/Contents/Resources/Cross.tiff English.lproj/Cross.tiff cd /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff English.lproj/Cross.tiff --outdir /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources error: can't exec '/Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff' (No such file or directory) Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff failed with exit code 71 Command /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copytiff failed with exit code 71 ** BUILD FAILED ** The following build commands failed: AQTFwk: CopyPlistFile /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.framework/Versions/A/Resources/AquaTerm.framework-Info.plist AquaTerm.framework-Info.plist AquaTerm: CopyTiffFile 
/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_aqua_aquaterm/work/aquaterm/build/Deployment/AquaTerm.app/Contents/Resources/Cross.tiff English.lproj/Cross.tiff (2 failures) Error: Status 1 encountered during processing. Before reporting a bug, first run the command again with the -d flag to get complete output. How can I solve this problem?

    Read the article

  • dividing double by double gives weird results - Java

    - by Aly
    Hi, I am trying to do 33.33333333333333/100.0 and get 0.333333333333333. However, when I run

        System.out.println(33.33333333333333/100.0);

    I get 0.33333333333333326 as the output. Similarly, when I run

        System.out.println(33.33333333333333/1000.0);

    I get 0.033333333333333326 as the output. Does anyone know why, and how I can get the correct value (without loss of decimal places)? Thanks
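    The trailing ...26 comes from binary floating point rather than from the division itself: neither 33.33333333333333 nor the exact quotient is representable as a double. If the extra digits matter, a hedged sketch using BigDecimal (which works in decimal) shows the difference; the class name is just for illustration:

        import java.math.BigDecimal;

        public class DivideDemo {
            public static void main(String[] args) {
                System.out.println(33.33333333333333 / 100.0);          // 0.33333333333333326 (as reported above)
                // A BigDecimal built from the string keeps the decimal digits exactly,
                // and dividing by 100 just shifts the decimal point.
                BigDecimal v = new BigDecimal("33.33333333333333");
                System.out.println(v.divide(new BigDecimal("100")));    // 0.3333333333333333
            }
        }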

    Read the article

  • Powershell: Get-Process Returns "Invalid" VM Size

    - by dewald
    I'm running PowerShell 2.0 on Windows XP SP3 and I execute:

        PS> ps firefox

    And it returns:

        Handles  NPM(K)    PM(K)    WS(K)  VM(M)    CPU(s)    Id  ProcessName
        -------  ------    -----    -----  -----    ------    --  -----------
            859      44   340972   351580    684  9,088.22  7744  firefox

    However, Windows Task Manager shows the following stats for firefox.exe:

        Mem Usage: 354,720 K
        VM Size:   347,322 K

    Why is the VM output from PowerShell 300 MB more than that output from Windows Task Manager?
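    The two tools are probably counting different things: the VM(M) column from Get-Process is the process's total virtual address space (Process.VirtualMemorySize), while Task Manager's "VM Size" on XP is closer to the private (committed) bytes. A hedged sketch for putting the individual counters side by side to confirm:

        Get-Process firefox |
            Select-Object Name,
                @{Name = 'VirtualMB';    Expression = { [math]::Round($_.VirtualMemorySize64 / 1MB) }},
                @{Name = 'PrivateMB';    Expression = { [math]::Round($_.PrivateMemorySize64 / 1MB) }},
                @{Name = 'WorkingSetMB'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) }}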

    Read the article
