Search Results

Search found 24226 results on 970 pages for 'team foundation build'.


  • Maven building for GoogleAppEngine, forced to include JDO libraries?

    - by James.Elsey
    Hi, I'm trying to build my application for Google App Engine using Maven. I've added the following to my pom, which should "enhance" my classes after building, as suggested in the DataNucleus documentation:

        <plugin>
          <groupId>org.datanucleus</groupId>
          <artifactId>maven-datanucleus-plugin</artifactId>
          <version>1.1.4</version>
          <configuration>
            <log4jConfiguration>${basedir}/log4j.properties</log4jConfiguration>
            <verbose>true</verbose>
          </configuration>
          <executions>
            <execution>
              <phase>process-classes</phase>
              <goals>
                <goal>enhance</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    According to the documentation on Google App Engine, you can choose between JDO and JPA; I've chosen JPA since I have used it in the past. When I try to build my project (before I upload to GAE) using mvn clean package, I get the following output:

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to resolve artifact.

        Missing:
        ----------
        1) javax.jdo:jdo2-api:jar:2.3-ec

          Try downloading the file manually from the project website.
          Then, install it using the command:
            mvn install:install-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/file

          Alternatively, if you host your own repository you can deploy the file there:
            mvn deploy:deploy-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

          Path to dependency:
            1) org.datanucleus:maven-datanucleus-plugin:maven-plugin:1.1.4
            2) javax.jdo:jdo2-api:jar:2.3-ec

        ----------
        1 required artifact is missing.

        for artifact:
          org.datanucleus:maven-datanucleus-plugin:maven-plugin:1.1.4

        from the specified remote repositories:
          __jpp_repo__ (file:///usr/share/maven2/repository),
          DN_M2_Repo (http://www.datanucleus.org/downloads/maven2/),
          central (http://repo1.maven.org/maven2)

        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 3 seconds
        [INFO] Finished at: Sat Apr 03 16:02:39 BST 2010
        [INFO] Final Memory: 31M/258M
        [INFO] ------------------------------------------------------------------------

    Any ideas why I should get such an error? I've searched through my entire source code and I'm not referencing JDO anywhere, so unless the App Engine libraries require it, I'm not sure why I get this message.
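
    A possible workaround (an assumption based on the repository list in the error above, not a verified fix): jdo2-api 2.3-ec was hosted in the DataNucleus repository rather than Maven central, and the error shows DN_M2_Repo configured only as an artifact repository. Since the missing artifact is a dependency of the plugin itself, declaring the same repository as a plugin repository may let Maven resolve it:

        <pluginRepositories>
          <pluginRepository>
            <id>DN_M2_Repo</id>
            <url>http://www.datanucleus.org/downloads/maven2/</url>
          </pluginRepository>
        </pluginRepositories>

    Failing that, the mvn install:install-file command printed in the error installs the jar into the local repository by hand.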

  • Updating PDB files without rebuilding.

    - by amit
    Hi All, Is there a way to update a PDB file with a new source location? I have a project that links against libraries built on another machine; they are debug builds that come with PDB files. I cannot set a breakpoint in the files that were compiled into those libs. The libs take more than 4 hours to build, so I don't want to build them on my machine. Is there a way to make the debugger use the new source paths? I am using VS 2005 Pro, C++. Thanks, Amit

  • How to detect filesystem changes in Java

    - by Alfred
    Hi all, I would like to know how to efficiently detect filesystem changes in Java. Say I have a file in a folder and modify that file; I would like Java to notify me about this change as soon as possible (no frequent polling, if possible). I could call java.io.File.lastModified every few seconds, but I don't like the sound of that solution at all.

        alfred@alfred-laptop:~/testje$ java -version
        java version "1.6.0_18"
        Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
        Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode)

    Many thanks, Alfred
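
    If moving beyond the Java 6 runtime shown above is an option, Java 7's java.nio.file.WatchService provides exactly this kind of event-driven notification without polling. A minimal sketch (the watched directory is taken from the question; adjust as needed):

        import java.nio.file.*;

        public class WatchDemo {
            public static void main(String[] args) throws Exception {
                Path dir = Paths.get("/home/alfred/testje");
                WatchService watcher = FileSystems.getDefault().newWatchService();
                // Watch the directory for modifications to its entries.
                dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
                while (true) {
                    WatchKey key = watcher.take(); // blocks until an event arrives
                    for (WatchEvent<?> event : key.pollEvents()) {
                        System.out.println(event.kind() + ": " + event.context());
                    }
                    if (!key.reset()) break; // directory no longer accessible
                }
            }
        }

    On Java 6 the practical alternatives are a polling library (e.g. Apache Commons IO's FileAlterationMonitor) or native hooks such as inotify via JNI.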

  • Image Viewer application, Image processing with Display Data.

    - by Harsha
    Hello All, I am working on an image viewer application and planning to build it in WPF. My images are usually larger than 3000x3500 pixels. After searching for a week, I got sample code from MSDN, but it is written in ATL COM. So I am planning to build the image viewer as follows: after reading the image, I will scale it down to my viewer size, which is around 1000x1000; let's call this scaled-down image data the display data. Once it is displayed, I will work only with this display data. For all image processing operations I will use the display data, and when the user chooses to save the image, I will apply all the operations to the original image data. My question is: is it OK to use the display data for showing and for the initial image processing operations?

  • software that meets all needs in a project

    - by taz
    Hello all, I have a couple of software projects that I want to run with my friends (max 10 people), privately (at least for now). But I'm kind of lost among software management systems; I am not even sure how to define my needs. What is the definition/name of the system/software that meets the needs listed below? Continuous integration? And please suggest a good ALL-IN-ONE instance of it:

    - project roadmap/planning
    - project resource (people) allocation
    - project issue & bug tracking
    - project mailing list
    - project forum
    - project wiki
    - source control server
    - source control client
    - repository change notifier client
    - build system (like SCons)
    - nightly build automation
    - IDE integration (VS)

    Note: I tried Redmine and liked it, but found it kind of slow. All-in-one candidates will be the most appreciated, but if your suggestion includes more than 3 pieces of software, please suggest ones that work together painlessly. Thanks in advance.

  • Using TFSBuild to publish ClickOnce files that include prerequisites

    - by icancsharp
    I have declared an MSBuild tag in my TFSBuild project in order to publish ClickOnce files for my project. WSE 3.0 and .NET 3.0 are both prerequisites for the application, so in Visual Studio I have ticked those as prerequisites. When I build from Visual Studio, it creates a setup.exe file that I can publish to my web site. When you browse to this setup.exe file, it installs WSE 3.0 and .NET 3.0 and then continues to install my application, which works well. If I get TFS to create the ClickOnce files using the MSBuild tag in the TFSBuild file, it creates the setup.exe file, which I can publish to my website in the same way (along with all the other ClickOnce files). When I browse to that setup.exe, however, the prerequisites don't get installed and therefore my program does not run. Does anyone know how to get TFS to build a correct setup.exe file that properly bootstraps my prerequisites?
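
    Two things worth checking (assumptions based on how the bootstrapper normally works, not a confirmed diagnosis): the prerequisite packages (WSE 3.0, .NET 3.0) must exist under the SDK's Bootstrapper\Packages folder on the build agent, and the publish-related properties that the VS Publish tab sets must reach the TFS build. In TFSBuild.proj that might look like:

        <PropertyGroup>
          <!-- Assumption: forward the same settings the Publish tab would use. -->
          <CustomPropertiesForBuild>BootstrapperEnabled=true;BootstrapperComponentsLocation=HomeSite</CustomPropertiesForBuild>
        </PropertyGroup>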

  • .apk signing fails even with Sun JDK (java.lang.NoClassDefFoundError: com.android.jarutils.DebugKeyProvider)

    - by ianweller
    I'm having an interesting problem signing my Android application, whether or not I'm using a debug key. Regardless of the JDK I have installed to /usr/bin/{java,keytool,jarsigner} (OpenJDK or Sun's JDK), it will always give the following output after compiling successfully:

        -package-debug-sign:
        [apkbuilder] Creating RemoteNotify-debug-unaligned.apk and signing it with a debug key...

        BUILD FAILED
        /home/ianweller/AndroidSDK/platforms/android-7/templates/android_rules.xml:281: The following error occurred while executing this line:
        /home/ianweller/AndroidSDK/platforms/android-7/templates/android_rules.xml:152: java.lang.NoClassDefFoundError: com.android.jarutils.DebugKeyProvider

    The application was built and signed just fine by Eclipse with the ADT plugin (even without Sun's JDK installed). I'm on Fedora 12. I want to get my code out of Eclipse and into a git repository, but being unable to build it from ant will not allow this to happen.
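
    A first step worth trying (an assumption: the project's ant build files may predate or mismatch the installed SDK tools, which is a common cause of NoClassDefFoundError on SDK classes like com.android.jarutils.DebugKeyProvider): regenerate build.xml and the local properties against the current SDK with the SDK's own tool:

        android update project --path /path/to/RemoteNotify

    The path above is a placeholder for the project checkout.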

  • Embedding a TTF as an embedded resource: can't reference it

    - by HoNgOuRu
    Hi, I've just added a TTF file to the project (C# 2008 Express) as "file", with the build action set to embedded resource. I'm having problems when trying to set this font like this (I know the next line is wrong...):

        this.label1.Font = AlarmWatch.Properties.Resources.Baby_Universe;

        Error 1: Cannot implicitly convert type 'byte[]' to 'System.Drawing.Font'  C:\Users\hongo\Documents\Visual Studio 2008\Projects\AlarmWatch\AlarmWatch\Form1.Designer.cs  57  32  AlarmWatch

    I know it is byte[] because I set the build action to embedded resource. Compare with this line, which is correct:

        this.label1.Font = new System.Drawing.Font("OCR A Extended", 24F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0)));

    How can I set this.label1 to use the new font? Thanks
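
    A common approach (a sketch, assuming the resource really holds the raw TTF bytes) is to feed the bytes to a System.Drawing.Text.PrivateFontCollection and build the Font from the resulting FontFamily:

        using System;
        using System.Drawing;
        using System.Drawing.Text;
        using System.Runtime.InteropServices;

        static class EmbeddedFonts
        {
            // Keep the collection alive as long as any Font built from it is in use.
            static readonly PrivateFontCollection Fonts = new PrivateFontCollection();

            public static Font Load(byte[] fontData, float size)
            {
                // AddMemoryFont wants the TTF bytes in unmanaged memory;
                // GDI+ copies the data, so the buffer can be freed afterwards.
                IntPtr ptr = Marshal.AllocCoTaskMem(fontData.Length);
                try
                {
                    Marshal.Copy(fontData, 0, ptr, fontData.Length);
                    Fonts.AddMemoryFont(ptr, fontData.Length);
                }
                finally
                {
                    Marshal.FreeCoTaskMem(ptr);
                }
                // The family just added is the last one in the collection.
                return new Font(Fonts.Families[Fonts.Families.Length - 1], size, FontStyle.Regular);
            }
        }

    Usage would then be: this.label1.Font = EmbeddedFonts.Load(AlarmWatch.Properties.Resources.Baby_Universe, 24F);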

  • How can I use Code Contracts in a C++/CLI project?

    - by Daniel Wolf
    I recently stumbled upon Code Contracts and have started using them in my C# projects. However, I also have a number of projects written in C++/CLI. For C# and VB, Code Contracts offer a handy configuration panel in the project properties dialog. For a C++/CLI project, there is no such panel. From the documentation, I got the impression that adding Code Contracts support to a C++/CLI project should be a simple matter of calling some external tools as part of the build process (namely ccrefgen.exe, cccheck.exe, and ccrewrite.exe). However, the number of command line options and restrictions concerning the call sequence have me somewhat intimidated. Can anybody point me to a simple way to run the Code Contracts tools as an automated part of the build process in Visual Studio?

  • I do not write tests. Am I stupid?

    - by Josh Stodola
    I've done a little bit of reading on unit testing and TDD, and I've never seriously considered writing tests to such a precise extent. Granted, I am not working on any projects that are ridiculously huge. If all I build are small apps, am I stupid for not writing tests? Edit: To clarify, when I say "small apps", I mean apps that are not going to control a person's life and/or their belongings. I generally build things that are supposed to make people's lives easier and make them more efficient.

  • phing FtpDeploy "connection to host failed"

    - by Jorre
    I'm getting the following error when trying to deploy a ZIP file to a remote FTP server. I tried connecting to the server using an FTP client (FileZilla) and all goes well. Also, connecting to a public FTP like ftp.belnet.be works fine. I'm trying to send the file to a vsftpd server behind a router using port forwarding. Again, this works fine from any location using FileZilla; Phing is not connecting, though...

        BUILD FAILED
        /deployment/build.xml:60:12: Could not connect to FTP server x.x.x.x on port 21: Connection to host failed

        Total time: 2 minutes 30.09 seconds
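
    One pattern that fits these symptoms (an assumption, not a confirmed diagnosis): FileZilla defaults to passive mode, while a scripted FTP client may attempt active mode, which often fails through NAT/port forwarding. If the Phing release in use supports a passive switch on its FTP deploy task (check the FtpDeployTask docs for your version; the passive attribute below is an assumption), a sketch would be:

        <ftpdeploy host="x.x.x.x" port="21"
                   username="${ftp.user}" password="${ftp.pass}"
                   dir="/upload" passive="true">
          <fileset dir="dist">
            <include name="release.zip"/>
          </fileset>
        </ftpdeploy>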

  • Installing bitarray in Python 2.6 on Windows

    - by John Fouhy
    I would like to install bitarray on Windows running Python 2.6. I have MinGW32 installed, and I have C:\Python26\Lib\distutils\distutils.cfg set to:

        [build]
        compiler = mingw32

    If I type, in a cmd.exe window:

        C:\Documents and Settings\john\My Documents\bitarray-0.3.5>python setup.py install

    I get (normal Python messages skipped):

        C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python26\include -IC:\Python26\PC -c bitarray/_bitarray.c -o build\temp.win32-2.6\Release\bitarray\_bitarray.o
        bitarray/_bitarray.c:2197: error: initializer element is not constant
        bitarray/_bitarray.c:2197: error: (near initialization for `BitarrayIter_Type.tp_getattro')
        bitarray/_bitarray.c:2206: error: initializer element is not constant
        bitarray/_bitarray.c:2206: error: (near initialization for `BitarrayIter_Type.tp_iter')
        bitarray/_bitarray.c:2232: error: initializer element is not constant
        bitarray/_bitarray.c:2232: error: (near initialization for `Bitarraytype.tp_getattro')
        bitarray/_bitarray.c:2253: error: initializer element is not constant
        bitarray/_bitarray.c:2253: error: (near initialization for `Bitarraytype.tp_alloc')
        bitarray/_bitarray.c:2255: error: initializer element is not constant
        bitarray/_bitarray.c:2255: error: (near initialization for `Bitarraytype.tp_free')
        error: command 'gcc' failed with exit status 1

    Can anyone help?
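
    For what it's worth, "initializer element is not constant" on tp_* slots is a pattern some GCC builds hit when a static initializer references functions imported from a DLL (the CPython API on Windows). The usual workaround, assuming that is the cause here, is to zero those slots in the static PyTypeObject definition and assign them at runtime in the module init function; a sketch of the pattern (slot names taken from the errors above, the rest of the module is assumed):

        /* In the static PyTypeObject initializers, put 0 in the offending
           slots (tp_getattro, tp_iter, tp_alloc, tp_free), then fill them
           in before the types are first used: */
        PyMODINIT_FUNC
        init_bitarray(void)
        {
            Bitarraytype.tp_getattro      = PyObject_GenericGetAttr;
            Bitarraytype.tp_alloc         = PyType_GenericAlloc;
            Bitarraytype.tp_free          = PyObject_Del;
            BitarrayIter_Type.tp_getattro = PyObject_GenericGetAttr;
            BitarrayIter_Type.tp_iter     = PyObject_SelfIter;

            /* ... remainder of the module's init code unchanged ... */
        }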

  • How to stop bash from inserting ' in the output

    - by ~danieljamesthomas
    Hi everybody, I'm rather new to bash and somehow just haven't found out what I'm doing wrong here (this is a small bash script calling my generator):

        if [ -n $folder ]; then
            $zorbalocation -q $generator -f -e files=\"$lFiles\" -e folder=\"lFolder\"
        else
            $zorbalocation -q $generator -f -e files=\"$lFiles\" -e folder=\".\"
        fi

    Now, obviously I want bash to execute these commands, depending on the content of folder. But for some reason bash insists on putting apostrophes (') around files=... and folder=..., so it tries to execute

        ../../../zorba/build/bin/zorba -q generator.xq -f -e 'files="test.xqlib"' -e 'folder="."'

    instead of

        ../../../zorba/build/bin/zorba -q generator.xq -f -e files="test.xqlib" -e folder="."

    Does anybody know why bash insists on inserting the apostrophes there? A nice day to everyone, Danny
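
    The likely explanation (standard bash quoting behaviour; the trace format suggests the output above comes from set -x or similar): bash is not adding quote characters to the command it runs. The backslash-escaped quotes in files=\"$lFiles\" become literal " characters inside the argument, and the trace display then wraps such arguments in apostrophes to show their boundaries; it is the literal quotes that break the call. Quoting the expansions directly avoids them (this sketch also fixes what looks like a typo, lFolder missing its $, and quotes the [ -n ... ] test, which is unsafe unquoted):

        #!/bin/bash
        # Quote the expansions themselves; no escaped quotes needed.
        if [ -n "$folder" ]; then
            "$zorbalocation" -q "$generator" -f -e files="$lFiles" -e folder="$lFolder"
        else
            "$zorbalocation" -q "$generator" -f -e files="$lFiles" -e folder="."
        fi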

  • CMS for custom application

    - by RH
    We are building a custom application using LAMP, with the P being PHP. We also need a CMS to manage various aspects of the site. The two options for the CMS are:

    - Build a complete custom CMS from scratch.
    - Extend an existing open source CMS to fit our needs. This way we can use some of the features out of the box and build the others ourselves.

    I would like your feedback on the following:

    - What are your experiences with option number 2?
    - Which CMS would you recommend that we can further customize and extend for our use?
    - What are the best ways to integrate a custom application with an external CMS?

  • Error after installing scala plugin of netbeans

    - by ghedas
    I installed the Scala plugin on my NetBeans following the instructions on this page: http://wiki.netbeans.org/Scala68v1#Scala_Plugins_for_NetBeans_6.8_v1.x_.28RC2.29. Although everything completed correctly step by step, when I make an empty project (Hello world!), the project has an error. The empty project is:

        package scalaapplication1

        object Main {

          /**
           * @param args the command line arguments
           */
          def main(args: Array[String]): Unit = {
            println("Hello, world!")
          }
        }

    and the console error report is:

        ...\NetBeansProjects\ScalaApplication2\nbproject\build-impl.xml:403: The following error occurred while executing this line:
        ...\NetBeansProjects\ScalaApplication2\nbproject\build-impl.xml:236: scalac doesn't support the "fork" attribute

    Is there any suggestion about it?

  • How do I tell MSBuild to execute command-line calls in ASCII, not Unicode

    - by Ben L
    I'm attempting to target VC7.1 (Visual Studio 2003 SP1) from Visual Studio 2010. I'm so close to getting it to work, but when I build, I get this error:

        1>------ Build started: Project: AnExample, Configuration: Release Win32 ------
        1>  Microsoft (R) 32-bit C/C++ Standard Compiler Version 13.10.6030 for 80x86
        1>  Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.
        1>  cl ÿ_/
        1>cl : Command line warning D4024 : unrecognized source file type 'ÿ_/', object file assumed
        1>  Microsoft (R) Incremental Linker Version 7.10.6030
        1>  Copyright (C) Microsoft Corporation. All rights reserved.
        1>  /out:.exe
        1>  ¦/
        1>LINK : fatal error LNK1181: cannot open input file ' ¦/.obj'

    I know this is unsupported, but I thought I'd give it a go. Does anyone know how to force the output from MSBuild to be ASCII, or whether this is even the problem? (The garbage arguments look like a Unicode response file being read as ANSI.) There were some errors like this years ago related to the DDK, according to some other forums. Thanks.

  • How to avoid manual editing of manifest file

    - by Atara
    My application uses an isolated ActiveX control (outer) that depends on another ActiveX control (inner); both use registration-free COM. The generated manifest file contains only the information for the outer ActiveX (probably because VS cannot know that the outer ActiveX uses the inner one). When I rebuild my project, I always need to manually add the information for the inner ActiveX to the manifest file; otherwise the application only shows the outer control, without the inner one. Is there a way to tell Visual Studio (2008) that I do not want it to regenerate the manifest file on each build? Would I have such an option if I upgraded to VS 2010? Thanks, Atara
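
    One way to avoid hand-editing the generated file after every build (a sketch; the file names, version, and CLSID below are placeholders, not values from the project): keep the inner control's registration-free COM entries in manifests that the build does not regenerate. The inner control gets its own side-by-side manifest next to its ocx/dll:

        <!-- Inner.sxs.manifest, alongside the inner control (placeholder names) -->
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity name="Inner.sxs" version="1.0.0.0" type="win32" />
          <file name="Inner.ocx">
            <comClass clsid="{00000000-0000-0000-0000-000000000000}"
                      threadingModel="Apartment" />
          </file>
        </assembly>

    and the application's manifest references it as a dependent assembly:

        <dependency>
          <dependentAssembly>
            <assemblyIdentity name="Inner.sxs" version="1.0.0.0" type="win32" />
          </dependentAssembly>
        </dependency>

    In a C#/VB project, checking app.manifest into the project and adding the dependency block there should keep it from being overwritten on rebuild; whether VS 2008's Isolated COM generation can be taught about the inner control directly is, as far as I know, exactly the gap being described.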

  • asp.net free webcontrol to display crosstab or pivot reports with column and row grouping, subtotals

    - by dev-cu
    Hello, I want to develop some crosstab (also known as pivot) reports in ASP.NET, with the x-axis and y-axis being dynamic and allowing grouping by row and column. For example: products on the y-axis and dates on the x-axis, with the body holding the number of sales of a given product on a given date; if the x-axis dates are years, I want subtotals for each month for a product (row) and subtotals of sales of all products per date (column). I know there are products available to build reports, but I am using MySQL, so Reporting Services is not an option. It's not necessary for the client to build additional reports, so I think the simplest solution is a control to display such information, rather than Crystal Reports (which is not free) or something more complex. I want to know if there is a free control available to reach my goal. So, does anybody know of such a control, or have a different idea? Thanks in advance.

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics of and justification for this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections.

    Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "Big Data" challenges to be faced, and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity; the early or retrospective violation of regulations, laws or corporate policies and procedures; emerging risks; and weakening controls? Not exactly the stuff of James Bond, to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.

    So what is Big Data, and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:

    - Traditional enterprise data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
    - Machine-generated/sensor data – includes Call Detail Records ("CDR"), weblogs and trading systems data.
    - Social data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define Big Data:

    - Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise, including live data from packaged and custom applications – for example, app servers, web servers, databases, networks, virtual machines, telecom equipment, and much more.
    - Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.
    - Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resulting information.
    - Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action.

    For example, customer service calls and emails have millions of useful data points and have long been a source of information for GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data: "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a Big Data investment in GRC clearly falls into this category.

    However, to make the most of Big Data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC Big Data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, given the sheer amount of information organizations deal with, the need to quickly access, classify, protect and manage that information can become a key issue from a legal as well as a technical or operational standpoint. However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a Big Data project. To make a Big Data GRC initiative work and deliver the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out, and should be explored, are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise.

    Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big Data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of Big Data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.

    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

    - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to Big Data.
    - Conduct a Big Data readiness and information architecture maturity assessment.
    - Develop a future-state Big Data architecture, including views across all relevant architecture domains: business, applications, information, and technology.
    - Provide initial guidance on Big Data candidate selection for migrations or implementation.
    - Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business.
    - Provide recommendations for practical, effective data governance, data quality management, and information lifecycle management to maintain a well-managed environment.
    - Conduct an executive workshop with recommendations and next steps.

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions. Big Data is here to stay and risk management certainly is not going anywhere; ultimately, the financial services organizations that embrace Big Data's potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it.

    Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

  • How do I give MacPorts privileges?

    - by cojadate
    I tried to install the PostgreSQL server development libraries using MacPorts and got the following:

        Warning: MacPorts running without privileges. You may be unable to complete certain actions (e.g. install).
        ---> Computing dependencies for postgresql-server-devel
        ---> Dependencies to be installed: postgresql-devel
        ---> Building postgresql-devel
        Error: Target org.macports.build returned: shell command failed
        Error: The following dependencies failed to build: postgresql-devel
        Error: Status 1 encountered during processing.
        To report a bug, see <http://guide.macports.org/#project.tickets>

    So I guess that means I need to run MacPorts with privileges and try again. Unfortunately, I've no idea how to give MacPorts privileges. I'm running OS X 10.6.3.
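
    For a default MacPorts install into /opt/local, the standard practice is simply to run port under sudo so it can write there (whether that also cures this particular postgresql-devel build failure is a separate question; if not, cleaning the failed build before retrying is a reasonable next step):

        sudo port clean postgresql-devel
        sudo port install postgresql-server-devel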

  • 3d user experience with HTML5 and Javascript

    - by chako
    I have to build a 3D user experience with HTML5 and any JS library that provides such functionality. The 3D scene consists of a cylindrical pipe and a surface; it supports 360-degree rotation and can zoom in and out. As the user selects a parameter, the value of that parameter at various depths of the pipe below the surface should be displayed. I've searched for HTML5 3D and JS libraries and found that three.js could help. I also found this useful presentation on an HTML 3D engine: http://projects.mariusgundersen.net/OnGameStart/#1. But as this is my first time with HTML5 3D modeling, how should I start? What parameters should be considered? Which tools and libraries would best fit such requirements? I would like to create a 3D model using HTML5 and a JS 3D engine as shown in the first image.
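
    As a starting point, a minimal three.js scene containing a cylinder (a sketch against the stock three.js API; the sizes and camera distance are arbitrary):

        // Minimal three.js scene: a rotating cylinder ("pipe") to build on.
        var scene = new THREE.Scene();
        var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
        camera.position.z = 30;

        var renderer = new THREE.WebGLRenderer();
        renderer.setSize(window.innerWidth, window.innerHeight);
        document.body.appendChild(renderer.domElement);

        // CylinderGeometry(radiusTop, radiusBottom, height, radialSegments)
        var pipe = new THREE.Mesh(
            new THREE.CylinderGeometry(3, 3, 20, 32),
            new THREE.MeshNormalMaterial()
        );
        scene.add(pipe);

        (function animate() {
            requestAnimationFrame(animate);
            pipe.rotation.y += 0.01; // stand-in for user-driven 360-degree rotation
            renderer.render(scene, camera);
        })();

    For mouse-driven rotation and zoom, the OrbitControls helper shipped with the three.js examples is the usual choice; parameter values at depth could then be rendered as textures or overlay sprites along the cylinder.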

  • hackage package dependencies and future-proof libraries

    - by yairchu
    In the dependencies section of a cabal file:

        Build-Depends: base >= 3 && < 5, transformers >= 0.2.0

    should I be doing something like

        Build-Depends: base >= 3 && < 5, transformers >= 0.2.0 && < 0.3.0

    (putting upper limits on the versions of packages I depend on) or not? I'll use a real example: my "List" package on Hackage (List monad transformer and class).

    - If I don't put the limit, my package could be broken by a change in transformers.
    - If I do put the limit, a user on a newer version of transformers will not be able to use lift and liftIO with ListT, because it's only an instance of those classes for transformers-0.2.x.

    I guess that applications should always put upper limits so that they never break, so this question is only about libraries: shall I use upper version limits on dependencies or not?

