Search Results

Search found 38088 results on 1524 pages for 'large scale project'.

  • Good text editors or viewers for large log files

    - by Kristopher Johnson
    Log files and other textual data files are often tens or hundreds of megabytes in size, and some editors choke when you try to open something so large. What are some good applications for viewing large files? Bonus points for apps that can open compressed files, search for things with regular expressions, parse output lines, etc.
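
    For scripted searches, a streaming approach keeps memory flat regardless of file size. A minimal Python sketch of that idea, which also handles gzipped logs and regular expressions (the file name and pattern are placeholders):

        import gzip
        import re
        import sys

        PATTERN = re.compile(rb"ERROR|Exception")   # example pattern
        path = sys.argv[1] if len(sys.argv) > 1 else "app.log.gz"  # placeholder
        opener = gzip.open if path.endswith(".gz") else open

        # iterate line by line: constant memory even for multi-GB files
        with opener(path, "rb") as f:
            for lineno, line in enumerate(f, 1):
                if PATTERN.search(line):
                    sys.stdout.buffer.write(b"%d: %s" % (lineno, line))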

    Read the article

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I have trouble getting the maximum throughput out of my setup. The hardware is as follows:

    dual quad-core AMD Opteron 2376 processors
    16 GB DDR2 ECC RAM
    dual Adaptec 52245 RAID controllers
    48 1 TB SATA drives set up as two RAID-6 arrays (256 KB stripe), plus spares

    Software: plain vanilla 2.6.32.25 kernel, compiled for AMD64, optimized for NUMA; Debian Lenny userland. Benchmarks run: disktest, bonnie++, dd, etc. All give the same results; no discrepancy here. IO scheduler used: noop. Yeah, no trick here.

    Up until now I basically assumed that striping (RAID 0) across several physical devices should increase performance roughly linearly. However, this is not the case here: each RAID array achieves about 780 MB/s write, sustained, and 1 GB/s read, sustained. Writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s. However, when I stripe both arrays together, using either mdadm or LVM, the performance is about 850 MB/s writing and 1.4 GB/s reading: at least 30% less than expected! Running two parallel writer or reader processes against the striped array doesn't improve the figures; in fact, it degrades performance even further.

    So what's happening here? I basically ruled out bus or memory contention, because when I run dd on both arrays simultaneously, aggregate write speed actually reaches 1.5 GB/s and read speed tops 2 GB/s, so it's not the PCIe bus, and I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. And I also get exactly the same performance using either LVM striping or md striping. What's wrong? What's preventing a process from reaching the maximum possible throughput? Is Linux striping defective? What other tests could I run?
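
    As a further test along the lines of the parallel dd runs above, here is a rough Python sketch that reads from both arrays concurrently and reports per-device and aggregate throughput. The device names are placeholders, and note that without O_DIRECT the page cache can inflate the numbers (dd with iflag=direct remains the more rigorous tool):

        import os
        import threading
        import time

        DEVICES = ["/dev/md0", "/dev/md1"]  # hypothetical array device names
        BLOCK = 1 << 20                     # 1 MiB per read
        TOTAL = 4 << 30                     # read 4 GiB per device

        def reader(dev, results, idx):
            fd = os.open(dev, os.O_RDONLY)
            done = 0
            t0 = time.time()
            while done < TOTAL:
                buf = os.read(fd, BLOCK)
                if not buf:
                    break
                done += len(buf)
            results[idx] = done / (time.time() - t0) / 1e6  # MB/s
            os.close(fd)

        results = [0.0] * len(DEVICES)
        threads = [threading.Thread(target=reader, args=(d, results, i))
                   for i, d in enumerate(DEVICES)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("per-device MB/s:", results, "aggregate:", sum(results))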

    Read the article

  • Use Excel Color Scale Formatting with Text

    - by stumpylog
    I use Excel sheets to track the status of tasks through a set of discrete statuses. I'd like to be able to format these automatically, with the start being red, the end being green, and the statuses in between blending through the combination colors:

    Status1 (red)
    Status2 (more red than green)
    Status3 (more green than red)
    Status4 (green)

    The "Color Scales" option under Conditional Formatting seems like it could be made to work, but it wants numbers. So, my question: can it be done, using conditional formatting or other formulas, to achieve the desired effect?
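
    A common workaround is a numeric helper column: map each status to its position (in Excel itself, something like =MATCH(A2,{"Status1";"Status2";"Status3";"Status4"},0)) and apply the color scale to that column. A minimal sketch of the same idea via openpyxl, with hypothetical status names, sheet layout and ranges:

        from openpyxl import Workbook
        from openpyxl.formatting.rule import ColorScaleRule

        ORDER = ["Status1", "Status2", "Status3", "Status4"]  # hypothetical names

        wb = Workbook()
        ws = wb.active
        # statuses in column A, numeric helper value in column B
        for row, status in enumerate(["Status1", "Status3", "Status4"], start=2):
            ws.cell(row=row, column=1, value=status)
            ws.cell(row=row, column=2, value=ORDER.index(status) + 1)

        # red-to-green scale over the helper column's numeric range
        rule = ColorScaleRule(start_type="num", start_value=1, start_color="FFFF0000",
                              end_type="num", end_value=len(ORDER), end_color="FF00FF00")
        ws.conditional_formatting.add("B2:B100", rule)
        wb.save("status.xlsx")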

    Read the article

  • methods for preventing large scale data scraping from REST api

    - by Simon Kenyon Shepard
    I know the immediate answer to this is going to be that there is no 100% reliable method of doing this. But I'd like to create a question that details the different possibilities, the difficulty of implementing them, and their success rates, ranging from simple software IP/request-speed analysis to sophisticated high-end soft/hardware tools, e.g. neural networks, with the goal of predicting and preventing bogus requests and attempts to scrape the service. Many thanks.
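
    At the simple end of that spectrum (IP/request-speed analysis), a per-IP token bucket is a common first line of defense. A minimal Python sketch, with made-up rate and burst thresholds:

        import time
        from collections import defaultdict

        RATE = 5.0    # tokens refilled per second (example value)
        BURST = 20.0  # bucket capacity (example value)

        buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

        def allow(ip: str) -> bool:
            b = buckets[ip]
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at the burst size
            b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
            b["last"] = now
            if b["tokens"] >= 1.0:
                b["tokens"] -= 1.0
                return True
            return False  # throttle, serve a CAPTCHA, or block

        # usage inside a request handler:
        # if not allow(request_ip): return 429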

    Read the article

  • Windows 2008 R2 large file copy causes Hyper-V Manager to stop responding

    - by maryeileen
    I'm using the EXPORT feature in Hyper-V to move a large virtual machine (VM) over a 1 Gbps network from a Windows 2008 box to a Windows 2008 R2 box (200 GB), and the copy is so intense that I get a "Do Not Enter" icon on my destination Hyper-V Manager. Is this expected? Is there another way to get a large file across the network and minimize this intense I/O effect? Has anyone else ever seen that Do Not Enter sign? The other VMs are functional but slow, but I'm guessing that is expected.

    Read the article

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I've been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, and updated with daily replication diffs. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers, and I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid it would be more than is necessary for this usage: a very high read-to-write ratio, where all writes happen at the same exact time every day. My questions are simple: What would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.
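
    One hedge on the premise: since version 9.0, PostgreSQL has shipped built-in streaming replication with hot standby, which may be closer to what a read-mostly workload like this needs than middleware. A tiny psycopg2 sketch (hypothetical connection string) for watching replay lag on a 9.1+ standby:

        import psycopg2

        # hypothetical DSN for a hot-standby replica
        conn = psycopg2.connect("host=standby.example dbname=osm user=monitor")
        cur = conn.cursor()
        # how far the standby's replay lags behind the primary
        cur.execute("SELECT now() - pg_last_xact_replay_timestamp()")
        print("replay lag:", cur.fetchone()[0])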

    Read the article

  • Designing a web application to scale

    - by Fahim Akhter
    Hi. While designing a web application (a Facebook application, to be precise) which can spike and grow rapidly because of its virality, and which is write-intensive, what points should one keep in mind while designing the DB? For example, what should I leave room for if I later need to shard or add a master/slave combination (with memcached)? Consider that I am using a relational database, MySQL.
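
    One concrete thing to leave room for is a level of indirection between a record's key and the server it lives on, so callers never hard-code a single database. A toy Python sketch (the DSN names are placeholders):

        SHARDS = ["mysql-a"]  # hypothetical DSNs; grow this list when sharding

        def shard_for(user_id: int) -> str:
            # modulo routing is the simplest scheme; note that growing the
            # list remaps most keys, which is why consistent hashing is
            # often preferred once resharding becomes routine
            return SHARDS[user_id % len(SHARDS)]

        print(shard_for(12345))  # everything routes to "mysql-a" until more shards exist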

    Read the article

  • Server monitoring for medium scale UNIX network

    - by nbartolomeo
    I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which is working reasonably well, but it is becoming harder to keep track of which alerts are sent out for which servers. Features I'd like to see: easy configuration of servers, and the ability to monitor CPU, network, memory, and specific processes. I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all ~200 of our servers, and without installing a plugin into each agent I won't be able to monitor processes.

    Read the article

  • deploy git project and permission issue

    - by nixer
    I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible place via a post-receive hook. I have the following hook content:

        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        exec chmod -R 750 $WWW_ROOT
        exec chown -R www-data:www-data $WWW_ROOT
        echo "finished"

    The hook can't finish. The error message

        chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted

    means that git does not have enough rights to do it. The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite, and with these permissions Redmine (running under the www-data user) can't access the git repository to fetch changes. The whole project should be placed in /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data. What changes should I make so that the post-receive hook works properly in git, and Redmine works with the repository? What I did was:

        # id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
        # id gitolite
        uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)

    but that didn't help. I want Apache (viewing the project), Redmine (reading the project's source files under git) and git (deploying to the www-data-accessible path) to all work without problems. What should I do?

    Read the article

  • Saving table yields "Record is too large" in Access

    - by C. Ross
    I have an Access database that I gave to a user (shame on my head). They were having trouble with some data being too long, so I suggested changing several text fields to memo fields. I tried this in my copy and it worked perfectly, but when the user tries it, they get a "Record is too large" message box on saving the modified table design. Obviously the same record is not too large in my database, so why would it be in theirs?

    Read the article

  • Log transport and aggregation at scale

    - by markdrayton
    How're you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into 3 components:

    1) Message transport. The classic way is to use syslog to log messages to a remote host. This works fine for applications that log into syslog, but is less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program that sends the messages using syslog, or writing something that will grep the local files and send the output to the central syslog host. However, if we go to the trouble of writing tools to get messages into syslog, would we be better off replacing the whole lot with something like Facebook's Scribe, which offers more flexibility and reliability than syslog?

    2) Message aggregation. Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error, but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit. One approach to solving this is to aggregate multiple messages of the same type into one on each host, send the messages to a central server, and then aggregate messages of the same kind into one overall event. SER can do this, but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working and had to constantly look up the logic SER uses to correlate events. It's powerful but tricky stuff: I need something which my colleagues can pick up and use in the shortest possible time. SER rules don't meet that requirement.

    3) Generating alerts. How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios?

    So, how're you solving this problem? I don't expect an answer on a plate; I can work out the details myself, but some high-level discussion on what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible, and as such we miss a lot of stuff we shouldn't. Updated: we're already using Nagios for monitoring, which is great for detecting down hosts / testing services / etc., but less useful for scraping log files. I know there are log plugins for Nagios, but I'm interested in something more scalable and hierarchical than per-host alerts.
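
    For the aggregation step, the collapse-repeats-within-a-window logic described in part 2 is small enough to sketch. A toy Python version with an arbitrary five-minute window (the threshold and keying are illustrative):

        import time
        from collections import defaultdict

        WINDOW = 300.0  # seconds; an example threshold
        state = defaultdict(lambda: {"count": 0, "start": 0.0})

        def ingest(host: str, kind: str, message: str) -> None:
            now = time.time()
            s = state[(host, kind)]
            if s["count"] == 0 or now - s["start"] > WINDOW:
                if s["count"] > 1:
                    print(f"[summary] {kind} on {host} repeated {s['count']} times")
                print(message)      # emit the first occurrence of a new window
                s["count"], s["start"] = 1, now
            else:
                s["count"] += 1     # suppress duplicates inside the window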

    Read the article

  • MySQL reclaim index space after large delete?

    - by cdunn
    After performing a large delete in MySQL, I understand you need to run a null ALTER to reclaim disk space. Is this also true for reclaiming index space? We have tables using 10 GB of index space, and we have deleted/archived large chunks of this data, but we are unsure whether we need to rebuild the table in order to decrease the size of the index. Can anyone offer any advice? We are trying to avoid rebuilding the table, since it would take quite a while and lock the table. Thanks!
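
    For what it's worth, on InnoDB the null ALTER rebuilds the table together with its indexes, so data and index space are reclaimed in one pass (returned to the OS only with innodb_file_per_table). A one-statement sketch via the MySQLdb driver; the connection details and table name are placeholders:

        import MySQLdb  # the MySQL-python / mysqlclient driver

        conn = MySQLdb.connect(host="localhost", user="user",
                               passwd="secret", db="mydb")
        cur = conn.cursor()
        # a "null ALTER": same engine, forces a full table + index rebuild
        cur.execute("ALTER TABLE big_table ENGINE=InnoDB")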

    Read the article

  • Avoiding circular project/assembly references in Visual Studio with statically typed dependency configuration

    - by svnpttrssn
    First, I want to say that I am not interested in debating any non-helpful "answers" to my question with suggestions to put everything in one assembly; there is no need for anyone to provide web pages such as the one titled "Separate Assemblies != Loose Coupling".

    Now, my question is whether it is somehow possible (maybe with some Visual Studio configuration to allow for circular project dependencies?) to use one project/assembly (I am here calling it the "ServiceLocator" assembly) for retrieving concrete implementation classes (e.g. with StructureMap), which can be referred to from other projects, while it is of course also necessary for the ServiceLocator itself to refer to the other projects containing the interfaces and the implementations.

    Visual Studio project example, illustrating the kind of dependency structure I am talking about: http://img10.imageshack.us/img10/8838/testingdependencyinject.png

    Please note in the above picture that the problem is how to let the classes in "ApplicationLayerServiceImplementations" retrieve and instantiate classes that implement the interfaces in "DomainLayerServiceInterfaces". The goal here is not to refer directly to the classes in "DomainLayerServiceImplementations", but rather to use the project "ServiceLocator" to retrieve such classes; but then the circular dependency problem occurs...

    For example, a "UserInterfaceLayer" project/assembly might contain this kind of code:

        ContainerBootstrapper.BootstrapStructureMap(); // located in "ServiceLocator" project/assembly
        MyDomainLayerInterface myDomainLayerInterface = ObjectFactory.GetInstance<MyDomainLayerInterface>(); // referring to project/assembly "DomainLayerServiceInterfaces"
        myDomainLayerInterface.MyDomainLayerMethod();
        MyApplicationLayerInterface myApplicationLayerInterface = ObjectFactory.GetInstance<MyApplicationLayerInterface>(); // referring to project/assembly "ApplicationLayerServiceInterfaces"
        myApplicationLayerInterface.MyApplicationLayerMethod();

    The above code does not refer to the implementation projects/assemblies "ApplicationLayerServiceImplementations" and "DomainLayerServiceImplementations", which contain this kind of code:

        public class MyApplicationLayerImplementation : MyApplicationLayerInterface

    and

        public class MyDomainLayerImplementation : MyDomainLayerInterface

    The "ServiceLocator" project/assembly might contain this code:

        using ApplicationLayerServiceImplementations;
        using ApplicationLayerServiceInterfaces;
        using DomainLayerServiceImplementations;
        using DomainLayerServiceInterfaces;
        using StructureMap;

        namespace ServiceLocator
        {
            public static class ContainerBootstrapper
            {
                public static void BootstrapStructureMap()
                {
                    ObjectFactory.Initialize(x =>
                    {
                        // The two interfaces and the two implementations below are located in four different Visual Studio projects
                        x.ForRequestedType<MyDomainLayerInterface>().TheDefaultIsConcreteType<MyDomainLayerImplementation>();
                        x.ForRequestedType<MyApplicationLayerInterface>().TheDefaultIsConcreteType<MyApplicationLayerImplementation>();
                    });
                }
            }
        }

    So far, no problem. But the problem occurs when I want to let the class "MyApplicationLayerImplementation" in the project/assembly "ApplicationLayerServiceImplementations" use the "ServiceLocator" project/assembly to retrieve an implementation of "MyDomainLayerInterface". When I try to do that, i.e. add a reference from "ApplicationLayerServiceImplementations" to "ServiceLocator", Visual Studio complains about circular dependencies between projects.

    Is there any nice solution to this problem which does not imply using refactoring-unfriendly, string-based XML configuration that breaks whenever an interface or class or its namespace is renamed? / Sven

    Read the article

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application. We have a library of images that we need to have access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates from this vendor frequently enough to say that these images change "randomly" and without our (programmers') knowledge. The problem: I don't want 30k images in SVN. Heck, I don't even want to imagine them in my solution. However, our application requires them in order to run properly. So, our build/staging servers need access to these images (we have two build servers). The question: how would you handle it when your application will not work as specified without access to each of 30k images and you don't control when those images change? I do not want to have a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely don't want a large solution, either). I also don't want a bunch of manual steps to perform every time these images change. Our mantra, up to this point, has always been: any developer can download from SVN, compile and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. This way all dev boxes will return a dummy image and our build/staging/production boxes will return real images (the ones that actually have the vendor's image installer installed). This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.
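
    The fallback-image idea at the end sketches well in any stack. Here is the same shape as a tiny Python WSGI handler, just to show how little is involved; the vendor path and placeholder bytes are hypothetical:

        import os
        from wsgiref.simple_server import make_server

        IMAGE_ROOT = "/opt/vendor/images"  # hypothetical vendor install path
        DUMMY = b"..."                     # stand-in bytes for a real placeholder image

        def app(environ, start_response):
            # basename() keeps requests from escaping IMAGE_ROOT
            path = os.path.join(IMAGE_ROOT, os.path.basename(environ["PATH_INFO"]))
            if os.path.isfile(path):
                with open(path, "rb") as f:
                    data = f.read()        # real image on build/staging boxes
            else:
                data = DUMMY               # dummy image on dev boxes
            start_response("200 OK", [("Content-Type", "image/png")])
            return [data]

        make_server("", 8000, app).serve_forever()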

    Read the article

  • Code bases for desktop and mobile versions of the same app

    - by Code-Guru
    I have written a small Java Swing desktop application. It seems like a natural step to port it to Android since I am interested in learning how to program for that platform. I believe that I can reuse some of my existing code base. (Of course, exactly how much reuse I can get out of it will only be determined as I start coding the Android app.) Currently I am hosting my Java Swing app on Sourceforge.net and use Git for version control. As I start creating the Android app, I am considering two options:

    1. Add the Android code to my existing repository, creating separate directories and Java packages for the Android-specific code and resources.

    2. Create a new Sourceforge project (or even host a new one) and create a new Git repository.

    2a. With a new repository, I can simply add the files from my original project that I will reuse. (I don't particularly like this option, as it will be difficult to modify both copies of the same file in both repositories.)

    2b. Or I can branch the original repository. This adds the difficulty of merging changes to shared source files.

    Mostly I am trying to decide between choices 1 and 2b. If I'm going to branch the existing repository, what advantages are there to hosting it as a separate SF project (or even using another OSS hosting service) as opposed to keeping all my source code in the current SF project?

    Read the article

  • Blackberry Apps - Importing a code-signed jar into an application project

    - by Eric Sniff
    Hi everyone, I'm working on a library project that BlackBerry Java developers can import into their projects. It uses protected RIM APIs, which require that it be code-signed, which I have done. But I can't get my jar imported and working with a simple HelloWorld app. I'm using the BlackBerry JDE Eclipse plug-in. Here is what I have tried. First: building myLibProject with BlackBerry_JDE_PluginFull_1.0.0.67 into a JAR, signing it, and importing it into a BlackBerry_JDE_PluginFull_1.0.0.67 application project; I get a class-not-found error while compiling the application project. Next: I imported myLibProject into a BlackBerry_JDE_PluginFull_1.1.1.* library project, built it into a jar, signed it, and imported it into a BlackBerry_JDE_PluginFull_1.1.1.* application project. It built this time, but while loading up the simulator to test it I get the following error (Access violation reading from 0xFFFFFFC) before the simulator can load, and it crashes the simulator. Other stuff I've tried: I also tried importing the jar into its own project and having the HelloWorld app project reference that project. If I include the src in my application project it works fine... but I'm looking for a way to deploy this as compiled code. Any ideas? Or help?

    Read the article

  • Can Foswiki be used as a distributed Redmine replacement? [closed]

    - by Tobias Kienzler
    I am quite familiar with and love using git, among other reasons due to its distributed nature. Now I'd like to set up some similarly distributed (FOSS) project management software with features similar to what Redmine offers, such as:

    issue & time tracking, milestones, Gantt charts, calendar
    git integration, maybe some automatic linking of commits and issues
    wiki (preferably with MathJax support)
    forum, news, notifications
    multiple projects

    However, I am looking for a solution that does not require a permanently accessible server, i.e. as in git, each user should have their own copy which can be easily synchronized with the others. It should also be possible to not have a copy of every project on every machine. Since Trac uses multiple instances for multiple projects anyway, I was considering using that, but I neither know how well it adapts to simply putting its database under git (which would be the easiest way to handle the distribution, given that git is used anyway), nor does it include all of Redmine's features. After checking http://www.wikimatrix.org for wikis with an integrated tracking system and RCS support, and filtering out seemingly stale projects, the choices basically boil down to Foswiki, TWiki and Ikiwiki. The latter doesn't seem to offer as many usability features, and in the TWiki vs Foswiki issue I tend towards the latter. Finally, there is Fossil, which starts from the other end by attempting to replace git entirely and tracking itself. I am, however, not too comfortable with the thought of replacing git, and Fossil's non-SCM features don't seem to be as developed. Now, before I invest too much time when someone else might already have tried this, I basically have two questions: Are there crucial features of project management software like Redmine that Foswiki does not provide, even with all the extensions available? And how would one set Foswiki up to use git instead of the Perl RcsLite?

    Read the article

  • Delphi: Autoscale TEdit based on text length does not work when removing chars

    - by pr0wl
    Hello. I have an input edit field where the user can enter data. I want the box width to be at least 191 px (min) and at most 450 px (max).

        procedure THauptform.edtEingabeChange(Sender: TObject);
        begin
          // Scale
          if Length(edtEingabe.Text) > 8 then
          begin
            if Hauptform.Width <= 450 then
            begin
              verschiebung := verschiebung + 9; // The initial values like 'oldedtEingabeWidth' are global vars.
              edtEingabe.Width := oldedtEingabeWidth + verschiebung;
              buDo.Left := oldbuDoLeft + verschiebung;
              Hauptform.Width := oldHauptformWidth + verschiebung;
            end;
          end;
        end;

    This works for ENTERING text, but when I delete one char, it does not scale back accordingly.

    Read the article

  • How to adopt scrum agile methodology for a small .Net team

    - by Thabo
    I am working at a small product-based company developing .NET applications, in a small team with 5-6 developers. I am the person responsible for planning everything, but my primary role is software developer. Our current project is very unstable because of poor organization. Today my boss called me and told me to submit a report about required resources, an appropriate methodology, required manpower and their salary scales to make the current project a success. I know I don't have enough organizational skills, and I need to go deeper into my programming skills, so I need to focus only on development; I can't manage the project anymore. Now I am searching for some other way to make the ongoing development a success. My questions are: What is a suitable agile methodology for my team? Is Scrum suitable for the above-mentioned scenario? If we adopt Scrum, what do we have to do next? (I think hiring someone new to manage the project is the most suitable option, so we would have to get a Scrum master and some other developers.) Are there any resources (books, blogs, etc.) with tips and advice for solving this problem? And if Scrum is not a suitable methodology for our scenario, what else would be more suitable to adopt? Can anyone give a good solution for my problem?

    Read the article

  • What can be the cause of new bugs appearing somewhere else when a known bug is solved?

    - by MainMa
    During a discussion, one of my colleagues said that he has some difficulties with his current project while trying to solve bugs. "When I solve one bug, something else stops working elsewhere," he said. I started to think about how this could happen, but can't figure it out. I sometimes have similar problems when I am too tired or sleepy to do the work correctly and to keep an overall view of the part of the code I am working on. Here, the problem has persisted for a few days or weeks, and is not related to my colleague's focus. I can also imagine this problem arising on a very large, very badly managed project, where teammates don't have any idea of who does what, and what effect a change they are making can have on others' work. This is not the case here either: it's a rather small project with only one developer. It could also be an issue with an old, badly maintained and never documented codebase, where the only developers who could really imagine the consequences of a change left the company years ago. Here, the project just started, and the developer doesn't use anyone else's codebase. So what can be the cause of such an issue in a fresh, small codebase written by a single developer who stays focused on his work? What may help? Unit tests (there are none)? Proper architecture (I'm pretty sure the codebase has no architecture at all and was written with no preliminary thinking), requiring a whole refactoring? Pair programming? Something else?

    Read the article

  • What the VC++ compiler/linker does when building a C++ project with Managed Extensions

    - by ???
    The initial problem is that I rebuilt a C++ project with debug symbols and copied it to a test machine. The output of the project is an external COM server (.exe file). When calling the COM interface function, there's an RPC call failure: COMException(0x800706BE): The remote procedure call failed. According to the COM HRESULT design, if the facility code is 7, it's actually a Win32 error, and the Win32 error code here is 0x6BE, which is the above-mentioned "remote procedure call failed". All I did was replace the COM server .exe file; the original file works well. When I looked into the project, I found it's a C++ project with Managed Extensions. Checking the DLL with Reflector shows there are 2 additional .NET assembly references. I then checked the project settings and found nothing about the extra 2 assembly references. I turned on the show-includes option of the compiler and the verbose-library option of the linker, and tried to analyze whether the assemblies are indirectly referenced via a .h file. I collected all the .h files and grepped them for '#using', '#import' and the assembly file names themselves. There really is a '#using' in one of the .h files, but it is not relevant to the referenced assemblies. As for the linked .lib files, only one of them is a side product of another Managed-Extensions-enabled C++ project; all the others are produced by pure, traditional C++ projects. For that Managed-Extensions-enabled C++ project, I checked the output DLL assembly, and it did NOT reference the 2 assemblies. I even tried to capture access to the additional assembly files via Sysinternals' Filemon and Procmon, but the rebuild process does NOT access these files. I'm very confused about the compile and link process model of a VC++/CLI project; where did the additional assembly references slip into the final assembly? Thanks in advance for any of your help.
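
    The HRESULT decomposition described above can be checked mechanically; a few lines of Python:

        # decomposing the HRESULT from the question: 0x800706BE
        hr = 0x800706BE
        severity = (hr >> 31) & 0x1     # 1 = failure
        facility = (hr >> 16) & 0x1FFF  # 7 = FACILITY_WIN32
        code = hr & 0xFFFF              # 0x6BE = 1726 = RPC_S_CALL_FAILED
        print(severity, facility, hex(code))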

    Read the article

  • Should one reject over-scoped projects?

    - by Little Child
    I spoke to my first potential client today, and he told me about the requirements of his project, an Android app. He is a well-known designer / photographer in my country and now wants me to "convert the website into an app, custom-tailored". The requirements, details stripped out, are as follows:

    eCommerce
    Aggregating all his content like videos, blogs, tweets, etc. into the app
    Live streaming any of his studio demos
    Augmented reality, so that people can see what his painting will look like on their wall before they buy it
    Taxi sharing

    Now, for a freelance project, it seems too over-scoped. I am not saying that I cannot do it. I can. But let me be realistic: there is a steep learning curve when it comes to augmented reality; I am not a tester, and I have never white-box tested my own apps, only black-box tested; and since he is a renowned artist, anything short of perfect might harm his public image. So I asked him for 2 weeks' time before I give him the final answer. Not knowing whom to consult for advice, I am posting the question here. Although the project is interesting and personally challenging, I am split-minded about accepting it, and I would be the only developer. Should one reject a project that seems to be over-scoped for one's own abilities?

    Read the article

  • AS3 How to center MC + change background color?

    - by Jennifer Heidelberg
    Hello everyone, I am quite new to AS3 and I have never worked with classes, so I am encountering a couple of problems. I'd like to center a movie clip and have it not scale, and then I'd like to add a background color that stays in place no matter how the browser window is resized. Can someone please explain this to me in baby steps, since I don't know how to implement a class and make it work with my .fla? Thank you so much! J.

    Read the article

  • Proper library for enums

    - by Bobson
    I'm trying to refactor some code such that the display is separate from the implementation, and I'm not sure where to put the existing enums. My project is currently structured as follows:

    Utilities
    RemoteData (Depends on: Utilities)
    LocalData (Depends on: RemoteData, Utilities)
    RemoteWeb (Depends on: RemoteData, Utilities)
    LocalWeb (Depends on: RemoteData, LocalData, Utilities)

    I'm now trying to add "ViewLibrary (Depends on: Utilities)" to this list, and then add it as a new dependency to both RemoteWeb and LocalWeb. It will contain a set of interfaces which the other two projects will implement, use to populate the view, and then consume the result. There's an enum which is currently used in all the projects except Utilities. It thus lives in the RemoteData project, because everything else depends on it. But this new ViewLibrary won't depend on either data project. So how will it know about this enum? Some options I see:

    Create a new project just for shared enum values.
    Add it to Utilities, even though it is related to data.
    Define it a second time in ViewLibrary, and require both RemoteWeb and LocalWeb to convert the one type into the other when they access the shared views.
    Add a dependency on RemoteData to the ViewLibrary, even though it's supposed to be independent of data source.

    Are there any better options? Is this structure flawed to begin with?

    Read the article

  • including pre-built java classes into an android project

    - by moonlightcheese
    I'm trying to include a Maven Java project in my Android project. The Maven project is the greader-unofficial project, which gives developers access to Google Reader accounts and handles all of the HTTP transactions and URI/URL building, making grabbing feeds and items from Google Reader transparent to the developer. The project is available here: http://code.google.com/p/greader-unofficial/ The code is originally written for the standard JDK and uses classes from java.net that are not part of the standard Android SDK. I actually tried to resolve all the dependencies manually and ran into a problem when I got as far as including the com.sun.syndication pieces required by the class be.lechtitseb.google.reader.api.util.AtomUtil.java: some of the java.net classes that are in the standard JDK (I'm using 1.6) are not in the Android SDK. In addition, resolving all of these dependencies manually is just ridiculous when I'm compiling a Maven project that should be pretty simple. However, I can use Maven to compile the sources with no issue. How can I include this Maven project, which depends on the complete JDK, in my Android project in such a way that it will compile, so that I can access the GoogleReader class from my Android project? And for the record, I don't have the expertise to rewrite this entire API to work with the standard Android SDK.

    Read the article
