Search Results

Search found 10451 results on 419 pages for 'mr developer'.

Page 361/419

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to:

    1. develop a project that depends on code from other projects, and
    2. deploy the resulting project to the server.

    I am planning to put my code in svn and keep the shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one thing that is special about PHP projects (and other interpreted source code): there is no final executable resulting from your libraries, so external dependencies are always on raw source code.

    Ideally I really want to be able to develop simultaneously on one project and the projects it depends on. A possible way: check out a project's dependency into a sub-folder as a working copy of the trunk. Problems I foresee:

    - When you want to deploy a project, you might want to freeze its dependencies, right?
    - The dependency code should not end up as a duplicate in the project's repository, I think.

    (Update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks; see my comment. I am still looking for suggestions that do not require the use of junction points. They are a sort of unsupported hack in Windows XP which may break some programs.)

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly tied to the Python ecosystem (resolving and fetching Python modules from the web, etc.). I am very eager to learn about your best practices.

    Read the article

  • Who likes #regions in Visual Studio?

    - by Nicholas
    Personally I can't stand region tags, but clearly they have widespread appeal for organizing code, so I want to test the temperature of the water for other MS developers' take on this idea.

    My personal feeling is that any sort of silly trick to simplify code only acts to encourage terrible coding behavior, like lack of cohesion, unclear intention and poor or incomplete coding standards. One programmer told me that code regions helped encourage coding standards by making it clear where another programmer should put his or her contributions. But, to be blunt, this sounds like a load of horse manure to me. If you have a standard, it is the programmer's job to understand what that standard is... you shouldn't need to define it in every single class file.

    And nothing is more annoying than having all of your code collapsed when you open a file. I know that Ctrl+M, L will open everything up, but then you have the hideous "#region" opening and closing lines to read. They're just irritating.

    My most steadfast coding philosophy is that all programmers should strive to create clear, concise and cohesive code. Region tags just serve to create noise and redundant intentions. Region tags would be moot in a well thought out and intentioned class. The only place they seem to make sense to me is in automatically generated code, because you should never have to read that outside of personal curiosity.

    Read the article

  • Free solution for automatic updates with .NET/C#?

    - by a2h
    Yes, from searching I can see this has been asked time and time again. Here's a backstory. I'm an individual hobbyist developer with zero budget. A program I've been developing has been in need of constant bugfixes, and my users and I are getting tired of having to update manually.

    Me, because my current solution of:

    - manually FTPing to my website,
    - updating a file "newest.txt" with the newest version,
    - updating index.html with a link to the newest version,
    - hoping for people to see the "there's an update" message, and
    - having them manually download the update

    sucks, and whenever I screw up an update, I get pitchforks. Users, because, well: "Are you ever going to implement auto-update?" "Will there ever be an auto-update feature?"

    Over the past few hours I have looked into:

    - http://windowsclient.net/articles/appupdater.aspx - I can't comprehend the documentation.
    - http://www.codeproject.com/KB/vb/Auto_Update_Revisited.aspx - Doesn't appear to support anything other than working with files that aren't in use.
    - http://wyday.com/wyupdate/ - wyBuild isn't free, and the file specification is simply too complex. Maybe if I was under a company paying me I could spend the time, but then I may as well pay for wyBuild.
    - http://www.kineticjump.com/update/default.aspx - Ditto.
    - ClickOnce - Workarounds for implementing launching on startup are massive, horrendous and not worth it for such a simple feature. Publishing is a pain; manual FTP and replacement of all files is required for servers without FrontPage Extensions.

    I'm pretty much ready to throw in the towel right now and strangle myself. And then I think about Sparkle...

    Read the article

  • How to change an onclick event with jQuery?

    - by user550758
    I have created a JS file in which I am building a dynamic table and dynamically attaching the click event for the calendar, but on clicking the calendar image in the dynamically generated table, the calendar pops up on the previous calendar image. Please help me with the code.

        /***------------------------------------------------------------
         * Developer:     Vipin Sharma
         * Creation Date: 20/12/2010 (dd/mm/yyyy)
         * ModifiedDate   ModifiedBy   Comments (As and when)
         *-------------------------------------------------------------*/
        var jq = jQuery.noConflict();
        var ia = 1;

        jq(document).ready(function() {
            jq("#subtaskid1").click(function() {
                if (ia <= 10) {
                    var create_table = jq("#orgtable").clone();
                    create_table.find("input").each(function() {
                        jq(this).attr({
                            'id':    function(_, id)   { return ia + id; },
                            'name':  function(_, name) { return ia + name; },
                            'value': ''
                        });
                    }).end();
                    create_table.find("select").each(function() {
                        jq(this).attr({
                            'name': function(_, name) { return ia + name; }
                        });
                    }).end();
                    create_table.find("textarea").each(function() {
                        jq(this).attr({
                            'name': function(_, name) { return ia + name; }
                        });
                    }).end();
                    create_table.find("#f_trigger_c").each(function() {
                        var oclk = " displayCalendar(document.prjectFrm['" + ia + "dtSubDate'],'yyyy-mm-dd', this)"; // ERROR IS HERE
                        var newclick = new Function(oclk);
                        jq(this).click(newclick);
                    }).end();
                    create_table.appendTo("#subtbl");
                    jq('#maxval').val(ia);
                    ia++;
                } else {
                    var ai = ia - 1;
                    alert('Only ' + ai + ' SubTask can be insert');
                }
            });
        });

    Read the article

  • How can a Win32 app plugin load its DLLs from its own directory?

    - by Jean-Denis Muys
    My code is a plugin for a specific application, written in C++ using Visual Studio 8. It uses two DLLs from an external provider. Unfortunately, my plugin fails to start because the DLLs are not found (I put them in the same directory as the plugin itself). When I manually move or copy the DLLs to the host application's directory, the plugin loads fine. This moving was deemed unacceptably cumbersome for the end user, and I am looking for a way for my plugin to load its DLLs transparently. What can I do?

    Relevant details:

    - The host application's plugins are located in a directory mandated by the host application. That directory is not in the DLL search path and I don't control it.
    - The plugin is itself packaged as a subdirectory of the plugin directory, holding the plugin code itself, but also any resources associated with the plugin (e.g. images, configuration files…). I control what's inside that subdirectory, called a "bundle", but not where it's located.
    - The common plugin installation idiom for that app is for the end user to copy the plugin bundle to the plugin directory.
    - This plugin is a port of the Macintosh version of the plugin. On the Mac there is no issue, because each binary contains its own dynamic library search path, which I set as needed for my plugin binary. Setting that on the Mac simply involves a project setting in the Xcode IDE. This is why I was hoping for something similar in Visual Studio, but I could not find anything relevant. Moreover, Visual Studio's help was anything but, and neither was Google.

    A possible workaround would be for my code to explicitly tell Windows where to find the DLLs, but I don't know how, and in any case, since my code is not even started, it hasn't had the opportunity to do so. As a Mac developer, I realize that I may be asking for something very elementary. If such is the case, I apologize, but I have run out of hair to pull out.
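
    A minimal sketch of one common workaround, assuming the plugin can be rebuilt with delay-loaded imports (/DELAYLOAD plus delayimp.lib) for the two vendor DLLs; the DLL names and the InitPlugin entry point below are hypothetical. Because nothing is resolved at load time, the plugin itself loads fine, and its initialization can add the bundle directory to the DLL search path before the vendor DLLs are first touched:

        // Sketch only: assumes the plugin is linked with delayimp.lib and
        //   /DELAYLOAD:vendor_a.dll /DELAYLOAD:vendor_b.dll   (hypothetical names)
        // so the vendor DLLs are not needed until after InitPlugin() has run.
        #include <windows.h>
        #include <cwchar>

        namespace {
            // Any address inside this DLL works as an anchor for GetModuleHandleEx.
            const int kModuleAnchor = 0;
        }

        // Hypothetical init entry point called by the host right after loading the plugin.
        extern "C" __declspec(dllexport) BOOL InitPlugin()
        {
            HMODULE self = NULL;
            if (!GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
                                    GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
                                    reinterpret_cast<LPCWSTR>(&kModuleAnchor), &self))
                return FALSE;

            wchar_t path[MAX_PATH];
            if (!GetModuleFileNameW(self, path, MAX_PATH))
                return FALSE;

            // Strip the file name, keeping the plugin bundle directory.
            wchar_t* lastSlash = std::wcsrchr(path, L'\\');
            if (lastSlash) *lastSlash = L'\0';

            // Add the bundle directory to the DLL search path (XP SP1 and later),
            // so the delay-loaded vendor DLLs are found on first use.
            return SetDllDirectoryW(path);
        }

    An alternative inside the same init function is to call LoadLibraryW with the full path of each vendor DLL, in dependency order, which avoids touching the process-wide search path.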

    Read the article

  • One big executable or many small DLLs?

    - by Patrick
    Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs, but put everything in this one big executable. Having one big executable has certain advantages:

    - Installing my application at the customer is really simple: copy and run.
    - Upgrades can easily be zipped and sent to the customer.
    - There is no risk of having conflicting DLLs (where the customer has version X of the EXE, but version Y of the DLL).

    The big disadvantage of the big EXE is that linking times seem to grow exponentially.

    An additional problem is that a part of the code (let's say about 40%) is shared with another application. Again, the advantages are:

    - There is no risk of having a mix of incorrect DLL versions.
    - Every developer can make changes to the common code, which speeds up development.

    But again, this has a serious impact on compilation times (everyone compiles the common code again on his PC) and on linking times.

    The question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to link all functions manually in your application (using LoadLibrary, GetProcAddress, ...).

    What is your opinion on executable sizes, the use of DLLs and the best 'balance' between easy deployment and easy/fast development?
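
    For reference, the manual run-time linking mentioned in that linked question looks roughly like this (a minimal sketch; shared_code.dll and Compute are hypothetical names):

        #include <windows.h>
        #include <cstdio>

        // Signature of a function exported by the shared DLL (hypothetical).
        typedef int (__cdecl *ComputeFn)(int, int);

        int main()
        {
            // Explicit run-time linking: no import library, no load-time dependency.
            HMODULE lib = LoadLibraryW(L"shared_code.dll");
            if (!lib) { std::printf("shared_code.dll not found\n"); return 1; }

            ComputeFn compute = reinterpret_cast<ComputeFn>(GetProcAddress(lib, "Compute"));
            if (compute)
                std::printf("Compute(2, 3) = %d\n", compute(2, 3));

            FreeLibrary(lib);
            return 0;
        }

    With ordinary implicit linking against an import library none of this boilerplate is needed; the LoadLibrary/GetProcAddress dance only applies when the DLL is not known at link time, which is part of the deployment-versus-development trade-off being weighed here.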

    Read the article

  • My chance to shape our development process/policy

    - by Matt Luongo
    Hey guys, I'm sorry if this is a duplicate, but the search terms for this question are pretty generic.

    I work at a small(ish) development firm. I say small, but the company is actually a fair size; however, I'm only the second full-time developer, as most past work has been organized around contractors. I'm in a position to define internal project process and policy - obvious stuff like SCM and unit testing. Methodology is outside the scope of the document I'm putting together, but I'd really like to push us in a leaner (and maybe even Agile?) direction. I feel like I have plenty of good practice recommendations, but not enough solid motivation to make my document the spirit guide I'd like it to be.

    I've separated the document into "principles" and "recommendations". Recommendations have been easy to come up with: use SCM, strive for one-step, regularly scheduled builds, unit test first, document as you go... Listing the principles that are supposed to inform these recommendations, though, has been rough. I've come up with "tools work for us; we should never work for tools" and a hazy clause aimed at our QA (which has been overly manual) that I'd like to read "tedium is the root of all evil".

    I don't want to miss an opportunity with this document to give us a good in-house start and maybe even push us toward Agile. What principles am I missing?

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All,

    We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up from the Trunk as needed to stay integrated, as other features are accepted and committed. We now have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches and are trying very hard to be Agile.

    My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle, complete the validation and regression testing, and handle all configuration issues before pushing to the Trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push-back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already completed the work in the feature branch, therefore the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution, instead of a build and regression testing, is to have the developer diff the feature branch and the newly merged Trunk; that process, in their mind, would replace the regression testing I asked for.

    So what do you require when you reintegrate back to the Trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches?

    Thanks for any input.
    LoneCM

    Read the article

  • Some web pages (especially Apple documentation) cause heavy CPU usage in Windows IE8

    - by Mark Lutton
    Maybe this belongs on Server Fault instead, but some of you may have noticed this issue (particularly those developing on Mac and using a Windows machine to read the reference material). I posted the same question on a Microsoft forum and got one answer from someone who reproduced the problem, so it's not just my machine. No solution yet.

    Ever since this month's security updates, I find that many web pages cause the CPU to run at maximum for as long as the web page is visible. This happens in both IE7 and IE8 on at least three different computers (two with Windows XP, one with Vista).

    Here is one of the pages, running on XP with IE8: http://learning2code.blogspot.com/2007/11/update-using-subversion-with-xcode-3.html

    Here is one that does it on Vista with IE8: http://developer.apple.com/iphone/library/documentation/Cocoa/Reference/Foundation/Classes/NSString_Class/Reference/NSString.html

    You can leave the page open for hours and the CPU usage is still high. This doesn't happen every time; it is not always reproducible. Sometimes it is OK the second or third time the page loads. In IE7 the high usage is in ieframe.dll, version 7.0.6000.16890. In IE8 the high usage is in iertutil.dll, version 8.0.6001.18806.

    Read the article

  • Eclipse 3.7 Classic Nightmare - ADT Installation

    - by Cal
    I've been trying to install the ADT for Eclipse Classic 3.7, to no avail. From what I've seen in searches, the general consensus seems to be to update the software, but alas I cannot do that either.

    Below is an example of the error message received when trying to update Eclipse, or when attempting to install from a web location:

        Some sites could not be found. See the error log for more detail.
        Unable to read repository at http://download.eclipse.org/eclipse/updates/3.7/content.xml.
        Cannot assign requested address: JVM_Bind

    I followed the troubleshooting recommendations of Google/Android's developer section and attempted to install ADT via an archive. Below is the resulting error from attempting to install via the archive:

        Cannot complete the install because one or more required items could not be found.
        Software being installed: Android Development Tools 11.0.0.v201105251008-128486
          (com.android.ide.eclipse.adt.feature.group 11.0.0.v201105251008-128486)
        Missing requirement: Android Development Tools 11.0.0.v201105251008-128486
          (com.android.ide.eclipse.adt.feature.group 11.0.0.v201105251008-128486)
          requires 'org.eclipse.gef 0.0.0' but it could not be found

    Now, from what I hear, the inability to update/install via the Internet seems to be a proxy-related issue; however, I don't believe that I'm behind any such thing (I'm just using my computer connected to my home network for this). I'm using the most up-to-date versions of everything I can think of (ADT, Eclipse, SDK Tools, etc.). I'm using Windows 7 Ultimate 64-bit, and the 64-bit version of Eclipse Classic.

    Read the article

  • What do you do when your boss doesn't care about code quality?

    - by Chad Johnson
    My boss (a proprietor) is a developer like me. He comes, however, from a C background and severely lacks knowledge of the benefits of proper object-oriented design. That, or he simply ignores them.

    So my co-worker developed this feature prototype in a week, and it's not release-ready - at least not from a good-code standpoint. It works; it does the job - but it's a freaking prototype. It's totally not scalable. My boss wants to wow clients and "just get the feature out." I understand that. But we could take two weeks and finish this shit up, or we could take three and finish this shit up AND do it so that it's scalable. I just KNOW we are going to want to add onto this feature in the coming months, and then a customer is going to "need it in a week", and so even though we've agreed to refactor when we want to add onto the feature, IT WILL NEVER HAPPEN! This ALWAYS happens.

    I'm the code quality assurance guy, but my boss seems to see me as a radical and thinks I just waste time, whereas I am actually trying to follow good, known, solid design patterns. He just wants his stinking feature, though, and he doesn't want to spend the time or money to do things well. He pretty much listens to what I have to say, and then he ultimately just makes the decision to take the shortest path (which cuts corners a lot).

    I often develop large, important features for our software. THOSE THINGS TAKE TIME! They're not happy with the time it's taken on past projects, though, but the features I've put in all work really damn well and are very scalable. How do you all deal with this kind of situation?

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems.

    Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it.

    Here are some of the things we're seeing:

    - The GUI tools aren't mature.
    - Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
    - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
    - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest.

    BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"

    Read the article

  • HTML Submit button vs AJAX based Post (ASP.NET MVC)

    - by Graham
    I'm after some design advice. I'm working on an application with a fellow developer. I'm from the WebForms world and he's done a lot with jQuery and AJAX; we're collaborating on a new ASP.NET MVC 1.0 app. He's done some pretty amazing stuff that I'm just getting my head around, and used some third-party tools etc. for datagrids and so on, but...

    He rarely uses submit buttons, whereas I use them most of the time. He uses a button but then attaches JavaScript to it that calls an MVC action which returns a JSON object. He then parses the object to update the datagrid. I'm not sure how he deals with server-side validation - I think he adds a message property to the JSON object. A sample scenario would be to "Save" a new record that then gets added to the gridview. The user doesn't see a postback as such, so he uses jQuery to disable the UI while the controller action is running. To be honest, it looks pretty cool.

    However, the way I'd do it would be to use a submit button to post back, let the ModelBinder populate a typed model class, parse that in my controller action method, update the model (and apply any validation against the model), update it with the new record, then send it back to be rendered by the view. Unlike him, I don't return a JSON object; I let the view (and datagrid) bind to the new model data.

    Both solutions "work", but we're obviously taking the application down different paths, so one of us has to rework our code... and we don't mind whose has to be done. What I'd prefer, though, is that we adopt the "industry-standard" way of doing this. I'm unsure as to whether my WebForms background is influencing the fact that his way just "doesn't feel right", in that a "submit" is meant to submit data to the server. Any advice at all please - many thanks.

    Read the article

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files to prevent the casual hacker or unscrupulous developer from gaining access to them.

    When the app is deployed, the database file and image files would be obfuscated. Upon launching the app, it would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite database data isn't discernible when the db is opened in a text editor, and so that neither is recognized by other applications (SQLite Manager, Photoshop, etc.).

    It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years, but I'm new to everything Cocoa Touch, Mac and iPhone. Also, I've never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.

    Read the article

  • Created files on Archos 5 invisible on Windows XP

    - by user352042
    I am fairly new to Android and this is my first post, so I apologise in advance if I am breaking protocol or posting to the wrong board. Please feel free to move this post somewhere more appropriate if required.

    I am developing for the 160 GB Archos 5 Internet Tablet. Not ideal as a development platform, I know, but customer requirements mean we have no choice. It is running Android 1.6. I have updated the device firmware to the most recent available; updating the version of Android is not an option at this point.

    Part of my app's requirement is to write information out to .txt files on the external storage directory so that these can be copied over the USB connection to a Windows XP PC using the mobile media device (MTP) mode. I have followed all the instructions I have come across carefully, e.g. I check that the storage is available using the technique described at http://developer.android.com/guide/topics/data/data-storage.html#filesExternal.

    However, although the files are created successfully on the device (I can browse them and open them using the device's File Explorer - they are fine), when I connect the device to a Windows XP computer none of the directories or files I created appear, and the sizes of their parent folders suggest they do not exist. I have tried running over ADB, checked logcat, tried a (signed) release version and even written a second test application which just creates a folder (this behaves the same, i.e. it creates the folder but it is not visible in Windows Explorer) - nothing anywhere gives me any suggestion as to what the problem might be.

    If anyone has heard of this before or has any ideas as to what else I could try to fix it, please get in touch! We do not have any other devices to test on at the moment, although I hope to remedy this soon, customer permitting.

    Read the article

  • Bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC.

    This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it:

    - eliminates web application server dependencies on the Global Assembly Cache;
    - allows me (the developer) to have control over what goes on inside my application;
    - enables the application to be deployed as a "package";
    - removes the application deployment burden from the server administrators.

    Now, here are my questions. From a security perspective, what are the advantages of using the GAC vs. bin-deployment? Is it possible to host multiple versions of the same DLL in the GAC? Has anyone run into similar restrictions?

    Read the article

  • activemerchant PayPalExpress transaction is invalid

    - by Ameya Savale
    I am trying to integrate activemerchant into my Ruby on Rails application. This is my controller action where I get the purchase attributes and create a PaypalExpressResponse object:

        def checkout
          total_as_cents, purchase_params = get_setup_params(Schedule.find(params[:schedule]), request)
          setup_response = @gateway.setup_purchase(total_as_cents, purchase_params)
          redirect_to @gateway.redirect_url_for(setup_response.token)
        end

    @gateway is my PaypalExpressGateway object, which I create using this method in my controller:

        def assign_gateway
          @gateway = PaypalExpressGateway.new(
            :login     => api_user,
            :password  => api_pass,
            :signature => api_signature
          )
        end

    I got the api_user, api_pass, and api_signature values from my developer.paypal.com account; when I logged in for the first time there was already a sandbox user created as a merchant, which is where I got the API credentials from. And finally, here is my get_setup_params method:

        def get_setup_params(schedule, request)
          purchase_params = {
            :ip                => request.remote_ip,
            :return_url        => url_for(:action => 'review', :only_path => false, :sched => schedule.id),
            :cancel_return_url => register_path,
            :allow_note        => true,
            :item              => schedule.id
          }
          return to_cents(schedule.fee), purchase_params
        end

    However, when I click on the checkout button, I get redirected to a sandbox PayPal page saying "This transaction is invalid. Please return to the recipient's website to complete your transaction using their regular checkout flow." I'm not sure exactly what's wrong; I think the problem lies in the credentials, but I don't know why. Any help will be appreciated.

    One other point: I'm running this in my development environment, so I have put this in my config file:

        config.after_initialize do
          ActiveMerchant::Billing::Base.mode = :test
        end

    UPDATE: Found out what the problem was - my cancel return URL was invalid. Instead of using register_path, I used url_for(action: "action-name", :only_path => false). This answer helped me: Rails ActiveMerchant - Paypal Express Checkout Error, even though I wasn't able to see the output of the response like that person managed to do.

    Read the article

  • App no longer working - any ideas

    - by hamishmcn
    I am out of ideas as to why my app has suddenly stopped working - perhaps the collective mind of the SO community can help...

    Background: I have a large application that has been working until recently. Now, whenever I try to run it, I get the error "The application failed to initialize properly (0xc0000005)". This happens before the app gets to _tmain(). It happens in both release and debug builds. I have tried cleaning and rebuilding the projects and rebooted my PC. The call stack just shows entries for kernel32.dll and ntdll.dll. The output window shows:

        First-chance exception at 0x00532c13 in a.exe: 0xC0000005: Access violation reading location 0xabababdb.
        First-chance exception at 0x7c964ed1 in a.exe: 0xC0000005: Access violation.
        Unhandled exception at 0x7c964ed1 in a.exe: 0xC0000005: Access violation.

    Any ideas?

    Edit: Okay - found the problem - it was DLL related. My app uses shared DLLs a.dll and b.dll (and others). a.dll hardly ever changes (and uses b.dll). b.dll was changed by another developer this morning and a.dll was not rebuilt. Depends.exe did not show any missing DLLs; however, a.dll no longer works because of the change to b.dll.

    Read the article

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files and none of them seem to answer this question: where does the data go once you import it?

    Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    There were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was:

        Warning: the objects were exported by OVIEDOE, not by you
        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Now, it says it was terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?

    Read the article

  • How to solve this simple PHP for-loop issue?

    - by Londonbroil Wellington
    Here is the content of my flat-file database:

        Jacob | Little | 22 | Male | Web Developer *
        Adam | Johnson | 45 | Male | President *

    Here is my PHP code:

        <?php
        $fopen = fopen('db.txt', 'r');
        if (!$fopen) {
            echo 'File not found';
        }
        $fread = fread($fopen, filesize('db.txt'));
        $records = explode('|', $fread);
        ?>
        <table border="1" width="100%">
            <tr>
                <thead>
                    <th>First Name</th>
                    <th>Last Name</th>
                    <th>Age</th>
                    <th>Sex</th>
                    <th>Occupation</th>
                </thead>
            </tr>
            <?php
            $rows = explode('*', $fread);
            for ($i = 0; $i < count($rows) - 1; $i++) {
                echo '<tr>';
                echo '<td>'.$records[0].'</td>';
                echo '<td>'.$records[1].'</td>';
                echo '<td>'.$records[2].'</td>';
                echo '<td>'.$records[3].'</td>';
                echo '<td>'.$records[4].'</td>';
                echo '</tr>';
            }
            fclose($fopen);
            ?>
        </table>

    The problem is that I am getting the output of the first record repeated twice, instead of two records - one for Jacob and one for Adam. How do I fix this?

    Read the article

  • Getting an auth token for a Dropbox account from AccountManager in Android

    - by user1490880
    I am trying to get an auth token for a Dropbox account configured on the device from the AccountManager. I am using:

        accountManager.getAuthToken(account, "DROPBOX", null, Hello.this, new GetAuthTokenCallback(), null); // "account" is the Dropbox account

    I see an Allow/Deny page. I click on Allow, but the callback is not getting invoked at all and I don't get the auth token. I got the auth token for a Google account with this (with a different authTokenType), so what am I missing? I am not sure about the authTokenType parameter for Dropbox. Also, are there any other parameters specific to Dropbox, like the bundle parameter, that I am missing? Is this approach even possible for Dropbox? Check below for the function parameters:

        public AccountManagerFuture<Bundle> getAuthToken(Account account, String authTokenType,
                Bundle options, Activity activity, AccountManagerCallback<Bundle> callback,
                Handler handler)

    Link: http://developer.android.com/reference/android/accounts/AccountManager.html

    UPDATE: I assume that since we are able to create a Dropbox account in Android's Accounts and Sync settings, there must be a Dropbox authenticator that has all the functions in AbstractAccountAuthenticator implemented, including getAuthToken(). So Dropbox should support handing out an auth token, I think. Also, Dropbox uses OAuth 1, whereas AccountManager uses OAuth 2.0 - so is this an issue? Can anyone comment on this?

    Read the article

  • How to fix "OutOfMemoryError: java heap space" while compiling MonoDroid App in MonoDevelop

    - by Rodja
    When I try to compile one of my projects, I recently get the following error:

        Tool /usr/bin/java execution started with arguments: -jar /Applications/android-sdk-mac_x86/platform-tools/lib/dx.jar --no-strict --dex --output=obj/Debug/android/bin/classes.dex obj/Debug/android/bin/classes /Developer/MonoAndroid/usr/lib/mandroid/platforms/android-8/mono.android.jar FlurryAnalytics/Jars/FlurryAgent.jar Jars/android-support-v4.jar

        UNEXPECTED TOP-LEVEL ERROR:
        java.lang.OutOfMemoryError: Java heap space
            at com.android.dx.rop.code.RegisterSpecSet.<init>(RegisterSpecSet.java:49)
            at com.android.dx.rop.code.RegisterSpecSet.mutableCopy(RegisterSpecSet.java:383)
            at com.android.dx.ssa.LocalVariableInfo.mutableCopyOfStarts(LocalVariableInfo.java:169)
            at com.android.dx.ssa.LocalVariableExtractor.processBlock(LocalVariableExtractor.java:104)
            at com.android.dx.ssa.LocalVariableExtractor.doit(LocalVariableExtractor.java:90)
            at com.android.dx.ssa.LocalVariableExtractor.extract(LocalVariableExtractor.java:56)
            at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:50)
            at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:99)
            at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:73)
            at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:273)
            at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:134)
            at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:87)
            at com.android.dx.command.dexer.Main.processClass(Main.java:487)
            at com.android.dx.command.dexer.Main.processFileBytes(Main.java:459)
            at com.android.dx.command.dexer.Main.access$400(Main.java:67)
            at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:398)
            at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:131)
            at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:109)
            at com.android.dx.command.dexer.Main.processOne(Main.java:422)
            at com.android.dx.command.dexer.Main.processAllFiles(Main.java:333)
            at com.android.dx.command.dexer.Main.run(Main.java:209)
            at com.android.dx.command.dexer.Main.main(Main.java:174)
            at com.android.dx.command.Main.main(Main.java:91)

    Other projects build as expected. I think I need to increase the heap size for this Java build step - but how?

    Read the article

  • How to develop an iPhone application that retrieves a database file from the web?

    - by coverboy
    Hi, all experts! I'm a newbie iPhone developer. Currently I'm developing an iPhone app for a location-based service. The application needs to have these functions:

    1. a hierarchical tree view on a navigation bar;
    2. a list page;
    3. a detail page.

    For example, let's say I have top categories like "Restaurant, Hotel, Gift Shop", a second level like "New York, LA, London, ...", a third level that displays all the data with one photo, and a fourth level that displays the detail of that restaurant, hotel, gift shop, ...

    My only interest is how to retrieve the data from a remote database server, not the iPhone's local one, because the locations and shops need to be updated frequently (you know, some shops close, new shops open). So far I have figured out that I can use XML to retrieve the data. However, is using XML the most effective way to implement this? Is there any other way to accomplish it? Transferring XML data over the 3G network is really slow; an XML file has more bytes than a plist file. I'm currently a beginner at iPhone development, so please help me find the right way!

    And one more question: if I use the XML approach, is paging possible? (First page lists 10 items, with a "more" button below... you might guess each category has hundreds of shops!)

    Read the article

  • Definition of the job titles involved in a software development process.

    - by Rafael Romão
    I have seen many job titles for people involved in a software development process, but have never found a consensus about what they mean. I know many of them are equivalent, and I found some other questions about that here on SO, but I would like to know your definitions and comments about them. I want not only to know if there is really a consensus, but also to know whether what I suppose to be a Software Architect really is a Software Architect, and so on. The job titles I mean are:

    - Developer
    - System Analyst
    - Programmer
    - Analyst Programmer
    - Software Engineer
    - Software Architect
    - Designer
    - Software Designer
    - Business Manager
    - Business Analyst
    - Program Manager
    - Project Manager
    - Development Manager
    - Tester
    - Support Analyst

    Please feel free to add more titles to this list in your answers. It would be very helpful.

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the back end to a load-balanced, web-facing application that is implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the web servers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance.

    Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS. This will be even worse in production (where the front end and back end have even more network hardware between them) and as our database gets even bigger.

    While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    1. Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?
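
    For context, the random reads described above are per-key point lookups of roughly this shape (a minimal sketch; the file name and key are hypothetical). Each gdbm_fetch is a small read against the .gdbm file, so on an NFS mount every lookup that misses the client-side cache turns into a network round trip, which is consistent with the jump from milliseconds to seconds:

        #include <gdbm.h>
        #include <cstdio>
        #include <cstdlib>
        #include <string>

        int main()
        {
            // Read-only open; older GDBM headers declare the name as char*, hence the cast.
            GDBM_FILE db = gdbm_open(const_cast<char*>("catalog.gdbm"), 0, GDBM_READER, 0644, nullptr);
            if (!db) { std::fprintf(stderr, "gdbm_open failed\n"); return 1; }

            std::string key_str = "product:12345";
            datum key{const_cast<char*>(key_str.data()), static_cast<int>(key_str.size())};

            datum value = gdbm_fetch(db, key);   // one random read per key
            if (value.dptr) {
                std::printf("%.*s\n", value.dsize, value.dptr);
                std::free(value.dptr);           // gdbm_fetch returns malloc'd memory
            }

            gdbm_close(db);
            return 0;
        }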

    Read the article
