Search Results

Search found 25836 results on 1034 pages for 'solution evangelist'.

  • please give me a solution

    - by user327832
    Here is the code I have written so far, but it ends up giving me an error:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class Main {
            public static void main(String[] args) throws Exception {
                File file = new File("c:\\filea.txt");
                InputStream is = new FileInputStream(file);
                long length = file.length();
                System.out.println (length);
                bytes[] bytes = new bytes[(int) length];
                try {
                    int offset = 0;
                    int numRead = 0;
                    while (numRead >= 0) {
                        numRead = is.read(bytes);
                    }
                } catch (IOException e) {
                    System.out.println ("Could not completely read file " + file.getName());
                }
                is.close();
                Object[] see = new Object[(int) length];
                see[1] = bytes;
                System.out.println ((String[])see[1]);
            }
        }
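
    [Editor's note] A sketch of a likely fix, not from the original post: Java has no `bytes` type, so the buffer must be declared `byte[]`; the read loop should track an offset so a short read cannot spin forever; and raw bytes become text via `new String(bytes)`, not a cast to `String[]`:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class Main {
            public static void main(String[] args) throws IOException {
                File file = new File("c:\\filea.txt");
                long length = file.length();
                byte[] bytes = new byte[(int) length];   // byte[], not bytes[]
                InputStream is = new FileInputStream(file);
                try {
                    int offset = 0;
                    while (offset < bytes.length) {
                        int numRead = is.read(bytes, offset, bytes.length - offset);
                        if (numRead < 0) {
                            break;  // end of stream before the buffer filled
                        }
                        offset += numRead;
                    }
                } finally {
                    is.close();
                }
                // Decode the bytes as text instead of casting to String[].
                System.out.println(new String(bytes));
            }
        }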

  • CakePHP, an elegant solution to quantities?

    - by Smickie
    Hi, I have a shopping cart system in CakePHP; the cart table has all your usual maguffins: user_ids, product_ids, option lists, etc. It also has quantity. I currently have some awful nested loops that check whether a new item matches any existing record; if so, they add one to the quantity, and if not, they add a new cart item. This loop has to check associated list items and product options, so it goes quite deep. What I'm wondering is whether there is a more elegant way of checking that two cart items in the database are similar (everything except quantity). Cheers!
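
    [Editor's note] One common pattern, sketched here with hypothetical column and field names that are not from the question: reduce everything that identifies a line item, options included, to a single fingerprint stored on the row, so "is a similar item already in the cart?" becomes one indexed find instead of nested loops.

        // Hypothetical CartItem model code (CakePHP 1.x style); the
        // fingerprint column and all field names are assumptions.
        $fingerprint = md5(serialize(array(
            $userId,
            $productId,
            $optionIds,   // sorted option ids, so ordering cannot differ
        )));

        $existing = $this->CartItem->find('first', array(
            'conditions' => array('CartItem.fingerprint' => $fingerprint),
        ));

        if ($existing) {
            // Same line item already in the cart: bump the quantity.
            $this->CartItem->id = $existing['CartItem']['id'];
            $this->CartItem->saveField('quantity',
                $existing['CartItem']['quantity'] + 1);
        } else {
            // Genuinely new line item.
            $this->CartItem->create();
            $this->CartItem->save(array(
                'user_id'     => $userId,
                'product_id'  => $productId,
                'fingerprint' => $fingerprint,
                'quantity'    => 1,
            ));
        }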

  • EXC_BAD_ACCESS when trying to delete from Core Data ($cash solution)

    - by llloydxmas
    I have an application that downloads an xml file, parses the file, and creates Core Data objects while doing so. In the parse code I have a method called emptyDataContext that removes all items from Core Data before creating replacement items from the xml data. This method looks like this:

        - (void) emptyDataContext
        {
            NSFetchRequest *allCon = [[NSFetchRequest alloc] init];
            [allCon setEntity:[NSEntityDescription entityForName:@"Condition"
                                          inManagedObjectContext:managedObjectContext]];
            NSError *error = nil;
            NSArray *conditions = [managedObjectContext executeFetchRequest:allCon error:&error];
            DebugLog(@"ERROR: %@", error);
            DebugLog(@"RETRIEVED: %@", conditions);
            [allCon release];

            for (NSManagedObject *condition in conditions) {
                [managedObjectContext deleteObject:condition];
            }

            // Update the data model, effectively removing the objects we removed above.
            //NSError *error;
            if (![managedObjectContext save:&error]) {
                DebugLog(@"%@", [error domain]);
            }
        }

    The first time this runs, it deletes all objects and functions as it should, creating new objects from the xml file. I created an 'update' button that starts the exact same process of retrieving the file and proceeding with the parse and build. All is well until it's time to delete the Core Data objects: the deleteObject call raises EXC_BAD_ACCESS every time, and only on the second pass. Captured errors return null. If I log the conditions array, I get a list of NSManagedObjects on the first run; on the second run, that log statement crashes exactly as the deleteObject call does. I have a feeling it is something very simple I'm missing or not doing correctly. The data works great in my table views; it's only when updating that I get the crashes. I have spent days and days on this, trying numerous alternative approaches. What's left of my hair is falling out. I'd be willing to ante up some cash for anyone willing to look at my code and see what I'm doing wrong. Just need to get past this hurdle. Thanks in advance for the help!
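
    [Editor's note] Without the surrounding code this is only a hedged guess: "works once, then crashes while merely logging the fetched objects" usually points either at an over-released object or context (run with NSZombieEnabled to confirm) or at one NSManagedObjectContext being touched from two threads, e.g. if the download/parse runs in the background. Contexts are not thread-safe; the pre-iOS-5 remedy is to give the background thread its own context on the same coordinator, roughly:

        // A sketch of the thread-confinement pattern; assumes the
        // download/parse runs on a background thread while
        // managedObjectContext belongs to the main thread.
        - (void)parseAndRebuildInBackground
        {
            NSManagedObjectContext *threadContext =
                [[NSManagedObjectContext alloc] init];
            [threadContext setPersistentStoreCoordinator:
                [managedObjectContext persistentStoreCoordinator]];

            // ... fetch, delete and re-create the Condition objects here,
            // touching threadContext only, never the main-thread context ...

            NSError *error = nil;
            if (![threadContext save:&error]) {
                DebugLog(@"%@", error);
            }
            [threadContext release];
        }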

  • best content on how to deploy and share a VSTO solution

    - by ooo
    With the push to leverage Visual Studio and .NET for Office-based solutions, especially Excel, where is the best article or information on how an Office sheet with additional binaries and assemblies can be shared?

    - Does this external code get packaged with the spreadsheet? What happens if people start emailing the spreadsheet around?
    - Is there any overhead from these additional assemblies?
    - Is there a risk of the binaries getting detached from the spreadsheet?

    It seems like Microsoft has been pushing VSTO for over 5 years now, but you read lots of mixed reviews and reported issues. Are we at the point where companies that do large VBA Excel solutions can fully migrate over to .NET without any real worries?

  • Build Event Macros for Other Projects in the Solution

    - by Adam Driscoll
    Is it possible to reference other projects' properties via a macro within a build event? For example: "Tool1" outputs to directory ..\..\bin\Release, and "Component1" uses "Tool1" in its post-build event. To get to "Tool1", "Component1"'s project must do something like $(SolutionDir)bin\Release. This requires that Tool1 always output to ..\..\bin\Release; if that changes, the other project breaks. I know there is no indication of this within the macro list, but is there a way to reference another project, maybe like $(OtherProject.TargetDir)? I know WiX has a similar syntax, [$(var.OtherProject.TargetDir)], but I think that may be a different mechanism.
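
    [Editor's note] Visual Studio's build-event macros cannot see another project's properties, but one common workaround (a sketch in VS2005/2008 .vsprops syntax; the macro name is an assumption) is a shared property sheet that both projects inherit, so the output path is defined exactly once:

        <?xml version="1.0" encoding="Windows-1252"?>
        <!-- shared.vsprops: add to both projects via
             View > Property Manager > Add Existing Property Sheet. -->
        <VisualStudioPropertySheet
            ProjectType="Visual C++"
            Version="8.00"
            Name="SharedPaths">
          <UserMacro
              Name="ToolOutputDir"
              Value="$(SolutionDir)bin\Release"
              PerformEnvironmentSet="true"/>
        </VisualStudioPropertySheet>

    Tool1 then sets its output directory to $(ToolOutputDir) and Component1's post-build event copies from $(ToolOutputDir); changing the path in the sheet updates both projects at once.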

  • JavaScript setInterval and `this` solution

    - by Michael
    I need to access this from my setInterval handler:

        prefs: null,

        startup: function() {
            // init prefs ...
            this.retrieve_rate();
            this.intervalID = setInterval(this.retrieve_rate, this.INTERVAL);
        },

        retrieve_rate: function() {
            var ajax = null;
            ajax = new XMLHttpRequest();
            ajax.open('GET', 'http://xyz.com', true);
            ajax.onload = function() {
                // access prefs here
            };
        }

    How can I access this.prefs in ajax.onload?
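
    [Editor's note] A minimal sketch of the usual fix, not from the original post: passing this.retrieve_rate to setInterval detaches the method from its object, so capture the object in a local variable and let every callback close over it. The container name and INTERVAL value are assumptions:

        var rateWatcher = {          // hypothetical container object
            prefs: null,
            INTERVAL: 60000,

            startup: function() {
                var self = this;
                this.retrieve_rate();
                // Wrap the call so retrieve_rate keeps its receiver.
                this.intervalID = setInterval(function() {
                    self.retrieve_rate();
                }, this.INTERVAL);
            },

            retrieve_rate: function() {
                var self = this;     // capture the object, not the XHR
                var ajax = new XMLHttpRequest();
                ajax.open('GET', 'http://xyz.com', true);
                ajax.onload = function() {
                    // `this` here is the XMLHttpRequest; use self instead.
                    var prefs = self.prefs;
                };
                ajax.send(null);
            }
        };

        rateWatcher.startup();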

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical, and they're all using RAID. To date, I've therefore been backing them up by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
    - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day's
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving out stuff which it would be nice to include, e.g. the contents of users' home directories; there's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.
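
    [Editor's note] One widely used approach, sketched below with hostnames and paths that are assumptions: hard-link snapshots with rsync, the idea behind tools like rsnapshot and rdiff-backup. Unchanged files become hard links into the previous snapshot, so each day costs only the delta while every snapshot still looks like a full copy you can restore from directly.

        #!/bin/sh
        # Runs on the offsite box, pulling from each office machine over SSH.
        TODAY=$(date +%F)
        for HOST in box1 box2 box3 box4; do
            rsync -a --delete \
                --link-dest=/backups/$HOST/latest \
                root@$HOST:/etc root@$HOST:/home root@$HOST:/var/backups \
                /backups/$HOST/$TODAY
            # Point "latest" at the snapshot we just made.
            ln -sfn /backups/$HOST/$TODAY /backups/$HOST/latest
        done

    The MySQL dumps still need their own cron job on each box, but once they land in a pulled directory they ride along for free, and burning any single snapshot directory to DVD/Blu-Ray covers the occasional-media requirement.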

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small/simple REST web queries in a production environment. For each 5-10 related queries executed on a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also grid-based architectures such as Condor and Sun Grid Engine, which I have seen mentioned, but I'm not sure whether these platforms have any recovery from errors (checking whether a job succeeded). What I would really like is a FIFO-type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?
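
    [Editor's note] Since PostgreSQL is already in the picture, one low-ceremony option is to use a table as the FIFO queue and let each worker claim one row per transaction, which gives error recovery for free: a job whose transaction aborts simply stays pending. This is a sketch; the jobs/results tables and fetch_and_derive() are assumptions, not an existing API.

        # A sketch of a Postgres-backed FIFO work queue (Python 2 era).
        import json
        import urllib2

        import psycopg2

        def fetch_and_derive(url):
            # Placeholder for the 5-10 small REST queries per job.
            return json.dumps({"source": url,
                               "head": urllib2.urlopen(url).read()[:200]})

        def claim_and_run_one(conn):
            """Claim the oldest pending job, run it, store the derived data.

            One transaction per job: if the worker dies, the row lock is
            released and the job is still 'pending' for the next worker.
            """
            with conn:
                cur = conn.cursor()
                cur.execute("SELECT id, url FROM jobs"
                            " WHERE status = 'pending'"
                            " ORDER BY id LIMIT 1 FOR UPDATE")
                row = cur.fetchone()
                if row is None:
                    return False                  # queue drained
                job_id, url = row
                payload = fetch_and_derive(url)
                cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s",
                            (job_id,))
                cur.execute("INSERT INTO results (job_id, payload)"
                            " VALUES (%s, %s)", (job_id, payload))
            return True

        if __name__ == "__main__":
            conn = psycopg2.connect("dbname=tasks")   # connection string assumed
            while claim_and_run_one(conn):
                pass

    Plain FOR UPDATE makes concurrent workers queue up behind one another on the claim; if that ever becomes the bottleneck, a dedicated queue server (beanstalkd, Gearman, RabbitMQ) in front of the same results table removes it.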

  • Setting Environment Variables For NMAKE Before Building A 'Makefile Solution'

    - by John Dibling
    I have an MSVC makefile project in which I need to set an environment variable before running NMAKE. For x64 builds I need to set it to one value, and for x86 builds I need to set it to something else. So, for example, when doing a build I would want SET PLATFORM=win64 for a 64-bit compile, or SET PLATFORM=win32 for a 32-bit one. There does not appear to be an option to set environment variables, or to add a pre-build event, for makefile projects. How do I do this? EDIT: Running MSVC 2008.
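
    [Editor's note] One workaround, sketched here (only PLATFORM itself comes from the question): the Build Command Line of a makefile project is an ordinary cmd line, so the variable can be set inline, per configuration, before NMAKE starts. Note there is deliberately no space before &&, which would otherwise end up inside the value:

        rem Build Command Line of the x64 configuration (project properties):
        set PLATFORM=win64&& nmake /f Makefile

        rem Build Command Line of the Win32 configuration:
        set PLATFORM=win32&& nmake /f Makefile

    NMAKE inherits environment variables as macros, so the makefile can also branch on it directly:

        # Makefile fragment: pick outputs per platform.
        !IF "$(PLATFORM)" == "win64"
        OUTDIR = bin\x64
        !ELSE
        OUTDIR = bin\x86
        !ENDIF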

  • Infor PM (Business Intelligence solution)

    - by Andrew
    We are currently implementing the commercial Infor PM (Performance Management) package as a business intelligence tool (Infor PM website). It is apparently used by over 1,000 companies around the world, but I have found scant information about it on the net except for what's on their own website. It covers the whole range of data warehousing and BI functions with:

    - an OLAP environment
    - an ETL tool
    - a report writer (called Application Studio)
    - an add-on to Excel to connect to the data in the cubes through a pivot table
    - etc.

    Does anyone have any experience with using this package? How does it compare to the big players in BI (Cognos, Microsoft SSAS, Business Objects, etc.)? Any pitfalls I should know about? On the other hand, does it do anything better than its competitors?

  • Clean solution to this ruby iterator trickiness?

    - by mstksg
        k = [1,2,3,4,5]
        for n in k
          puts n
          if n == 2
            k.delete(n)
          end
        end
        puts k.join(",")

        # Result:  1, 2, 4, 5, then "1,3,4,5"
        # Desired: 1, 2, 3, 4, 5, then "1,3,4,5"

    This same effect happens with the other array iterator, k.each:

        k = [1,2,3,4,5]
        k.each do |n|
          puts n
          if n == 2
            k.delete(n)
          end
        end
        puts k.join(",")

    has the same output. The reason this is happening is pretty clear: Ruby doesn't actually iterate through the objects stored in the array, but rather just walks an array index, starting at index 0 and incrementing until it's done. When you delete an item, the index still increments, so the element that slid into the deleted slot is never visited; I want it to be. This might not be exactly what's happening, but it's the best explanation I can think of. Is there a clean way to do this? Is there already a built-in iterator that can handle it? Or will I have to dirty it up with an explicit array index, and skip the increment when an item is deleted?
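
    [Editor's note] A minimal sketch of the usual answer: iterate over a copy, so deleting from the original can no longer disturb the traversal.

        k = [1, 2, 3, 4, 5]
        k.dup.each do |n|        # walk a snapshot of the array
          puts n                 # prints 1 2 3 4 5
          k.delete(n) if n == 2  # mutate the original freely
        end
        puts k.join(",")         # => 1,3,4,5

    If the per-element side effects don't matter and only the final array is wanted, k.reject! { |n| n == 2 } (or delete_if) expresses the deletion directly.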

  • Apache Rails beta site access solution

    - by par
    I'm building a Ruby on Rails site and have been asked to put a temporary access restriction on it. All that's needed is a general access restriction with common access credentials which can be emailed to invited beta users. The site is deployed on an Apache server (on a Mac) using Passenger. I'm wondering what solutions there are?
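
    [Editor's note] Since Apache is already fronting the app, the lightest option is HTTP Basic authentication in the virtual host, which gates everything before it ever reaches Rails. A sketch; the paths and realm name are assumptions:

        # Inside the site's <VirtualHost>:
        <Location />
            AuthType Basic
            AuthName "Private beta"
            AuthUserFile /etc/apache2/beta.htpasswd
            Require valid-user
        </Location>

    Create the shared account once with `htpasswd -c /etc/apache2/beta.htpasswd betatester` and email those credentials to the invitees. Doing it inside Rails instead, with authenticate_or_request_with_http_basic in a before_filter, also works, but the Apache route needs no code change and is trivial to remove at launch.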

  • Best solution to wait for all ajax callbacks to be executed

    - by glaz666
    Hi! Imagine we have two sources to be requested by ajax. I want to perform some actions when all callbacks are triggered. How can this be done, besides the following approach?

        (function($){
            var sources = ['http://source1.com', 'http://source2.com'],
                guard = 0,
                someHandler = function() {
                    if (guard != sources.length) {
                        return;
                    }
                    // do some actions
                };

            for (var idx in sources) {
                $.getJSON(sources[idx], function(){
                    guard++;
                    someHandler();
                })
            }
        })(jQuery)

    What I don't like here is that I can't handle response failures (e.g. I can't set a timeout for a response to come), and the overall approach (I suppose there should be a way to use more of the power of functional programming here). Any ideas? Regards!
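
    [Editor's note] If jQuery 1.5+ is an option, Deferreds do exactly this aggregation: $.when resolves once every request succeeds, fails fast as soon as any request fails, and the ajax timeout option covers the never-arriving response. A sketch, not from the original post:

        (function($) {
            var sources = ['http://source1.com', 'http://source2.com'];

            // Each $.ajax call returns a promise-like jqXHR object.
            var requests = $.map(sources, function(url) {
                return $.ajax({ url: url, dataType: 'json', timeout: 5000 });
            });

            $.when.apply($, requests)
                .done(function() {
                    // every source answered; do some actions
                })
                .fail(function() {
                    // at least one request failed or timed out
                });
        })(jQuery);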

  • solution for updating table based on data from another table

    - by I__
    I have 2 tables in Access. This is what I need:

    1. if the PK from table1 exists in table2, then delete the entire record with that PK from table2 and add the entire record from table1 into table2
    2. if the PK does not exist, then add the record

    I need help with both the SQL statements and the VBA. I guess the VBA should be a loop going through every record in table1, with the SELECT statement inside the loop.
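
    [Editor's note] Both cases collapse into two set-based statements, so no per-record loop is needed: first delete every row in table2 whose key also appears in table1, then copy all of table1 across. A sketch; "id" and the table names are assumptions:

        ' Access VBA; assumes identical table layouts and a key column "id".
        Dim db As DAO.Database
        Set db = CurrentDb

        ' Case 1: the PK already exists in table2 - drop the old record.
        db.Execute "DELETE FROM table2 " & _
                   "WHERE id IN (SELECT id FROM table1)", dbFailOnError

        ' Cases 1 and 2: add every record from table1
        ' (replacements and brand-new rows alike).
        db.Execute "INSERT INTO table2 SELECT * FROM table1", dbFailOnError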

  • JavaScript timezone solution needed

    - by user198729
    I have unix timestamps from time zone X, which is not known; the current timestamp (now()) in TZ X is known: 1275143019. How should I approach a JavaScript function that generates the datetime in the user's current TZ, in the format 2010-05-29 15:32:35?
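
    [Editor's note] A sketch under the question's own premise (the stamps read like epoch seconds on a clock in zone X, and the server also reports its current stamp): the difference between that reported "now" and the client's real now is the zone offset, which can be subtracted from every stamp before formatting locally. The function name is an assumption:

        // serverStamp:    a timestamp from zone X to convert
        // serverNowStamp: what now() returns in zone X, e.g. 1275143019
        function toLocalDateTime(serverStamp, serverNowStamp) {
            // How far the zone-X clock runs ahead of real UTC, in ms.
            var offsetMs = serverNowStamp * 1000 - new Date().getTime();
            var d = new Date(serverStamp * 1000 - offsetMs);

            function pad(n) { return (n < 10 ? '0' : '') + n; }
            return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' +
                   pad(d.getDate()) + ' ' + pad(d.getHours()) + ':' +
                   pad(d.getMinutes()) + ':' + pad(d.getSeconds());
        }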

  • What's the solution for this GTK warning?

    - by Runner
        GtkWidget *textview;
        ...
        textview = gtk_text_view_new ();
        ...
        buffer = gtk_text_view_get_buffer (textview);

    On the last line I get this warning:

        warning C4133: 'function' : incompatible types - from 'GtkWidget *' to 'GtkTextView *'

    How can I fix that?
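
    [Editor's note] This one has a standard fix: gtk_text_view_new() returns a generic GtkWidget *, while gtk_text_view_get_buffer() expects a GtkTextView *, and GTK's checked-cast macro converts between them:

        GtkWidget *textview = gtk_text_view_new ();
        /* GTK_TEXT_VIEW() performs a run-time-checked downcast
           from GtkWidget* to GtkTextView*. */
        GtkTextBuffer *buffer =
            gtk_text_view_get_buffer (GTK_TEXT_VIEW (textview));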

  • online backup solution with api for desktop

    - by user161179
    I made a small backup application that simply creates an archive out of specified files and folders. Now I need an online service to store that backup. Which service can I use that can be integrated into my app? Options I considered:

    - Dropbox is ideal, but they have all but abandoned the desktop
    - SkyDrive has no API
    - I couldn't find any free, reliable backup service that uses FTP

    Anything else? It should provide 1-2 GB of free space and be reasonably reliable. Thanks! My app is in C#, but it can be ported to any other language as well.
