Search Results

Search found 25727 results on 1030 pages for 'solution'.


  • Please give me a solution

    - by user327832
    Here is the code I have written so far, but it ends up giving me an error:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class Main {
            public static void main(String[] args) throws Exception {
                File file = new File("c:\\filea.txt");
                InputStream is = new FileInputStream(file);
                long length = file.length();
                System.out.println(length);
                bytes[] bytes = new bytes[(int) length];
                try {
                    int offset = 0;
                    int numRead = 0;
                    while (numRead >= 0) {
                        numRead = is.read(bytes);
                    }
                } catch (IOException e) {
                    System.out.println("Could not completely read file " + file.getName());
                }
                is.close();
                Object[] see = new Object[(int) length];
                see[1] = bytes;
                System.out.println((String[]) see[1]);
            }
        }

    Read the article

  • Build Event Macros for Other Projects in the Solution

    - by Adam Driscoll
    Is it possible to reference other projects' properties via a macro within a build event? For example:

    - "Tool1" outputs to the directory ..\..\bin\Release
    - "Component1" uses "Tool1" in its post-build event
    - to get to "Tool1", "Component1"'s project must do something like $(SolutionDir)bin\Release

    This requires that Tool1 always output to ..\..\bin\Release; if this is changed, it breaks the other project. I know there is no indication of this within the macro list, but is there a way to reference another project? Maybe something like $(OtherProject.TargetDir)... I know WiX has a similar syntax, [$(var.OtherProject.TargetDir)], but I think that may be a different mechanism.

    Read the article

  • Javascript setInterval and `this` solution

    - by Michael
    I need to access this from my setInterval handler:

        prefs: null,

        startup: function() {
            // init prefs ...
            this.retrieve_rate();
            this.intervalID = setInterval(this.retrieve_rate, this.INTERVAL);
        },

        retrieve_rate: function() {
            var ajax = null;
            ajax = new XMLHttpRequest();
            ajax.open('GET', 'http://xyz.com', true);
            ajax.onload = function() {
                // access prefs here
            }
        }

    How can I access this.prefs in ajax.onload?
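
    A common pattern (a sketch, not from the original post) is to capture this in a local variable before the closures are created, so the inner functions can still reach the outer object:

        startup: function() {
            var self = this;  // keep a reference the closures below can see
            this.retrieve_rate();
            this.intervalID = setInterval(function() {
                self.retrieve_rate();  // invoked with the intended receiver
            }, this.INTERVAL);
        },

        retrieve_rate: function() {
            var self = this;
            var ajax = new XMLHttpRequest();
            ajax.open('GET', 'http://xyz.com', true);
            ajax.onload = function() {
                // inside onload, `this` is the XMLHttpRequest object,
                // but self.prefs still reaches the outer object
                var prefs = self.prefs;
            };
            ajax.send();
        }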

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical, and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps, and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
    - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving out stuff which it would be nice to include, such as the contents of users' home directories. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small/simple REST web queries in a production environment. For each 5-10 related queries executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid-based architectures, such as Condor and Sun Grid Engine, which I have seen mentioned. I'm not sure if these platforms have any recovery from errors, though (checking whether a job succeeds). What I would really like is a FIFO-type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?

    Read the article

  • Setting Environment Variables For NMAKE Before Building A 'Makefile Solution'

    - by John Dibling
    I have an MSVC Makefile Project in which I need to set an environment variable before running NMAKE. For x64 builds I need to set it to one value, and for x86 builds I need to set it to something else. So, for example, when doing a build I would want to SET PLATFORM=win64 if I'm building a 64-bit compile, or SET PLATFORM=win32 if I'm building 32-bit. There does not appear to be an option to set environment variables or add a pre-build event for makefile projects. How do I do this? EDIT: Running MSVC 2008

    Read the article

  • Infor PM (Business Intelligence solution)

    - by Andrew
    We are currently implementing the commercial Infor PM (Performance Management) package as a business intelligence tool. Infor PM website. It is apparently used by over 1,000 companies around the world, but I have found scant information about it on the net except for what's on their own website. It covers the whole range of data warehousing and BI functions with:

    - an OLAP environment
    - an ETL tool
    - a report writer (called Application Studio)
    - an add-on to Excel to connect to the data in the cubes through a pivot table
    - etc.

    Does anyone have any experience with using this package? How does it compare to the big players in BI (Cognos, Microsoft SSAS, Business Objects, etc.)? Any pitfalls I should know about? On the other hand, does it do anything better than its competitors?

    Read the article

  • Clean solution to this ruby iterator trickiness?

    - by mstksg
        k = [1,2,3,4,5]
        for n in k
          puts n
          if n == 2
            k.delete(n)
          end
        end
        puts k.join(",")

        # Result:  1 2 4 5    [1,3,4,5]
        # Desired: 1 2 3 4 5  [1,3,4,5]

    This same effect happens with the other array iterator, k.each:

        k = [1,2,3,4,5]
        k.each do |n|
          puts n
          if n == 2
            k.delete(n)
          end
        end
        puts k.join(",")

    has the same output. The reason this is happening is pretty clear: Ruby doesn't actually iterate through the objects stored in the array, but rather just turns it into an array index iterator, starting at index 0 and incrementing the index each time until it's done. But when you delete an item, it still increments the index, so it doesn't evaluate the same index twice, which is what I want it to do. This might not be what's happening, but it's the best I can think of. Is there a clean way to do this? Is there already a built-in iterator that can do this? Or will I have to dirty it up and use an array index iterator, and not increment when an item is deleted?

    Read the article

  • Apache rails beta site access solution

    - by par
    I'm building an RoR site and have been asked to put a temporary access restriction on it. All that's needed is a general access restriction and common access info which can be emailed to invited beta users. The site is deployed on an Apache server (on a Mac) using Passenger. I'm wondering what solutions there are.

    Read the article

  • Best solution to wait for all ajax callbacks to be executed

    - by glaz666
    Hi! Imagine we have two sources to be requested via ajax. I want to perform some actions when all callbacks have been triggered. How can this be done, besides this approach:

        (function($){
            var sources = ['http://source1.com', 'http://source2.com'],
                guard = 0,
                someHandler = function() {
                    if (guard != sources.length) {
                        return;
                    }
                    // do some actions
                };

            for (var idx in sources) {
                $.getJSON(sources[idx], function(){
                    guard++;
                    someHandler();
                });
            }
        })(jQuery)

    What I don't like here is that I can't handle a failing response (e.g. I can't set a timeout for the response to arrive), and the overall approach (I suppose there should be a way to use more of the power of functional programming here). Any ideas? Regards!
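
    One possible alternative (a sketch, assuming jQuery 1.5+ and its Deferred API, which the original post does not mention) is to let $.when aggregate the requests, which also exposes a failure path:

        (function($){
            var sources = ['http://source1.com', 'http://source2.com'];

            // $.getJSON returns a promise in jQuery 1.5+; collect one per source.
            var requests = $.map(sources, function(url) {
                return $.getJSON(url);
            });

            // $.when resolves once every request has succeeded, and rejects
            // as soon as any single one fails.
            $.when.apply($, requests)
                .done(function() {
                    // all callbacks have fired; do some actions
                })
                .fail(function() {
                    // at least one request failed; a timeout can be enforced
                    // with $.ajaxSetup({ timeout: 5000 }) if desired
                });
        })(jQuery);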

    Read the article

  • Solution for updating table based on data from another table

    - by I__
    I have 2 tables in Access. This is what I need:

    1. if the PK from table1 exists in table2, then delete the entire record with that PK from table2 and add the entire record from table1 into table2
    2. if the PK does not exist, then add the record

    I need help with both the SQL statement and the VBA. I guess the VBA should be a loop going through every record in table1; inside the loop I should have the SELECT statement.

    Read the article

  • Online backup solution with API for desktop

    - by user161179
    I made a small backup application that simply creates an archive out of specified files and folders. Now I need an online service to back that archive up. Which service can I use that can be integrated into my app? Options I considered:

    - Dropbox is ideal, but they have all but abandoned the desktop.
    - SkyDrive has no API.
    - I couldn't find any free, reliable backup service that uses FTP.

    Anything else? It should provide 1-2 GB of free space and be reasonably reliable. Thanks. My app is in C#, but it can be ported to any other language as well.

    Read the article

  • Javascript timezone solution needed

    - by user198729
    I have Unix timestamps from time zone X, which is not known. The current timestamp (now()) in TZ X is known: 1275143019. How do I approach a JavaScript function so that it can generate the datetime in the user's current TZ, in the format 2010-05-29 15:32:35?
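
    A sketch of one way to approach this (makeConverter is a hypothetical helper, not from the original post, and it assumes the known remote timestamp was sampled at roughly the moment the script runs): treat the difference between the remote clock and the local clock as a fixed offset, shift each timestamp by it, and format the result with the local Date API:

        // Assumption: remoteNow was sampled at (roughly) the same moment this runs.
        function makeConverter(remoteNow) {
            // Offset between TZ X's clock and this machine's clock, in seconds.
            var offset = remoteNow - Math.floor(Date.now() / 1000);
            return function (ts) {
                var d = new Date((ts - offset) * 1000);  // shift into the local clock
                function pad(n) { return n < 10 ? '0' + n : '' + n; }
                return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate())
                     + ' ' + pad(d.getHours()) + ':' + pad(d.getMinutes()) + ':' + pad(d.getSeconds());
            };
        }

        // Usage:
        // var toLocal = makeConverter(1275143019);
        // toLocal(1275143019);  // -> the current date/time in the user's TZ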

    Read the article

  • What's the solution for this GTK warning?

    - by Runner
        GtkWidget *textview;
        ...
        textview = gtk_text_view_new ();
        ...
        buffer = gtk_text_view_get_buffer (textview);

    At the last line pasted, I get this warning:

        warning C4133: 'function' : incompatible types - from 'GtkWidget *' to 'GtkTextView *'

    How can I fix that?

    Read the article

  • SQL Design Question regarding schema and if Name value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data into the database. The number of fields that can be parsed is between 1 and 42 at the moment. The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That suggests I should normalize the data. However, data integrity is not needed at the moment, and given the way the data is shaped, I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extendable. If they want to add more fields to be parsed (which they do), then I'd need to create another table and add another foreign key to the linking table. The third option is a table where the fields are defined and a table for each record: make a table that stores the value and then links to those two tables. The problem is I can picture the size of that table growing large depending on the input size. If someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point I should be happy it is being used. This option also allows for more custom display of information, albeit with a bit more work but little rework even if you add more fields. So the problem boils down to:

    1. The current design is a flat file, which makes extending it hard, and it is not normalized.
    2. Normalize the tables, although there are no real benefits for the moment; but requirements change.
    3. Normalize it down into the name-value pair and hope size doesn't hurt.

    There is a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is: design now, performance-test later? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.

    Read the article

  • FOSS solution for a local machine: DNS

    - by Shyam
    Hi, I love my Mac. But I have always found that my DNS lookups are slow, even after flushing caches, and I travel over known roads on the Internet. I was wondering if someone knows of something a bit more automatic/intelligent than /etc/hosts, yet less complex and iron-forged than BIND. Thank you for your feedback and answers!

    Read the article

  • Porting Python algorithm to C++ - different solution

    - by cb0
    Hello, I have written a little brute string generation script in Python to generate all possible combinations of an alphabet within a given length. It works quite nicely, but because I want it to be faster I am trying to port it to C++. The problem is that my C++ code creates far too many combinations for one word. Here is my example in Python: ./test.py gives me

        aaa aab aac aad aa aba ....

    while ./test (the C++ program) gives me

        aaa aaa aaa aaa aa

    Here I also get all possible combinations, but I get them twice or more often. Here is the code for both programs:

        #!/usr/bin/env python
        import sys

        # Brute String Generator
        # Start it with ./brutestringer.py 4 6 "abcdefghijklmnopqrstuvwxyz1234567890" ""
        # will produce all strings with length 4 to 6 and chars from a to z and numbers 0 to 9

        def rec(w, p, baseString):
            for c in "abcd":
                if (p < w - 1):
                    rec(w, p + 1, baseString + "%c" % c)
                print baseString

        for b in range(3, 4):
            rec(b, 0, "")

    And here the C++ code:

        #include <iostream>
        #include <string>

        using namespace std;

        string chars = "abcd";

        void rec(int w, int b, string p) {
            unsigned int i;
            for (i = 0; i < chars.size(); i++) {
                if (b < (w - 1)) {
                    rec(w, (b + 1), p + chars[i]);
                }
                cout << p << "\n";
            }
        }

        int main() {
            int a = 3, b = 0;
            rec(a + 1, b, "");
            return 0;
        }

    Does anybody see my fault? I don't have much experience with C++. Thanks indeed.

    Read the article
