Search Results

Search found 1586 results on 64 pages for 'devil night'.

Page 53/64

  • Why does autoboxing in Java allow me to have 3 possible values for a boolean?

    - by John
    Reference: http://java.sun.com/j2se/1.5.0/docs/guide/language/autoboxing.html If your program tries to auto-unbox null, it will throw a NullPointerException. javac will give you a compile-time error if you try to assign null to a boolean, which makes sense. Assigning null to a Boolean is fine, though, which I suppose also makes sense. But consider what it means that you get an NPE when auto-unboxing null: you can't safely perform boolean operations on a Boolean without null checks or exception handling. The same goes for doing math on an Integer. For a long time I was a fan of autoboxing in Java 1.5+ because I thought it brought Java closer to being truly object-oriented, but after running into this problem last night I have to say I think it stinks. The compiler giving me an error when I try to do something with an uninitialized primitive is a good thing. I may be misunderstanding the point of autoboxing, but at the same time I will never accept that a boolean should be able to have three values. Can anyone explain this? What am I not getting?
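
    A minimal, self-contained sketch of the behaviour being described (the class and variable names are made up for illustration):

        // Boolean has three possible states: Boolean.TRUE, Boolean.FALSE and null.
        // Auto-unboxing the null case is what throws the NullPointerException.
        public class BooleanNpeDemo {
            public static void main(String[] args) {
                Boolean flag = null;           // legal: Boolean is a reference type
                // boolean b = null;           // illegal: compile-time error, as noted above

                if (flag != null && flag) {    // safe: explicit null check before unboxing
                    System.out.println("flag is true");
                }

                boolean unboxed = flag;        // throws NullPointerException at runtime
                System.out.println(unboxed);
            }
        }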

    Read the article

  • Music meta missing from Facebook /home

    - by Peter Watts
    When somebody shares a Spotify playlist, the attachment is missing from the Graph API. What is shown in Facebook: (screenshot omitted). What is returned by the Graph API:

        { "id": "********_******", "from": { "name": "*****", "id": "*****" }, "message": "Refused's setlist from last night's secret show in Sweden...", "icon": "http://photos-c.ak.fbcdn.net/photos-ak-snc1/v85005/74/174829003346/app_2_174829003346_5511.gif", "actions": [ { "name": "Comment", "link": "http://www.facebook.com/*****/posts/*****" }, { "name": "Like", "link": "http://www.facebook.com/*****/posts/*****" }, { "name": "Get Spotify", "link": "http://www.spotify.com/redirect/download-social" } ], "type": "link", "application": { "name": "Spotify", "canvas_name": "get-spotify", "namespace": "get-spotify", "id": "174829003346" }, "created_time": "2012-03-01T22:24:28+0000", "updated_time": "2012-03-01T22:24:28+0000", "likes": { "data": [ { "name": "***** *****", "id": "*****" } ], "count": 1 }, "comments": { "count": 0 }, "is_published": true }

    There's absolutely no reference to an attachment, other than the fact that the type is 'link' and the application is Spotify. If you want to test, Spotify's page (http://graph.facebook.com/spotify/feed) usually has a playlist or two embedded (and missing from the Graph API). Also, if you filter your home feed to just Spotify stories (http://graph.facebook.com/me/home?filter=app_174829003346), you'll get a bunch of useless stories without attachments (assuming your friends have shared music recently). Anyone have any ideas how to access the playlist details, or is it unavailable to third-party developers? (If so, this is a very bad user experience, because the story makes no sense without the attachment.) I am able to fetch scrobbles without any trouble using user_actions.listens. Also, if there is a recent activity story, e.g. "Peter listened to The Shins", I am able to get information about the band. The problem only happens on attachments.

    Read the article

  • How to terminate a particular Azure worker role instance

    - by Oliver Bock
    Background: I am trying to work out the best structure for an Azure application. Each of my worker roles will spin up multiple long-running jobs. Over time I can transfer jobs from one instance to another by switching them to a read-only mode on the source instance, spinning them up on the target instance, and then spinning the original down on the source instance. If I have too many jobs then I can tell Azure to spin up extra role instances and use them for new jobs. Conversely, if my load drops (e.g. during the night) then I can consolidate outstanding jobs onto a few machines and tell Azure to give me fewer instances. The trouble is that (as I understand it) Azure provides no mechanism to allow me to decide which instance to stop. Thus I cannot know which servers to consolidate onto, and some of my jobs will die when their instance stops, causing delays for users while I restart those jobs on surviving instances.
    Idea 1: I decide which instance to stop, and return from its Run(). I then tell Azure to reduce my instance count by one, and hope it concludes that the broken instance is a good candidate. Has anyone tried anything like this?
    Idea 2: I predefine a whole bunch of different worker roles with identical contents. I can individually stop and start them by switching their instance count from zero to one and back again. I think this idea would work, but I don't like it because it seems to go against the natural Azure way of doing things, and because it involves a lot of extra bookkeeping to manage the extra worker roles.
    Idea 3: Live with it.
    Any better ideas?
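
    For concreteness, a rough sketch of what Idea 1 might look like inside the role, assuming the standard RoleEntryPoint; how the drain request arrives (queue message, table flag, inter-role call) is left abstract, and drainRequested is a placeholder name:

        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            private volatile bool drainRequested;   // set by whatever signalling mechanism you choose

            public override void Run()
            {
                while (!drainRequested)
                {
                    // run / supervise the long-running jobs here
                    Thread.Sleep(1000);
                }
                // Jobs have been handed off; returning from Run() is the point where
                // you would then ask Azure to reduce the instance count by one.
                // Whether the fabric picks this instance when scaling down is exactly
                // the open question above.
            }
        }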

    Read the article

  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my transaction log the other day and it was something crazy like 15GB. I ran the following code:

        USE mydb
        GO
        BACKUP LOG mydb WITH TRUNCATE_ONLY
        GO
        DBCC SHRINKFILE(mydb_log, 8)
        GO

    which worked fine and shrank it down to 8MB... but the DB in question is a Log Shipping Publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan task and hooking it onto my log backup task? If that's the best way then fine... but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan:
      - full backups every night;
      - transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day, though).
    Or maybe I just run it once a week, after I run a full backup task? What do you all think?
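
    If it does end up hooked onto the log-backup job, a sketch of what that job step might look like; the backup path, file name and the 1GB threshold are placeholders, and the shrink is kept conditional so it only fires when the log has really ballooned:

        USE mydb;
        GO
        -- Back up the log as usual, then shrink only if it has grown past a threshold.
        BACKUP LOG mydb TO DISK = N'D:\Backups\mydb_log.trn';

        DECLARE @log_mb int;
        SELECT @log_mb = size * 8 / 1024          -- size is reported in 8KB pages
        FROM sys.database_files
        WHERE type_desc = 'LOG';

        IF @log_mb > 1024
            DBCC SHRINKFILE (mydb_log, 8);
        GO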

    Read the article

  • Basic compile issue with QT4

    - by Cobus Kruger
    I've been trying to get a dead simple listing from a university textbook to compile with the newest Qt SDK for Windows, which I downloaded last night. After struggling through the regular nonsense (no make.bat, the need to manually add environment variables and so on) I am finally at the point where I can build. But only one of the two libraries seems to work. The .pro file I use is dead simple:

        SUBDIRS += utils \
                   dataobjects
        TEMPLATE = subdirs

    In each of these two subfolders I have the source for a library. Running qmake generates a makefile and running make runs through all the preliminaries and then fails on the g++ call:

        g++ -enable-stdcall-fixup -Wl,-enable-auto-import -Wl,-enable-runtime-pseudo-reloc --out-implib,libdataobjects.a -shared -mthreads -Wl -Wl,--out-implib,c:\Users\Cobus\workspace\lib\libdataobjects.a -o ..\..\lib\dataobjects.dll object_script.dataobjects.Debug -L"c:\Users\Cobus\Portab~1\Qt\2010.02.1\qt\lib" -LC:\Users\Cobus\workspace\lib -lutils -lQtXmld4 -lQtGuid4 -lQtCored4
        c:/users/cobus/portab~1/qt/2010.02.1/mingw/bin/../lib/gcc/mingw32/4.4.0/../../../../mingw32/bin/ld.exe: cannot find -lutils

    The problem seems to be right near the end of the command line, where -lutils is added, indicating that there is a library by the name of utils. While I would have expected to see that, you'll notice the library names after --out-implib include lib in the name, so they become libutils and libdataobjects. I have tried to figure out why this is happening, to no avail. Anyone have an idea what's going on?
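
    One common culprit with subdirs projects is build order: dataobjects links before utils has been built into the lib directory. A sketch of the usual arrangement, assuming the directory layout implied by the error message (adjust paths to the real tree):

        # Top-level .pro: build subprojects in the listed order so libutils
        # exists before dataobjects tries to link against it.
        TEMPLATE  = subdirs
        CONFIG   += ordered
        SUBDIRS  += utils \
                    dataobjects

        # dataobjects/dataobjects.pro would then carry something like:
        #   DESTDIR  = ../../lib
        #   LIBS    += -L../../lib -lutils

    Whether utils is actually producing libutils.a in that directory is worth checking separately.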

    Read the article

  • Is MVVM killing silverlight development?

    - by DeanMc
    This is a question I have had rattling around in my head for some time. I had a chat with a guy the other night who told me he would not be using the navigation framework because he could not figure out how it works with MVVM. As much as I tried to explain that patterns should be taken with a pinch of salt, he would not listen. My point is this: patterns are great when they solve some problem. Sometimes only part of the pattern solves a particular problem while the other parts of it cause different problems. The goal of any developer is to build a solid application using a combination of patterns, know-how and foresight. I feel MVVM is becoming the one pattern to rule them all. As it is not directly supported by .NET, some fancy business is needed to make it work. I feel that people are missing the point of the pattern, which is loosely coupled, testable code, and are instead jumping through hoops and missing out on great experiences trying to follow MVVM to the letter. MVVM is great, but I wish it came with a warning or disclaimer for newbies, as my fear is people will shy away from Silverlight development for fear of being smacked with the MVVM stick. EDIT: Can I just add, as an edit, that I use and agree with MVVM as a pattern and I know when it is and isn't feasible in my projects. My issue is with the all-encompassing nature it is taking on, as if it HAS to be used as part of development. It is being used as an integral feature and not as the pattern it is.

    Read the article

  • ASP.NET retrieve Average CPU Usage

    - by Sam
    Last night I did a load test on a site. I found that one of my shared caches is a bottleneck. I'm using a ReaderWriterLockSlim to control the updates of the data. Unfortunately, at one point there are ~200 requests trying to update the data at approximately the same time. This also coincided with CPU usage spikes. The data being updated is in the ASP.NET Cache. What I'd like to do is: if the CPU usage is around 75%, I'd like to just skip the cache and hit the database on another machine. My problem is that I don't know how expensive it is to create a new performance counter to check the CPU usage. Also, I would probably like the average CPU usage over the last 2 or 3 seconds. However, I can't sit there and calculate the CPU time, as that would take longer than it's currently taking to update the cache. Is there an easy way to get the average CPU usage? Are there any drawbacks to this? I'm also considering totaling the wait count for the lock and then, at a certain threshold, switching over to the database. The concern I have with this approach is that changing hardware might allow more locks with less of a strain on the system. And also, finding the right balance for the threshold would be cumbersome, and it doesn't take into account any other load on the machine. But it's a simple approach, and simple is 99% of the time better.
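
    A minimal sketch of one way to keep the sampling cheap: a single shared "% Processor Time" counter read on a background timer, so request threads never block on sampling. The class and property names here are placeholders, not an existing API:

        using System;
        using System.Diagnostics;
        using System.Threading;

        public static class CpuMonitor
        {
            private static readonly PerformanceCounter Cpu =
                new PerformanceCounter("Processor", "% Processor Time", "_Total");

            private static Timer timer;          // kept in a field so it is not collected
            private static float average;

            public static float AverageCpu { get { return average; } }

            // Call once at application start (e.g. Application_Start in Global.asax).
            public static void Start()
            {
                Cpu.NextValue();                 // first reading of a rate counter is always 0
                timer = new Timer(_ =>
                {
                    // crude moving average over roughly the last few seconds
                    average = (average * 2 + Cpu.NextValue()) / 3;
                }, null, 1000, 1000);
            }
        }

    A request could then check something like CpuMonitor.AverageCpu > 75 before deciding whether to take the write lock or go straight to the database on the other machine.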

    Read the article

  • Android ADB has moved and Eclipse is looking in the old place

    - by Peter Nelson
    I did an SDK update last night and it moved adb.exe. In its place it left a file called "adb_has_moved.txt" saying The adb tool has moved to platform-tools/ If you don't see this directory in your SDK, launch the SDK and AVD Manager (execute the android tool) and install "Android SDK Platform-tools" Please also update your PATH environment variable to include the platform-tools/ directory, so you can execute adb from any location. So I did all that, including the PATH and now I can start adb.exe from any DOS prompt. But I still can't start it from Eclipse (Galileo 3.52). When I try it says Location of the Android SDK has not been set up in the preferences ... which is not true. The SDK IS set up in Preferences. The real problem is at the top of the Preferences window where it says "Could not find C:\SDK\android-sdk-windows\ tools \adb.exe!" ...No kidding - the update moved it to C:\SDK\android-sdk-windows\ platform-tools. Because it's specifying a specific (wrong) path Eclipse is bypassing the PATH variable. So how do I get Eclipse to look in the right place?

    Read the article

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. All that is fine; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate stuff to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it'll run the command for every log file specified in the specifier, and to my knowledge there's no way to know which of the log files is being handled. (Which wouldn't really help anyway.) Short of repeating the exact same logrotate stanza up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.
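
    For what it's worth, a sketch of the same stanza with logrotate's sharedscripts directive, which runs the prerotate/postrotate script once for the whole group of matching logs instead of once per file (the paths and addresses are the same placeholders as above); whether it helps across ten machines is another matter:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            sharedscripts
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }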

    Read the article

  • SVN Serve, Missing a Directory

    - by Ryan Smith
    I'm sure this is an asinine question, and I blame myself for not fully understanding how the SVNSERVE process works. I have an SVN repo, but it needs to be moved to a server within a client's cloud. I did this a while back and ran into the issue of the SVNSERVE.exe process not getting pointed at the right directory. I have the SVNSERVE.exe process running as a Windows service and pointing to the right directory. There are two other repos there that are being served out fine from the same directory. I copied out the new directory just like I did with the others, but I'm getting the error "No repository found". I thought that SVNSERVE just looked at that directory and served out the repositories that were there, but I have had a hard time finding more information about that. I thought it was a Windows permission problem, but I set the whole folder to full control for EVERYONE, so that's not it. I feel horrible that I didn't fully understand this problem the first time I fought it, but it's late on a Sunday night and clients are yelling. Anyone know what I'm missing? Thanks. EDIT: It's specific to the repository. I tested the same process with some of the other repos we have on our server, and when I copied them up they worked just as expected. This bug is breaking me and I wish I could provide more details, but that's all I know. I'm going to try an SVN dump instead of an XCopy and see how that goes. I'll let you know.
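
    Since the dump/load route is already on the table, a sketch of that cycle; the paths and repository name are placeholders, and svnadmin verify on the copied repository is a quick way to see whether the raw copy itself is the problem:

        rem Check whether the copied repository is readable at all:
        svnadmin verify D:\Repos\myrepo

        rem If not, rebuild it with a dump/load cycle from the original:
        svnadmin dump E:\old-location\myrepo > myrepo.dump
        svnadmin create D:\Repos\myrepo-new
        svnadmin load D:\Repos\myrepo-new < myrepo.dump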

    Read the article

  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:
      - read data directly from the mainframe and produce reports based on it, possibly using SQL queries;
      - read and write data from custom .NET applications.
    We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

    Read the article

  • How to store an inventory using hashtables?

    - by Harm De Weirdt
    Hello everyone. For an assignment in college we have to make a script in Perl that allows us to manage an inventory for an e-store (the example given was Amazon). Users can make orders in a fully text-based environment and the inventory must be updated when an order is completed. Every item in the inventory has 3 to 4 attributes: a product code, a title, a price and, for some, an amount (MP3s, for example, do not have this attribute). Since this is my first encounter with Perl, I don't really know how to start. My main problem is how I should "implement" the inventory in the program. One of the functions of the program is searching through the titles. Another is to make an order, where the user should give a product code. My first idea was a hashtable with the product code as key. But if I wanted to search in the titles, that could be a problem: the hash key would be something like DVD-123, and the information belonging to that key could be "The Green Mask 12" (without the quotes), where the 12 indicates how many of this DVD are currently in stock. So I'd have to find a way to ignore the 12 at the end. Another solution was to use the title as the hash key, but that would prove cumbersome too, I think. Is there a way to make a hashtable with two keys, so that when I give only one it returns an array with the other values (including the other key and the other information)? That way I could use whichever key fits the info I need from my inventory. We have to read the default inventory from a txt file looking like this:

        MP3-72|Lady Gaga - Kiss and Run (Fear of Commitment Monster)|0.99
        CD-400|Kings of Leon - Only By The Night|14.50|2
        MP3-401|Kings of Leon - Closer|0.85
        DVD-144|Live Free or Die Hard|14.99|2
        SOFT-864|Windows Vista|49.95

    Any help would be appreciated very much :) PS: I am sorry for my bad grammar, English isn't my native language.
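
    A minimal sketch of one way to do this: a hash keyed by product code whose values are themselves hashes of named fields, so titles can be searched without parsing strings. The file name and field layout follow the sample data above; everything else is illustrative:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my %inventory;
        open my $fh, '<', 'inventory.txt' or die "Cannot open inventory.txt: $!";
        while (my $line = <$fh>) {
            chomp $line;
            my ($code, $title, $price, $stock) = split /\|/, $line;
            $inventory{$code} = {
                title => $title,
                price => $price,
                stock => $stock,   # undef for MP3s, which have no stock field
            };
        }
        close $fh;

        # Look up by product code:
        print "$inventory{'CD-400'}{title}\n";

        # Search by title substring:
        my @hits = grep { $inventory{$_}{title} =~ /Kings of Leon/i } keys %inventory;
        print "$_: $inventory{$_}{title}\n" for @hits;

    Storing a hash per item also makes updating the stock after an order a single assignment rather than string surgery.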

    Read the article

  • Unit Testing-- fundamental goal?

    - by David
    My co-workers and I had a bit of a disagreement last night about unit testing in our PHP/MySQL application. Half of us argued that when unit testing a function within a class, you should mock everything outside of that class and its parents. The other half of us argued that you SHOULDN'T mock anything that is a direct dependency of the class. The specific example was our logging mechanism, which happened through a static Logging class, and we had a number of Logging::log() calls in various locations throughout our application. The first half of us said the logging mechanism should be faked (mocked) because it would be tested in the Logging unit tests. The second half of us argued that we should include the original Logging class in our unit test so that if we make a change to our logging interface, we'll be able to see if it creates problems in other parts of the application due to failing to update the call interface. So I guess the fundamental question is: do unit tests serve to test the functionality of a single unit in a closed environment, or to show the consequences of changes to a single unit in a larger environment? If it's one of these, how do you accomplish the other?

    Read the article

  • Screen information while Windows system is locked (.NET)

    - by Matt
    We have a nightly process that updates applications on a user's PC, and that requires bringing the application down and back up again (not looking to get into changing that process). The problem is that we are building a Windows AppBar on launch, which requires a valid screen, and when the system is locked there isn't one in the Screen class. So none of the visual effects are enabled and it shows up really ugly. The only way we currently have around this is to detect a locked screen and just spin and wait until the user unlocks the desktop, then continue launching. Leaving it down isn't an option, as this is a key part of our users' workflow, and they expect it to be up and running if they left it that way the night before. Any ideas?? I can't seem to find the display information anywhere, but it has to be stored off someplace, since the user is still logged in. The contents of the Screen.AllScreens array:

        ** When Locked:
        Device Name    : DISPLAY
        Primary        : True
        Bits Per Pixel : 0
        Bounds         : {X=-1280,Y=0,Width=2560,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=1024}

        ** When Unlocked:
        Device Name    : \\.\DISPLAY1
        Primary        : True
        Bits Per Pixel : 32
        Bounds         : {X=0,Y=0,Width=1280,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=994}

        Device Name    : \\.\DISPLAY2
        Primary        : False
        Bits Per Pixel : 32
        Bounds         : {X=-1280,Y=0,Width=1280,Height=1024}
        Working Area   : {X=-1280,Y=0,Width=1280,Height=964}
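
    One alternative to spinning: subscribe to session-switch notifications and build the AppBar once the workstation is unlocked. A sketch; BuildAppBar() and IsWorkstationLocked() are placeholders for the app's existing launch step and detection logic:

        using Microsoft.Win32;

        public static class AppBarLauncher
        {
            public static void LaunchWhenUnlocked()
            {
                if (!IsWorkstationLocked())
                {
                    BuildAppBar();
                    return;
                }

                SystemEvents.SessionSwitch += OnSessionSwitch;
            }

            private static void OnSessionSwitch(object sender, SessionSwitchEventArgs e)
            {
                if (e.Reason == SessionSwitchReason.SessionUnlock)
                {
                    SystemEvents.SessionSwitch -= OnSessionSwitch;
                    BuildAppBar();
                }
            }

            private static bool IsWorkstationLocked() { /* existing detection logic */ return false; }
            private static void BuildAppBar() { /* existing AppBar construction */ }
        }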

    Read the article

  • high cpu in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer and communicates with the server using WCF hosted in IIS. The application has been deployed at several customers for over a year now. About a week ago, we started getting reports from one of our customers that the server the IIS is hosted on is very slow. When I checked the issue, I saw the application pool running my process rocket to almost 100% CPU on an 8-CPU server. I checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I tried installing .NET 2.0 SP1, following some posts I found on the net, but it didn't solve the problem; it only replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump of the IIS processes using adplus and WinDbg and try to figure out what's wrong. Has anyone encountered something like this, or an idea of which direction I should look in? P.S. The same version of the application is deployed at another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same. Edit: well, problem solved; it turns out I had an SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows it started bogging down the server. It took me two days to find, because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.

    Read the article

  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C program. A hexdump of the binary data shows that after the first few data points, it starts again at a location 0x20000 hex addresses away. The hexdump output is shown below:

        0000000 0000 0000 0000 0000 0000 0000 0000 0000
        *
        0020000 0000 0000 0053 0000 0064 0000 006b 0000
        0020010 0066 0000 0068 0000 0066 0000 005d 0000
        0020020 0087 0000 0059 0000 0062 0000 0066 0000
        ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread call

        fread(data, sizeof(*data), filelength/sizeof(*data), fd);

    my data array fills up with zeros until it reaches the 0x20000 location. After that it reads the data correctly. Why is it reading regions where my file is not there? Or how can I make it read only my file, and nothing in between that is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after a night of googling. Can anyone suggest where I am going wrong? Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific) and my C compiler is gcc.
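
    One thing worth noting: in hexdump output, a line containing only "*" means the previous line repeats, so the zeros up to offset 0x20000 really are in the file and fread is reporting exactly what is there. If the payload is known to start at that offset, a sketch like the following simply seeks past the leading zeros before reading; the file name and the 0x20000 offset are assumptions taken from the dump:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            FILE *fd = fopen("data.bin", "rb");
            if (!fd) { perror("fopen"); return 1; }

            fseek(fd, 0, SEEK_END);
            long filelength = ftell(fd);

            long data_start = 0x20000;              /* first non-zero region in the dump */
            fseek(fd, data_start, SEEK_SET);

            size_t n = (filelength - data_start) / sizeof(long);
            long *data = malloc(n * sizeof *data);
            size_t got = fread(data, sizeof *data, n, fd);
            printf("read %zu values, first = %ld\n", got, got ? data[0] : 0L);

            free(data);
            fclose(fd);
            return 0;
        }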

    Read the article

  • Fill parent container

    - by manix
    After several hours of design battle I come to you for a hand. I am building a website for a night club, as you can see. I can't get the centered area (the one with the yellow border) to stretch to the bottom of the page where the footer starts (the footer is the div with the green top border). This happens because the content is not enough to fill the remaining height. This is my CSS:

        html, body {
            height: 100%;
            margin: 0 auto;
        }
        #container {
            height: auto !important;
            height: 100%;
            margin: 0 auto -50px; /* as #footer height */
            min-height: 100%;
            text-align: center;
            border: 5px solid blue;
        }
        #centered-container {
            width: 950px;
            margin: 0 auto;
            text-align: left;
            border: 5px solid yellow;
        }
        #body-container {
            border: 5px solid red;
        }
        #footer, .footer {
            height: 50px;
        }
        #footer {
            text-align: center;
            border-top: 5px solid green;
        }

    And this is my HTML markup:

        <body>
          <div id="container">            <!-- BLUE BORDER -->
            <div id="centered-container"> <!-- YELLOW BORDER -->
              <div id="body-container">   <!-- RED BORDER -->
              </div>
            </div>
            <div class="footer"></div>    <!-- GREEN BORDER -->
          </div>
          <div id="footer"></div>
        </body>

    Expected behaviour: (screenshot omitted). Additional facts: the colored borders are just for debugging purposes.
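
    A sketch of a different approach to the same layout, using flexbox rather than the negative-margin trick; the selectors reuse the names from the markup above, and the <div class="footer"> spacer is no longer needed in this variant:

        html, body { height: 100%; margin: 0; }
        body       { display: flex; flex-direction: column; }
        #container { flex: 1; display: flex; flex-direction: column; border: 5px solid blue; }
        #centered-container {
            flex: 1;                 /* stretch down to where the footer starts */
            width: 950px;
            margin: 0 auto;          /* horizontal centering of the fixed-width column */
            border: 5px solid yellow;
        }
        #footer    { height: 50px; text-align: center; border-top: 5px solid green; }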

    Read the article

  • Does fast typing influence fast programming? [closed]

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. After some experience one realizes that this is not the case; you have to think much more than you type. At some point my room-mate forced me to turn off the light (he sleeps during the night). I had to learn to touch type, and I experienced an actual improvement in programming skill. The most surprising part was that the improvement was not due to sheer typing speed, but to a change in mindset. I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag. Have any of you had a similar experience? I have now trained my touch typing a little with KTouch. I find the auto-generated lessons the best. I can use this program to create new lessons out of text files, but that's only verbatim training, not auto-generated based on a language model. Do you know any touch-typing program that allows creating custom but randomized lessons?

    Read the article

  • Large scale perspective lights casting shadow maps, in the most optimized way?

    - by meds
    I'm using projected texture shadows coupled with lights to light a large sports field at night. To do this I'm using shadow cameras which I place at the positions of the stadium's lights and shine down on the field at the appropriate angle. The problem with this method is that the textures into which I render the shadows have to be very large so they can keep sufficient detail over the entire stadium. This is incredibly under-optimized, since at any given point the player's attention is directed at only a small portion of the field, meaning large chunks of the texture just take up space with no benefit. However, the lights need to be perspective-based, as they come from actual directional lights hovering over the stadium. The way to solve this, I believe, is to figure out where within the shadow camera's view it would be best to place the actual camera to render from, and adjust the view matrix according to that position. So my question is: how can I calculate the optimal position for the shadow camera, and calculate its view matrix, such that the shadows it projects appear to come from the light source rather than the camera?

    Read the article

  • After I apply custom logic, next UI action crashes my app.

    - by DanF
    I've got a team (eveningRoster) that I'm making a button to add employees to. The team is really a relationship to that night's event, but it's represented with an AC. I wanted to make sure an employee did not already belong to the team before it adds, so I added a method to MyDocument to check first. It seems to work, and the logs complete, but after I've added a member, the next time I click anything the program crashes. Any guesses why? Here's the code:

        -(IBAction)playsTonight:(id)sender {
            NSArray *selection = [fullRoster selectedObjects];
            NSArray *existing = [eveningRoster arrangedObjects];
            //Result will be within each loop.
            BOOL result;
            //noDuplicates will stay YES until a duplicate is found.
            BOOL noDuplicates = YES;
            //For the loop:
            int count;
            for (count = 0; count < [selection count]; count++){
                result = [existing containsObject:[selection objectAtIndex:count]];
                if (result == YES){
                    NSLog(@"Duplicate found!");
                    noDuplicates = NO;
                }
            }
            if (noDuplicates == YES){
                [eveningRoster addObjects:[fullRoster selectedObjects]];
                NSLog(@"selected objects added.");
                [eveningTable reloadData];
                NSLog(@"Table reloaded.");
            }
            [selection release];
            [existing release];
            return;
        }

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, etags...) and, for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache and then copying the cache contents to geo-dispersed cache servers near the clients. I'm thinking of Squid or Apache HTTPD mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution built from configuring open-source technologies. Thanks
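
    A rough sketch of the priming step, assuming the master cache is a Squid (or similar) forward proxy in front of the J2EE server; host names, the proxy port and paths are placeholders, and the rsync line stands in for whatever replication mechanism the slaves end up using:

        # Crawl the whole site through the cache server so its disk cache is primed,
        # discarding the local copies as we go.
        wget --recursive --level=inf --page-requisites --delete-after \
             -e use_proxy=yes -e http_proxy=http://cache-master.example.com:3128/ \
             http://app.example.com/

        # Then copy the primed cache directory out to the geo-dispersed slaves.
        rsync -az /var/spool/squid/ edge1.example.com:/var/spool/squid/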

    Read the article

  • Reading from txt ruins formatting

    - by Sorin Grecu
    I'm saving some values to a txt file and after that I want to be able to show them in a dialog. All good, done that, but although the values in the txt file are formatted nicely, like this:

        Aaaaaaaa             0.55  1
        Bbbbb                1     2.2
        CCCCCCCCC            3     0.22

    when reading them and setting them on a TextView they get all messy, like this:

        Aaaaaaaa 0.55 1
        Bbbbb 1 2.2
        CCCCCCCCC 3 0.22

    My writing method:

        FileOutputStream f = new FileOutputStream(file);
        PrintWriter pw = new PrintWriter(f);
        for (int n = 0; n < allpret.size() - 1; n++) {
            if (cant[n] != Float.toString(0) && pret[n] != Float.toString(0)) {
                String myFormat = "%-20s %-5s %-5s%n";
                pw.println(String.format(myFormat, prod[n], cant[n], pret[n]));

    My reading method:

        StringBuilder text = new StringBuilder();
        try {
            BufferedReader br = new BufferedReader(new FileReader(file));
            String line;
            while ((line = br.readLine()) != null) {
                text.append(line);
                text.append('\n');
            }
        } catch (IOException e) {
        }

    Why does it ruin the spacing and how can I fix this? Thanks and have a good night!
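
    One thing a short sketch makes clear: columns built purely with spaces only line up in a fixed-width font, and the default TextView font is proportional. Assuming the dialog has a TextView for the output (R.id.output is a placeholder id), switching it to a monospace typeface keeps the alignment:

        // android.widget.TextView and android.graphics.Typeface
        TextView output = (TextView) dialog.findViewById(R.id.output);
        output.setTypeface(Typeface.MONOSPACE);
        output.setText(text.toString());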

    Read the article

  • Errors with redefinitions after upgrade to XCode 3.2.3

    - by CA Bearsfan
    I recently upgraded to Snow Leopard and Xcode 3.2.5 so I could test on my iPod Touch and iPhone, and ran into some problems with the project I was working on. First it couldn't find a Base SDK, then my old frameworks weren't hooking up correctly. Finally, after setting the Project Format to Xcode 3.1 compatible (3.2 also worked), the Base SDK for all configurations to iOS 4.2, and my iOS deployment target to iOS 3.0, I was able to get the system to find a Base SDK and attempt a build. That's when the frameworks didn't want to cooperate. Four of the six I'm using displayed in red, so I re-routed the path to the iPhone Simulator 4.2 platform, which worked perfectly. I was able to build my project with no errors or warnings and my app worked fine. I went to work last night thinking I had fixed the problem. This morning I fired up the laptop, went to build my code base, and now have 1142 errors, all of which flag code I haven't written as being redefined. Suggestions? The following is just a small sample of the error list (you obviously don't need to see all 1142):

        /Frameworks/Foundation.framework/Headers/NSZone.h:48: error: redefinition of 'NSMakeCollectable'
        /Frameworks/Foundation.framework/Headers/NSObject.h:65: error: duplicate interface declaration for class 'NSObject'
        /Frameworks/Foundation.framework/Headers/NSObject.h:67: error: redefinition of 'struct NSObject'

    Read the article

  • Is .NET 4.0 just a show?

    - by Will Marcouiller
    I went to a presentation about the .NET Framework and Visual Studio 2010 last night. The topics were:
      ASP.NET 4 - some of the new features:
        - more control over ClientIDs in WebForms;
        - output caching;
        - ... some other stuff I don't really remember, being more on the framework and WinForms side.
      Entity Framework 2.0 (.NET 4.0):
        - T4 templates;
        - domain-driven development;
        - data-driven development;
        - contexts (edmx files);
        - some real-world limitations of EF4 (projects with over 70 to 75 tables);
        - better POCO support, despite there still being the hidden EntityObject and StructuralObject, used differently than in EF 1.0 so that they don't take over your inheritance;
        - an easy choice of how to persist the hierarchy into the underlying database;
        - code only (start working with EF4 directly from your code!);
        - Design by Contract (DbC).
    The most interesting feature, and the only one as far as I'm concerned, is that parallelism is made easier. Which really works! No additional assembly references to add. In conclusion, I'm far from impressed by .NET Framework 4.0, apart from the fact that it makes some things easier to do. But when you're used to doing things a certain way, it doesn't really change much, in my opinion. Is it me who cannot foresee what .NET 4.0 has to offer? What would you base your decision to migrate to .NET 4.0 on, in practical terms?
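
    For the parallelism point, a minimal sketch using the Task Parallel Library that ships with .NET 4.0; the workload is made up, the point being that no extra assembly references are needed to spread it across cores:

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class ParallelDemo
        {
            static void Main()
            {
                int[] inputs = Enumerable.Range(1, 20).ToArray();

                Parallel.ForEach(inputs, n =>
                {
                    // simulated CPU-bound work, partitioned across available cores
                    long sum = 0;
                    for (int i = 0; i < 1000000; i++)
                    {
                        sum += i % n;
                    }
                    Console.WriteLine("n={0} -> {1}", n, sum);
                });
            }
        }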

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. Having said that, our clients just want our app to be responsive. They don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already some internal scheduler that does very little and makes no difference). If there were a third-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
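
    A sketch of the watchdog idea done in-process, in the spirit of "exercise a representative code path every few minutes during idle hours so the hot pages are less likely to be pushed out"; WarmUp() is a placeholder for whatever touches the service's hot data, and the interval is arbitrary:

        #include <windows.h>

        void WarmUp()
        {
            // Placeholder: in the real service this would walk the main caches or
            // execute a cheap no-op request against the service's own interface.
        }

        DWORD WINAPI KeepWarmThread(LPVOID)
        {
            for (;;)
            {
                Sleep(5 * 60 * 1000);   // every 5 minutes
                WarmUp();
            }
            return 0;
        }

        void StartKeepWarm()
        {
            // Handle intentionally not kept in this sketch.
            CreateThread(NULL, 0, KeepWarmThread, NULL, 0, NULL);
        }

    The other route the question alludes to, pinning pages with VirtualLock or SetProcessWorkingSetSize, is exactly the "keep it in physical memory" option the author wants to avoid, so the sketch stays on the watchdog side.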

    Read the article
