Search Results

Search found 12907 results on 517 pages for 'todd little'.


  • Thoughts on streamlining multiple .Net apps

    - by John Virgolino
    We have a series of ASP.Net applications that were written over the course of 8 years, mostly in the first 3-4 years. They have been running quite well with little maintenance, but new functionality is being requested and we are running into IDE and platform issues. The apps were written in .Net 1.x and 2.x and run in separate spaces, but are presented as a single suite of applications which use a common navigation toolbar (implemented as a user control). Every time we want to add something to a menu in the nav we have to modify it in all the apps, which is a pain. Also, between the various versions of Crystal Reports and the fact that we used tables to organize the visual elements, we end up with a mess, especially with all the different .Net versions running. We need to streamline the suite of apps and make it easier to add on new apps without a hassle. We also need to bring all these apps under one .Net platform and IDE. In addition, there is a WordPress blog styled to match the application suite "integrated" into the UI, and a link to a MediaWiki wiki application as well. My current thinking is to use an open source content management system (CMS) like Joomla (PHP based unfortunately, but it works well) as the user interface framework for style templating and menu management. Joomla's article management would allow us to migrate the wiki content into articles which could be published without interfering with the .Net apps. Then essentially use an IFrame within an "article" to "host" each .Net application, then upgrade the .Net apps to VS2010, strip out all the common header/footer controls and migrate the styles to use the style sheets used in the CMS. As I write this, I certainly realize this is a lot of work; there are optimization issues this may cause, and using IFrames seems a bit like cheating, plus I've read about issues with IFrames. I know that we could use .Net application styling, but it seems like a lot more work (not sure really). Also, the use of a CMS to handle the blog and wiki seems appealing, unless there is a .Net CMS out there that can handle all of these requirements. Given this information, am I totally going in the wrong direction? We tried to use open source and integrate it over time, but now this has become hard to maintain. Am I not aware of some technology out there that will meet our requirements? Did we do this right, and should we just focus on getting the .Net apps streamlined? I understand that no matter what we do, it's going to be a lot of work. The community's considerable experience would be helpful. Thanks!! PS - A complete rewrite is not an option.
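    One way to stop touching every application when the menu changes, sketched below purely as an illustration: have the shared navigation user control read its items from a single file that all the apps can see. The path, the XML shape and the class names here are hypothetical, not your existing control, and the sketch assumes the apps are at least on .NET 3.5 after the VS2010 upgrade.

        // NavToolbar.ascx.cs - hypothetical sketch: the user control builds its
        // menu from one shared XML file instead of hard-coded links, so adding a
        // menu item means editing menu.xml once rather than redeploying every app.
        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Xml.Linq;

        public partial class NavToolbar : UserControl
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // \\shared\config\menu.xml is an assumed UNC path; a virtual
                // directory mapped into each site would work the same way.
                var doc = XDocument.Load(@"\\shared\config\menu.xml");
                foreach (var item in doc.Root.Elements("item"))
                {
                    Controls.Add(new HyperLink
                    {
                        Text = (string)item.Attribute("text"),
                        NavigateUrl = (string)item.Attribute("url")
                    });
                }
            }
        }

    A menu.xml entry would then look something like <item text="Reports" url="/reports/Default.aspx" />, and the same idea works whether the apps end up framed inside Joomla or kept as a pure .Net suite.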

    Read the article

  • Recommended ASP.NET Shared Hosting

    - by coffeeaddict
    Ok, I have to admit I'm getting fed up with www.discountasp.net's pricing model and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side, however it's getting ridiculously expensive for so little that you get. I mean here's my scenario: 1) I am running 2 SQL Server databases which costs me $10/ea per month so that's $20/month for 2 and I only get 500 mb disk space which is horrible 2) I am paying $10/mo just for the hosting itself which I only get 1 gig of disk space! I mean common! 3) I am simply running 2 small apps (Screwturn Wiki & Subtext Blog)...so I don't really care if it's up 99% or not, it's not worth paying a total of $300 just to keep these 2 apps running over discountasp.net Anyone else feel the same? Yes, I know they have great support, probably have great servers running behind this but in the end I really don't care as long as my site is up 95% or better. Yes, the hosting toolset rocks. But you know I bet you I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and I can control my own app pool etc. That's very powerful and essential. But anyone have any good alternatives to discountasp that gives me close to the same at a much more reasonable cost point? I mean http://www.m6.net/prices.aspx gives you 10 SQL Databases for $7 and 200 gigs disk space! I don't know about their tools or support but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any purchasing discounts such as it would be nice if my 2nd SQL Server is only $5/month not $10...stuff like this, to make it much more realistic and fair. Opinions (people who do have discountasp.net, people who have left them, or people who have another host they like)??? But geez $300 just to host a couple DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that enables me to get a decent dedicated server! I really don't care about beta support. Not a big deal to me.

    Read the article

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Ops: This does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between cloud and virtualization. How is the cloud different from virtualization? I tried to find out the pricing of Rackspace, Amazon and similar cloud providers, and found that our current 6 dedicated servers come out cheaper than their pricing. So how can one claim the cloud is cheaper? Is it cheaper only in comparison to normal hosting? We reorganized our infrastructure into a virtual environment to reduce our configuration overhead at time of failure; we did not have to rewrite any piece of code already written for the earlier setup. So moving to virtualization does not require any reprogramming. But the cloud is absolutely different, and it will require reprogramming everything, right? Is it really worth recoding when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability? A new programming architecture means new overheads of training staff, new methods of testing and new deployment schemes; does that justify the "on-demand resource usage" promise of the cloud? Our current development architecture is simple server-side ASP.NET web services with no local context, and on the client side Flex/Silverlight, which offers a pretty good REST architecture and is highly scalable. How does the cloud differ from the REST model of deployment? On storage, SQL Server or MySQL offer pretty good replication and high availability, so what is the advantage of the cloud? Data guarantees: one of our vendors, hosting some other customer's app on a cloud (one of the most used), lost the entire (virtual) hard disk and an entire module in the first 6 months. A second provider said it's your duty to take backups; fine, I agree, but no provider gives an SLA for data guarantees, they give 99% uptime. However, in most business apps, uptime is less important than data integrity. In our 10 years of dedicated hosting experience we had only one hard disk crash. This makes me a little skeptical about going to the cloud and losing control over data, and I feel it's just a big marketing buzz to sell virtualization in a different form. On the size of data: currently all providers charge very heavily for large data. If you are hosting only below 100GB the cloud can be a good alternative, but I think virtual servers and dedicated servers from 100GB up to a few TBs are still cheaper. Why would you want to pay so much for the cloud when there is no data guarantee and nothing is said about redundancy? (I wish SO had a spell checker for Internet Explorer; sorry for the wrong spellings in my post.)

    Read the article

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.] // Service CoolLibrary WcfApp.Core WcfApp.Contracts WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts // Clients CustomerX.App : WcfApp.Contracts CustomerY.App : WcfApp.Contracts CustomerZ.App : WcfApp.Contracts (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Else CustomerX.App would also depend on and thus be exposed to the service domain model?) (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.) All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.] 1. CoolLibrary.git : CoolLibrary 2. WcfApp.Contracts.git : WcfApp.Contracts 3. WcfApp.git : WcfApp.Core, WcfApp.Services 4. CustomerX.App.git : CustomerX.App 5. CustomerY.App.git : CustomerY.App 6. CustomerZ.App.git : CustomerZ.App How should I manage my project dependencies? I see three options: I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree? I know I asked a lot of questions in this post, but the most important two questions I have are: 1. Am I using the right number and layout of repositories? Should I use less or more? 2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?

    Read the article

  • Remote XP -> Win98 WMI Connection

    - by Logan Young
    I've asked this on Technet, but because Win98 is no longer supported, I can't get any decent information, I was hoping there might be some "old school" developers here who might be able to help me. There is an application that we use a lot at work. This application should run 8am-5pm with as little interruption as possible. Most of the computers where this application runs are using Win98, and we have no way to upgrade them because we can't buy new hardware at the moment. My computer is running WinXP, so I thought of a way to make sure that this application runs all the time: The idea I had was to develop a Windows Service that executes a VBScript file that contains a WMI query to get a list of processes from each computer. Each list is then examined, and, depending on whether or not the target application is running, it will either do nothing, or it will execute another VBScript file that contains a WMI query that will be used to start the target application remotely. I later found a way to do this all with 1 VBScript file (see code below) My problem is in the remote connection to the target computers. I've installed WMI Core 1.5 on them, but every time I try the remote connection, I get the following: The remote server is unavailable or does not exist: 'GetObject' VBScript runtime error 800A01CE I've done some research, and all I've found is info about DCOM Config and Windows Firewall, but Win98 doesn't have either of these. ' #### Variables and constants #### Const HIDDEN_WINDOW = 12 Dim T ' #### End Variables and constants #### Main() Sub Main() ' #### Get Process information from WMI Computer = "." Set WMI = GetObject("winmgmts:" & _ "{ImpersonationLevel=Impersonate}!\\" & Computer & "\root\cimv2") Set Settings = WMI.ExecQuery("SELECT * FROM Win32_Process") For Each Process In Settings ' #### If the application is found to be running, set a value to indicate this If Process.Name = "NOTEPAD.EXE" Then T = True End If Next ' #### T will only have a value if the application is not running. We therefore ' #### evaluate it to determine if it has a value or not. If not, start the application If Not T Then 'MsgBox("Application not found.") Set Startup = WMI.Get("Win32_ProcessStartup") Set Config = Startup.SpawnInstance_ Config.ShowWindow = HIDDEN_WINDOW Set Process = GetObject("winmgmts:root\cimv2:Win32_Process") errReturn = Process.Create(_ "C:\Windows\notepad.exe", null, Config, intProcessID) End If End Sub This uses WMI to get the list of processes from the local computer and, if the target application is running, it'll do nothing, otherwise it'll forcefully start the target application. The problem is that this works only if I specify the local comuter, if I target another computer, I get the error mentioned above. Does anyone have any ideas? Thanks in advance for the help!

    Read the article

  • Image animation problem in silverlight

    - by Jak
    Hi followed " http://www.switchonthecode.com/tutorials/silverlight-3-tutorial-planeprojection-and-perspective-3d#comment-4688 ".. the animation is working fine. I am new to silver light. when i use dynamic image from xml instead of static image as in tutorial,.. it is not working fine, please help me on this. i used list box.. for this animation effect do i need to change listbox to some other arrangement ? if your answer yes means, pls give me some sample code. Thanks in advance. Xaml code: <ListBox Name="listBox1"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel> <Image Source="{Binding imgurl}" HorizontalAlignment="Left" Name="image1" Stretch="Fill" VerticalAlignment="Top" MouseLeftButtonUp="FlipImage" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> My C# code: //getting image URL from xml XElement xmlads = XElement.Parse(e.Result); //i bind the url in to listBox listBox1.ItemsSource = from ads in xmlads.Descendants("ad") select new zestItem { imgurl = ads.Element("picture").Value }; public class zestItem { public string imgurl { get; set; } } private int _zIndex = 10; private void FlipImage(object sender, MouseButtonEventArgs e) { Image image = sender as Image; // Make sure the image is on top of all other images. image.SetValue(Canvas.ZIndexProperty, _zIndex++); // Create the storyboard. Storyboard flip = new Storyboard(); // Create animation and set the duration to 1 second. DoubleAnimation animation = new DoubleAnimation() { Duration = new TimeSpan(0, 0, 1) }; // Add the animation to the storyboard. flip.Children.Add(animation); // Create a projection for the image if it doesn't have one. if (image.Projection == null) { // Set the center of rotation to -0.01, which will put a little space // between the images when they're flipped. image.Projection = new PlaneProjection() { CenterOfRotationX = -0.01 }; } PlaneProjection projection = image.Projection as PlaneProjection; // Set the from and to properties based on the current flip direction of // the image. if (projection.RotationY == 0) { animation.To = 180; } else { animation.From = 180; animation.To = 0; } // Tell the animation to animation the image's PlaneProjection object. Storyboard.SetTarget(animation, projection); // Tell the animation to animation the RotationYProperty. Storyboard.SetTargetProperty(animation, new PropertyPath(PlaneProjection.RotationYProperty)); flip.Begin(); }

    Read the article

  • Count seconds and minutes with MCU timer/interrupt?

    - by arynhard
    I am trying to figure out how to create a timer for my C8051F020 MCU. The following code uses the value passed to init_Timer2(), computed with the formula 65535 - (0.1 / (12 / 2000000)) = 65535 - 16667 ≈ 48868. I set up the timer to increment a count each time the interrupt fires, and for every 10 counts, count one second. Based on the above formula, 48868 passed to init_Timer2() should produce a 0.1 second delay, so it takes ten of them per second. However, when I test the timer it runs a little fast: at ten seconds the timer reports 11 seconds, at 20 seconds the timer reports 22 seconds. I would like to get as close to a perfect second as I can. Here is my code: #include <compiler_defs.h> #include <C8051F020_defs.h> void init_Clock(void); void init_Watchdog(void); void init_Ports(void); void init_Timer2(unsigned int counts); void start_Timer2(void); void timer2_ISR(void); unsigned int timer2_Count; unsigned int seconds; unsigned int minutes; int main(void) { init_Clock(); init_Watchdog(); init_Ports(); start_Timer2(); P5 &= 0xFF; while (1); } //============================================================= //Functions //============================================================= void init_Clock(void) { OSCICN = 0x04; //2MHz //OSCICN = 0x07; //16MHz } void init_Watchdog(void) { //Disable watchdog timer WDTCN = 0xDE; WDTCN = 0xAD; } void init_Ports(void) { XBR0 = 0x00; XBR1 = 0x00; XBR2 = 0x40; P0 = 0x00; P0MDOUT = 0x00; P5 = 0x00; //Set P5 to 1111 P74OUT = 0x08; //Set P5 4 - 7 (LEDs) to push pull (Output) } void init_Timer2(unsigned int counts) { CKCON = 0x00; //Set all timers to system clock divided by 12 T2CON = 0x00; //Set timer 2 to timer mode RCAP2 = counts; T2 = 0xFFFF; //65535 IE |= 0x20; //Enable timer 2 T2CON |= 0x04; //Start timer 2 } void start_Timer2(void) { EA = 0; init_Timer2(48868); EA = 1; } void timer2_ISR(void) interrupt 5 { T2CON &= ~(0x80); P5 ^= 0xF0; timer2_Count++; if(timer2_Count % 10 == 0) { seconds++; } if(seconds % 60 == 0 && seconds != 0) { minutes++; } }
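    For reference, here is the reload arithmetic worked out step by step, assuming (as in the code above) that the timer clock is the 2 MHz system clock divided by 12:

        t_{tick} = \frac{12}{2\,\text{MHz}} = 6\ \mu\text{s}, \qquad
        N_{0.1\,\text{s}} = \frac{0.1\ \text{s}}{6\ \mu\text{s}} \approx 16667, \qquad
        \text{reload} = 65535 - 16667 = 48868

    One detail worth checking against the C8051F020 datasheet: a 16-bit timer usually overflows 65536 - reload counts after each reload rather than 65535 - reload, but that off-by-one is only about 0.006% per interrupt and cannot by itself explain a timer running roughly 10% fast.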

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First a little background ... In what follows, I use C,C++ and Java for coding (general) algorithms, not gui's and fancy program's with interfaces, but simple command line algorithms and libraries. I started out learning about programming in Java. I got pretty good with Java and I learned to use the Java containers a lot as they tend to reduce complexity of book keeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java and it felt cumbersome. I did not know C++ enough to work in it without having to look up every single function and so I quickly reverted back to sticking to Java as much as possible. I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrated too much attention on a much too high level language and I needed more experience with how a CPU interacts with memory and whats really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date. For obviously reasons, I could not use assembly language to code on a daily basis, it was mostly reserved for fun diversions. After learning more about the computer through this experience I then realized that C++ is so much closer to the "level of 1's and 0's" than Java was, but I still felt it to be incredibly obtuse, like a swiss army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagent" to not abstract what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the stl vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks like almost entirely C with containers from C++ thrown in, the only feature I use from C++. I'd like to know if its consider okay in practice to use just one feature of C++, and ignore the rest in favor of C type code?

    Read the article

  • Nested WHILE loops not acting as expected - Javascript / Google Apps Script

    - by dthor
    I've got a function that isn't acting as intended. Before I continue, I'd like preface this with the fact that I normally program in Mathematica and have been tasked with porting over a Mathematica function (that I wrote) to JavaScript so that it can be used in a Google Docs spreadsheet. I have about 3 hours of JavaScript experience... The entire (small) project is calculating the Gross Die per Wafer, given a wafer and die size (among other inputs). The part that isn't working is where I check to see if any corner of the die is outside of the effective radius, Reff. The function takes a list of X and Y coordinates which, when combined, create the individual XY coord of the center of the die. That is then put into a separate function "maxDistance" that calculates the distance of each of the 4 corners and returns the max. This max value is checked against Reff. If the max is inside the radius, 1 is added to the die count. // Take a list of X and Y values and calculate the Gross Die per Wafer function CoordsToGDW(Reff,xSize,ySize,xCoords,yCoords) { // Initialize Variables var count = 0; // Nested loops create all the x,y coords of the die centers for (var i = 0; i < xCoords.length; i++) { for (var j = 0; j < yCoords.length, j++) { // Add 1 to the die count if the distance is within the effective radius if (maxDistance(xCoords[i],yCoords[j],xSize,ySize) <= Reff) {count = count + 1} } } return count; } Here are some examples of what I'm getting: xArray={-52.25, -42.75, -33.25, -23.75, -14.25, -4.75, 4.75, 14.25, 23.75, 33.25, 42.75, 52.25, 61.75} yArray={-52.5, -45.5, -38.5, -31.5, -24.5, -17.5, -10.5, -3.5, 3.5, 10.5, 17.5, 24.5, 31.5, 38.5, 45.5, 52.5, 59.5} CoordsToGDW(45,9.5,7.0,xArray,yArray) returns: 49 (should be 72) xArray={-36, -28, -20, -12, -4, 4, 12, 20, 28, 36, 44} yArray={-39, -33, -27, -21, -15, -9, -3, 3, 9, 15, 21, 27, 33, 39, 45} CoordsToGDW(32.5,8,6,xArray,yArray) returns: 39 (should be 48) I know that maxDistance() is returning the correct values. So, where's my simple mistake? Also, please forgive me writing some things in Mathematica notation... Edit #1: A little bit of formatting. Edit #2: Per showi, I've changed WHILE loops to FOR loops and replaced <= with <. Still not the right answer. It did clean things up a bit though... Edit #3: What I'm essentially trying to do is take [a,b] and [a,b,c] and return [[a,a],[a,b],[a,c],[b,a],[b,b],[b,c]]

    Read the article

  • Jumping into argv?

    - by jth
    Hi, I`am experimenting with shellcode and stumbled upon the nop-slide technique. I wrote a little tool that takes buffer-size as a parameter and constructs a buffer like this: [ NOP | SC | RET ], with NOP taking half of the buffer, followed by the shellcode and the rest filled with the (guessed) return address. Its very similar to the tool aleph1 described in his famous paper. My vulnerable test-app is the same as in his paper: int main(int argc, char **argv) { char little_array[512]; if(argc>1) strcpy(little_array,argv[1]); return 0; } I tested it and well, it works: jth@insecure:~/no_nx_no_aslr$ ./victim $(./exploit 604 0) $ exit But honestly, I have no idea why. Okay, the saved eip was overwritten as intended, but instead of jumping somewhere into the buffer, it jumped into argv, I think. gdb showed up the following addresses before strcpy() was called: (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x80483ed in main (victim.c:7); saved eip 0x154b56 source language c. Arglist at 0xbffff1e8, args: argc=2, argv=0xbffff294 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec Address of little_array: (gdb) print &little_array[0] $1 = 0xbfffefe8 "\020" After strcpy(): (gdb) i f Stack level 0, frame at 0xbffff1f0: eip = 0x804840d in main (victim.c:10); saved eip 0xbffff458 source language c. Arglist at 0xbffff1e8, args: argc=-1073744808, argv=0xbffff458 Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0 Saved registers: ebp at 0xbffff1e8, eip at 0xbffff1ec So, what happened here? I used a 604 byte buffer to overflow little_array, so he certainly overwrote saved ebp, saved eip and argc and also argv with the guessed address 0xbffff458. Then, after returning, EIP pointed at 0xbffff458. But little_buffer resides at 0xbfffefe8, that`s a difference of 1136 byte, so he certainly isn't executing little_array. I followed execution with the stepi command and well, at 0xbffff458 and onwards, he executes NOPs and reaches the shellcode. I'am not quite sure why this is happening. First of all, am I correct that he executes my shellcode in argv, not little_array? And where does the loader(?) place argv onto the stack? I thought it follows immediately after argc, but between argc and 0xbffff458, there is a gap of 620 bytes. How is it possible that he successfully "lands" in the NOP-Pad at Address 0xbffff458, which is way above the saved eip at 0xbffff1ec? Can someone clarify this? I have actually no idea why this is working. My test-machine is an Ubuntu 9.10 32-Bit Machine without ASLR. victim has an executable stack, set with execstack -s. Thanks in advance.

    Read the article

  • Why does the Ternary\Conditional operator seem significantly faster

    - by Jodrell
    Following on from this question, which I have partially answered. I compile this console app in x64 Release Mode, with optimizations on, and run it from the command line without a debugger attached. using System; using System.Diagnostics; class Program { static void Main() { var stopwatch = new Stopwatch(); var ternary = Looper(10, Ternary); var normal = Looper(10, Normal); if (ternary != normal) { throw new Exception(); } stopwatch.Start(); ternary = Looper(10000000, Ternary); stopwatch.Stop(); Console.WriteLine( "Ternary took {0}ms", stopwatch.ElapsedMilliseconds); stopwatch.Start(); normal = Looper(10000000, Normal); stopwatch.Stop(); Console.WriteLine( "Normal took {0}ms", stopwatch.ElapsedMilliseconds); if (ternary != normal) { throw new Exception(); } Console.ReadKey(); } static int Looper(int iterations, Func<bool, int, int> operation) { var result = 0; for (int i = 0; i < iterations; i++) { var condition = result % 11 == 4; var value = ((i * 11) / 3) % 5; result = operation(condition, value); } return result; } static int Ternary(bool condition, int value) { return value + (condition ? 2 : 1); } static int Normal(bool condition, int value) { if (condition) { return 2 + value; } return 1 + value; } } I don't get any exceptions and the output to the console is something close to: Ternary took 107ms Normal took 230ms When I break down the CIL for the two logical functions I get this, ... Ternary ... { : ldarg.1 // push second arg : ldarg.0 // push first arg : brtrue.s T // if first arg is true jump to T : ldc.i4.1 // push int32(1) : br.s F // jump to F T: ldc.i4.2 // push int32(2) F: add // add either 1 or 2 to second arg : ret // return result } ... Normal ... { : ldarg.0 // push first arg : brfalse.s F // if first arg is false jump to F : ldc.i4.2 // push int32(2) : ldarg.1 // push second arg : add // add second arg to 2 : ret // return result F: ldc.i4.1 // push int32(1) : ldarg.1 // push second arg : add // add second arg to 1 : ret // return result } Whilst the Ternary CIL is a little shorter, it seems to me that the execution path through the CIL for either function takes 3 loads and 1 or 2 jumps and a return. Why does the Ternary function appear to be twice as fast? I understand that, in practice, they are both very quick, and indeed quick enough, but I would like to understand the discrepancy.
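    One thing worth double-checking in the harness above, independent of the CIL: Stopwatch.Start() resumes the running total rather than restarting it, so the second ElapsedMilliseconds reading includes the first interval unless the watch is reset in between. If that is what is happening here, "Normal took 230ms" would be roughly first run plus second run rather than the cost of Normal alone. A minimal sketch of isolating each measurement (Stopwatch.Restart() exists from .NET 4; on older versions use Reset() followed by Start()):

        // Hedged sketch: give each delegate its own stopwatch interval so the
        // two readings cannot overlap. Same Looper/Ternary/Normal as above.
        var stopwatch = new System.Diagnostics.Stopwatch();

        stopwatch.Restart();                 // reset to zero, then start
        var ternary = Looper(10000000, Ternary);
        stopwatch.Stop();
        Console.WriteLine("Ternary took {0}ms", stopwatch.ElapsedMilliseconds);

        stopwatch.Restart();                 // start again from zero
        var normal = Looper(10000000, Normal);
        stopwatch.Stop();
        Console.WriteLine("Normal took {0}ms", stopwatch.ElapsedMilliseconds);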

    Read the article

  • I need to modify a program to use arrays and a method call. Should I modify the running file, the data collection file, or both?

    - by g3n3rallyl0st
    I have to have multiple classes for this program. The problem is, I don't fully understand arrays and how they work, so I'm a little lost. I will post my program I have written thus far so you can see what I'm working with, but I don't expect anyone to DO my assignment for me. I just need to know where to start and I'll try to go from there. I think I need to use a double array since I will be working with decimals since it deals with money, and my method call needs to calculate total price for all items entered by the user. Please help: RUNNING FILE package inventory2; import java.util.Scanner; public class RunApp { public static void main(String[] args) { Scanner input = new Scanner( System.in ); DataCollection theProduct = new DataCollection(); String Name = ""; double pNumber = 0.0; double Units = 0.0; double Price = 0.0; while(true) { System.out.print("Enter Product Name: "); Name = input.next(); theProduct.setName(Name); if (Name.equalsIgnoreCase("stop")) { return; } System.out.print("Enter Product Number: "); pNumber = input.nextDouble(); theProduct.setpNumber(pNumber); System.out.print("Enter How Many Units in Stock: "); Units = input.nextDouble(); theProduct.setUnits(Units); System.out.print("Enter Price Per Unit: "); Price = input.nextDouble(); theProduct.setPrice(Price); System.out.print("\n Product Name: " + theProduct.getName()); System.out.print("\n Product Number: " + theProduct.getpNumber()); System.out.print("\n Amount of Units in Stock: " + theProduct.getUnits()); System.out.print("\n Price per Unit: " + theProduct.getPrice() + "\n\n"); System.out.printf("\n Total cost for %s in stock: $%.2f\n\n\n", theProduct.getName(), theProduct.calculatePrice()); } } } DATA COLLECTION FILE package inventory2; public class DataCollection { String productName; double productNumber, unitsInStock, unitPrice, totalPrice; public DataCollection() { productName = ""; productNumber = 0.0; unitsInStock = 0.0; unitPrice = 0.0; } //setter methods public void setName(String name) { productName = name; } public void setpNumber(double pNumber) { productNumber = pNumber; } public void setUnits(double units) { unitsInStock = units; } public void setPrice(double price) { unitPrice = price; } //getter methods public String getName() { return productName; } public double getpNumber() { return productNumber; } public double getUnits() { return unitsInStock; } public double getPrice() { return unitPrice; } public double calculatePrice() { return (unitsInStock * unitPrice); } }

    Read the article

  • Coping with feelings of technical mediocrity

    - by Karim
    As I've progressed as a programmer, I've noticed more nuance and more areas I could study in depth. Along the way, my self-image has gone from, at one point, a "guru" to something much less, even mediocre or inadequate. Is this normal, or is it a sign of destructive, excessive ambition? Background: I started to program when I was still a kid, about 10 or 11 years old. I really enjoy my work and never get bored with it. It's amazing that somebody can be paid for what he really likes to do and would be doing anyway, even for free. When I first started to program, I felt proud of what I was doing; each application I built was a success to me, and after 2-3 years I had a feeling that I was a coding guru. It was a nice feeling. ;-) But the longer I was in the field and the more types of software I started to develop, the more I began to feel that I was completely wrong in thinking I was a guru. I felt that I wasn't even a mediocre developer. Each new field I start to work in gives me this feeling. When I once developed a device driver for a client, I saw how much I needed to learn about device drivers. When I developed a video filter for an application, I saw how much I still needed to learn about DirectShow, color spaces, and all the theory behind that. The worst was when I started to learn algorithms, several years ago. I knew the basic structures and algorithms like sorting, some types of trees, some hash tables, strings, etc., and when I really wanted to learn one group of structures, I learned about 5-6 new types and saw that even this small group has several hundred subtypes. It's depressing how little time people have in their lives to learn all this stuff. I'm now a software developer with about 10 years of experience, and I still feel that I'm not a proficient developer when I think about what others in the industry do.

    Read the article

  • Approaches for Content-based Item Recommendations

    - by PartlyCloudy
    Hello, I'm currently developing an application where I want to group similar items. Items (like videos) can be created by users and also their attributes can be altered or extended later (like new tags). Instead of relying on users' preferences as most collaborative filtering mechanisms do, I want to compare item similarity based on the items' attributes (like similar length, similar colors, similar set of tags, etc.). The computation is necessary for two main purposes: Suggesting x similar items for a given item and for clustering into groups of similar items. My application so far is follows an asynchronous design and I want to decouple this clustering component as far as possible. The creation of new items or the addition of new attributes for an existing item will be advertised by publishing events the component can then consume. Computations can be provided best-effort and "snapshotted", which means that I'm okay with the best result possible at a given point in time, although result quality will eventually increase. So I am now searching for appropriate algorithms to compute both similar items and clusters. At important constraint is scalability. Initially the application has to handle a few thousand items, but later million items might be possible as well. Of course, computations will then be executed on additional nodes, but the algorithm itself should scale. It would also be nice if the algorithm supports some kind of incremental mode on partial changes of the data. My initial thought of comparing each item with each other and storing the numerical similarity sounds a little bit crude. Also, it requires n*(n-1)/2 entries for storing all similarities and any change or new item will eventually cause n similarity computations. Thanks in advance! UPDATE tl;dr To clarify what I want, here is my targeted scenario: User generate entries (think of documents) User edit entry meta data (think of tags) And here is what my system should provide: List of similar entries to a given item as recommendation Clusters of similar entries Both calculations should be based on: The meta data/attributes of entries (i.e. usage of similar tags) Thus, the distance of two entries using appropriate metrics NOT based on user votings, preferences or actions (unlike collaborative filtering). Although users may create entries and change attributes, the computation should only take into account the items and their attributes, and not the users associated with (just like a system where only items and no users exist). Ideally, the algorithm should support: permanent changes of attributes of an entry incrementally compute similar entries/clusters on changes scale something better than a simple distance table, if possible (because of the O(n²) space complexity)
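    As a concrete example of the attribute-based distance described above, here is a minimal sketch of scoring pairwise similarity from tag sets and picking the top-x neighbours for one item. It is written in C# purely for illustration; the Item shape and the choice of Jaccard similarity are assumptions, not a prescription:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Item
        {
            public string Id;
            public HashSet<string> Tags = new HashSet<string>();
            public double LengthSeconds;   // example numeric attribute
        }

        static class Similarity
        {
            // Jaccard similarity over tag sets: |A ∩ B| / |A ∪ B|, in [0, 1].
            public static double Jaccard(HashSet<string> a, HashSet<string> b)
            {
                if (a.Count == 0 && b.Count == 0) return 0.0;
                int intersection = a.Count(b.Contains);
                int union = a.Count + b.Count - intersection;
                return (double)intersection / union;
            }

            // The x most similar items to 'target' among 'candidates'.
            public static IEnumerable<Item> MostSimilar(Item target,
                IEnumerable<Item> candidates, int x)
            {
                return candidates
                    .Where(c => c.Id != target.Id)
                    .OrderByDescending(c => Jaccard(target.Tags, c.Tags))
                    .Take(x);
            }
        }

    Numeric attributes (length, colour histograms, etc.) can be folded in by normalising each to [0, 1] and taking a weighted sum with the tag score. The same per-pair metric is then what feeds clustering approaches such as k-means over feature vectors, or locality-sensitive hashing if you want to avoid the full O(n²) comparison table mentioned above.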

    Read the article

  • Printing a variable only when it changes?

    - by user1781639
    First off, my question was a little vague or confusing since I'm not really sure how to phrase my question to be specific. I'm trying to query a database of stockists for a Knitting company (school project using PHP) but I'm looking to print the city as a heading instead of with each stockists information. Here is what I have at the moment: $sql = "SELECT * FROM mc16korustockists where locale = 'south'"; $result = pg_exec($sql); $nrows = pg_numrows($result); print $nrows; $items = pg_fetch_all($result); print_r($items); for ($i=0; $i<$nrows2; $i++) { print "<h2>"; print $items[$i]['city']; print "</h2>"; print $items[$i]['name']; print $items[$i]['address']; print $items[$i]['city']; print $items[$i]['phone']; print "<br />"; print "<br />"; } I'm querying the database for all of the data in it, the rows being ref, name, address, city and phone, and executing it. Querying the number of rows then using that to determine how many iterations for the loop to run is all fine but what I'd like to have is for the h2 heading to appear above the for ($i=0;) line. Trying just breaks my page so that might be out of the question. I figure I'd have to count the number of entries in 'city' until it detects a change then change the heading to that name I think? That or make a heap of queries and set a variable for each name but at point, I might as well do it manually (and I highly doubt it would be best practice). Oh, and I'd welcome any critiques to my PHP since I'm just starting out. Thanks and if you need any more information, just ask! P.S. Our class is learning with PostgreSQL instead of MySQL as you can see in the tags.

    Read the article

  • Replace Components.classesByID with document.implementation.createDocument

    - by Earl Smith
    I am not the author of this code, but it is no longer maintained. So I am trying to fix it, but I have very little experience in javascript. Since Firefox 9, Components.classesByID["{3a9cd622-264d-11d4-ba06-0060b0fc76dd}"]. has been obsolete. Instead, it is suggested that document.implementation.createDocument be used. Can someone here show me how to implement these changes? I seem to be, just banging my head with everything I have tried. The example given at Mozilla developer network is: var doc = document.implementation.createDocument ("http://www.w3.org/1999/xhtml", "html", null); var body = document.createElementNS("http://www.w3.org/1999/xhtml", "body"); body.setAttribute("id", "abc"); doc.documentElement.appendChild(body); alert(doc.getElementById("abc")); // [object HTMLBodyElement] and the code in the .jsm I am trying to fix is: this.fgImageData = {}; this.fgImageData["check"] = [ " *", " **", "* ***", "** *** ", "***** ", " *** ", " * "]; this.fgImageData["radio"] = [ " **** ", "******", "******", "******", "******", " **** "]; this.fgImageData["menu-ltr"] = [ "* ", "** ", "*** ", "****", "*** ", "** ", "* "]; this.fgImageData["menu-rtl"] = [ " *", " **", " ***", "****", " ***", " **", " *"]; // I think I'm doing something slightly wrong when creating the document // but I'm not sure. It works though. *FIX* var domi = Components.classesByID["{3a9cd622-264d-11d4-ba06-0060b0fc76dd}"]. createInstance(Components.interfaces.nsIDOMDOMImplementation); this.document = domi.createDocument("http://www.w3.org/1999/xhtml", "html", null); this.canvas = this.document.createElementNS("http://www.w3.org/1999/xhtml", "html:canvas"); for(var name in this.fgImageData) { if (this.fgImageData.hasOwnProperty(name)) { var data = this.fgImageData[name]; var width = data[0].length; var height = data.length; this.canvas.width = width; this.canvas.height = height; var g = this.canvas.getContext("2d"); g.clearRect(0, 0, width, height); var idata = g.getImageData(0, 0, width, height); for(var y=0, oy=0; y<height; y++, oy+=idata.width*4) for(var x=0, ox=oy; x<width; x++, ox+=4) idata.data[ox+3] = data[y][x] == " " ? 0 : 255; this.fgImageData[name] = idata; } } },

    Read the article

  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the indescriptive title, however it's the best I could think of for the moment. Basically, I've written a singleton class that loads files into a database. These files are typically large, and take hours to process. What I am looking for is to make a method where I can have this class running, and be able to call methods from within it, even if it's calling class is shut down. The singleton class is simple. It starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's al little like this: public sealed class BulkFileLoader { static BulkFileLoader instance = null; int currentCount = 0; BulkFileLoader() public static BulkFileLoader Instance { // Instanciate the instance class if necessary, and return it } public void Go() { // kick of 'ProcessFile' thread } public void GetCurrentCount() { return currentCount; } private void ProcessFile() { while (more rows in the import file) { // insert the row into the database currentCount++; } } } The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get realtime updates on the number of rows its done so far using the GetCurrentCount() method. This works fine, except the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed, and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class. I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it around as a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now the problem I have is that cannot get updates on the current count, since if I try and get the instance of BulkFileLoader (which, as mentioned before is a singleton), it creates a new instance, rather than returning the instance that is currently in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine. In the end, I want to be able to kick off the BulkFileLoader, and at any time be able to find out how many rows it's processed. However, that is even if I close the application I used to start it. Can anyone see a solution to my problem?
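    For the console-host variant, one way to let a separate process ask the running host for its count is to expose a tiny IPC endpoint inside the host, for example over a WCF named pipe. The sketch below is illustrative only: it assumes .NET 3.0+ and System.ServiceModel, the names are made up, and it also assumes GetCurrentCount() is changed to return int (in the class above it is declared void but returns a value):

        using System;
        using System.ServiceModel;

        // The long-running console host publishes the loader's progress over a
        // local named pipe, so a status UI in another process can query it
        // without needing to share the singleton instance.
        [ServiceContract]
        public interface IProgressService
        {
            [OperationContract]
            int GetCurrentCount();
        }

        public class ProgressService : IProgressService
        {
            public int GetCurrentCount()
            {
                return BulkFileLoader.Instance.GetCurrentCount();
            }
        }

        // In the console host, after BulkFileLoader.Instance.Go():
        //   var host = new ServiceHost(typeof(ProgressService),
        //       new Uri("net.pipe://localhost/bulkfileloader"));
        //   host.AddServiceEndpoint(typeof(IProgressService),
        //       new NetNamedPipeBinding(), "progress");
        //   host.Open();

        // In any client process that wants a reading:
        //   var factory = new ChannelFactory<IProgressService>(
        //       new NetNamedPipeBinding(),
        //       new EndpointAddress("net.pipe://localhost/bulkfileloader/progress"));
        //   Console.WriteLine(factory.CreateChannel().GetCurrentCount());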

    Read the article

  • codeIgniter: pass parameter to a select query from previous query

    - by krike
    I'm creating a little management tool for the browser game travian. So I select all the villages from the database and I want to display some content that's unique to each of the villages. But in order to query for those unique details I need to pass the id of the village. How should I do this? this is my code (controller): function members_area() { global $site_title; $this->load->model('membership_model'); if($this->membership_model->get_villages()) { $data['rows'] = $this->membership_model->get_villages(); $id = 1;//this should be dynamic, but how? if($this->membership_model->get_tasks($id)): $data['tasks'] = $this->membership_model->get_tasks($id); endif; } $data['title'] = $site_title." | Your account"; $data['main_content'] = 'account'; $this->load->view('template', $data); } and this is the 2 functions I'm using in the model: function get_villages() { $q = $this->db->get('villages'); if($q->num_rows() > 0) { foreach ($q->result() as $row) { $data[] = $row; } return $data; } } function get_tasks($id) { $this->db->select('name'); $this->db->from('tasks'); $this->db->where('villageid', $id); $q = $this->db->get(); if($q->num_rows() > 0) { foreach ($q->result() as $task) { $data[] = $task; } return $data; } } and of course the view: <?php foreach($rows as $r) : ?> <div class="village"> <h3><?php echo $r->name; ?></h3> <ul> <?php foreach($tasks as $task): ?> <li><?php echo $task->name; ?></li> <?php endforeach; ?> </ul> <?php echo anchor('site/add_village/'.$r->id.'', '+ add new task'); ?> </div> <?php endforeach; ?> ps: please do not remove the comment in the first block of code!

    Read the article

  • are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered but was never quite sure about. So it is strictly a matter of curiosity, not a real problem. As far as I understand, what you do something like #include <cstdlib> everything (except macros of course) are declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following: #include <stdlib.h> namespace std { using ::abort; // etc.... } Which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every c function itself placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace). Which is of course a lot of effort with little to no benefit. The essence of my question is, is the following program strictly conforming and guaranteed to work? #include <cstdio> int main() { ::printf("hello world\n"); } EDIT: The closest I've found is this (17.4.1.2p4): Except as noted in clauses 18 through 27, the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C (Clause 7), or ISO/IEC:1990 Programming Languages—C AMENDMENT 1: C Integrity, (Clause 7), as appropriate, as if by inclusion. In the C + + Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std. which to be honest I could interpret either way. "the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C" tells me that they may be required in the global namespace, but "In the C + + Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std." says they are in std (but doesn't specify any other scoped they are in).

    Read the article

  • How to design service that can provide interface as JAX-WS web service, or via JMS, or as local meth

    - by kevinegham
    Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or called directly from another service in the same container? Here's some more context: Currently I am responsible for a service (let's call it Service X) with the following properties: Interface definition is a human readable document kept up-to-date manually. Accepts HTTP form-encoded requests to a single URL. Sends plain old XML responses (no schema). Uses Apache to accept requests + a proprietary application server (not servlet or EJB based) containing all logic which runs in a seperate tier. Makes heavy use of a relational database. Called both by internal applications written in a variety of languages and also by a small number of third-parties. I want to (or at least, have been told to!): Switch to a well-known (pref. open source) JEE stack such as JBoss, Glassfish, etc. Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A. Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In future we might consider a more lightweight API too (REST + JSON? Google Protocol Buffers?) but that's a nice to have. Additional consideration are: On a smaller deployment, Service A and Service B will likely to running on the same machine and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other. Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on re-using too much of the existing code for the new services. I'm happy with either contract-first (WSDL I guess) or (annotated) code-first development. Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file and using wsimport + a little dummy code to turn that into a WAR file which I've deployed.

    Read the article

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the computation requested previously has already completed and the result is available, I can cache it and re-use it. However, the time window in which duplicate computation can be requested may be arbitrarily small. e.g. I could get a thousand or a million messages requesting the same expensive computation at the same instant for all practical purposes. There is a commercial product called Gigaspaces that supposedly handles this situation. However there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through the framework, it seems that a framework solution could make a lot of sense here. Here is what I am proposing for the Akka framework to do: 1. Create a trait to indicate a type of messages (say, "ExpensiveComputation" or something similar) that are to be subject to the following caching approach. 2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say LRU) replacement etc. Akka can also choose to cache only the results of messages that were expensive to process; the messages that took very little time to process can be re-processed again if needed; no need to waste precious buffer space caching them and their results. 3. When identical messages (received within that time window, possibly "at the same time instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically, and essentially, the duplicate messages would never get received by a new actor for processing; they would silently vanish and the result from processing it once (whether that computation was already done in the past, or ongoing right then) would get sent to all appropriate recipients (immediately if already available, and upon completion of the computation if not). Note that messages should be considered identical even if the "reply" fields are different, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side-effects, for the caching optimization suggested to work and not change the program semantics at all. If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know. Thanks, Is Awesome, Scala
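    Independent of whether Akka ever ships this, the per-node half of the idea, collapsing concurrent identical requests onto a single in-flight computation and caching the finished result, can be sketched in a few lines. The snippet below is written in C# only to show the shape of the technique (a map from request key to a lazily started future); it is not Akka or Scala API. The key would be the hash of the message with its reply field stripped, exactly as described above:

        // Conceptual sketch (.NET 4.5-style): requests with the same key share one
        // in-flight Task and, once it finishes, its cached result. Lazy<> ensures
        // the expensive work starts at most once per key even under heavy
        // concurrent arrival of identical requests.
        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        public class Memoizer<TKey, TResult>
        {
            private readonly ConcurrentDictionary<TKey, Lazy<Task<TResult>>> _cache =
                new ConcurrentDictionary<TKey, Lazy<Task<TResult>>>();

            public Task<TResult> GetOrCompute(TKey key, Func<TKey, TResult> compute)
            {
                var lazy = _cache.GetOrAdd(key,
                    k => new Lazy<Task<TResult>>(() => Task.Run(() => compute(k))));
                return lazy.Value;   // every caller for this key gets the same Task
            }
        }

    Bounding memory with a time window or LRU eviction, as suggested above, would just mean removing entries from the dictionary; the important property is that the computation itself stays side-effect free, so replaying it after eviction is always safe.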

    Read the article

  • PHP echo query result in Class??

    - by Jerry
    Hi all I have a question about PHP Class. I am trying to get the result from Mysql via PHP. I would like to know if the best practice is to display the result inside the Class or store the result and handle it in html. For example, display result inside the Class class Schedule { public $currentWeek; function teamQuery($currentWeek){ $this->currentWeek=$currentWeek; } function getSchedule(){ $connection = mysql_connect(DB_SERVER,DB_USER,DB_PASS); if (!$connection) { die("Database connection failed: " . mysql_error()); } $db_select = mysql_select_db(DB_NAME,$connection); if (!$db_select) { die("Database selection failed: " . mysql_error()); } $scheduleQuery=mysql_query("SELECT guest, home, time, winner, pickEnable FROM $this->currentWeek ORDER BY time", $connection); if (!$scheduleQuery){ die("database has errors: ".mysql_error()); } while($row=mysql_fetch_array($scheduleQuery, MYSQL_NUMS)){ //display the result..ex: echo $row['winner']; } mysql_close($scheduleQuery); //no returns } } Or return the query result as a variable and handle in php class Schedule { public $currentWeek; function teamQuery($currentWeek){ $this->currentWeek=$currentWeek; } function getSchedule(){ $connection = mysql_connect(DB_SERVER,DB_USER,DB_PASS); if (!$connection) { die("Database connection failed: " . mysql_error()); } $db_select = mysql_select_db(DB_NAME,$connection); if (!$db_select) { die("Database selection failed: " . mysql_error()); } $scheduleQuery=mysql_query("SELECT guest, home, time, winner, pickEnable FROM $this->currentWeek ORDER BY time", $connection); if (!$scheduleQuery){ die("database has errors: ".mysql_error()); // create an array } $ret = array(); while($row=mysql_fetch_array($scheduleQuery, MYSQL_NUMS)){ $ret[]=$row; } mysql_close($scheduleQuery); return $ret; // and handle the return value in php } } Two things here: I found that returned variable in php is a little bit complex to play with since it is two dimension array. I am not sure what the best practice is and would like to ask you experts opinions. Every time I create a new method, I have to recreate the $connection variable: see below $connection = mysql_connect(DB_SERVER,DB_USER,DB_PASS); if (!$connection) { die("Database connection failed: " . mysql_error()); } $db_select = mysql_select_db(DB_NAME,$connection); if (!$db_select) { die("Database selection failed: " . mysql_error()); } It seems like redundant to me. Can I only do it once instead of calling it anytime I need a query? I am new to php class. hope you guys can help me. thanks.

    Read the article

  • Truth tables in code? How to structure state machine?

    - by HanClinto
    I have a (somewhat) large truth table / state machine that I need to implement in my code (embedded C). I anticipate the behavior specification of this state machine to change in the future, and so I'd like to keep this easily modifiable in the future. My truth table has 4 inputs and 4 outputs. I have it all in an Excel spreadsheet, and if I could just paste that into my code with a little formatting, that would be ideal. I was thinking I would like to access my truth table like so: u8 newState[] = decisionTable[input1][input2][input3][input4]; And then I could access the output values with: setOutputPin( LINE_0, newState[0] ); setOutputPin( LINE_1, newState[1] ); setOutputPin( LINE_2, newState[2] ); setOutputPin( LINE_3, newState[3] ); But in order to get that, it looks like I would have to do a fairly confusing table like so: static u8 decisionTable[][][][][] = {{{{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }}, {{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }}}, {{{ 0, 0, 1, 1 }, { 0, 1, 1, 1 }}, {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}}, {{{{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}, {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}, {{{ 0, 1, 1, 1 }, { 0, 1, 1, 1 }}, {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}}; Those nested brackets can be somewhat confusing -- does anyone have a better idea for how I can keep a pretty looking table in my code? Thanks! Edit based on HUAGHAGUAH's answer: Using an amalgamation of everyone's input (thanks -- I wish I could "accept" 3 or 4 of these answers), I think I'm going to try it as a two dimensional array. I'll index into my array using a small bit-shifting macro: #define SM_INPUTS( in0, in1, in2, in3 ) ((in0 << 0) | (in1 << 1) | (in2 << 2) | (in3 << 3)) And that will let my truth table array look like this: static u8 decisionTable[][] = { { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }}; And I can then access my truth table like so: decisionTable[ SM_INPUTS( line1, line2, line3, line4 ) ] I'll give that a shot and see how it works out. I'll also be replacing the 0's and 1's with more helpful #defines that express what each state means, along with /**/ comments that explain the inputs for each line of outputs. Thanks for the help, everyone!

    Read the article

  • Unit Testing - Am I doing it right?

    - by baron
    Hi everyone, Basically I have been programing for a little while and after finishing my last project can fully understand how much easier it would have been if I'd have done TDD. I guess I'm still not doing it strictly as I am still writing code then writing a test for it, I don't quite get how the test becomes before the code if you don't know what structures and how your storing data etc... but anyway... Kind of hard to explain but basically lets say for example I have a Fruit objects with properties like id, color and cost. (All stored in textfile ignore completely any database logic etc) FruitID FruitName FruitColor FruitCost 1 Apple Red 1.2 2 Apple Green 1.4 3 Apple HalfHalf 1.5 This is all just for example. But lets say I have this is a collection of Fruit (it's a List<Fruit>) objects in this structure. And my logic will say to reorder the fruitids in the collection if a fruit is deleted (this is just how the solution needs to be). E.g. if 1 is deleted, object 2 takes fruit id 1, object 3 takes fruit id2. Now I want to test the code ive written which does the reordering, etc. How can I set this up to do the test? Here is where I've got so far. Basically I have fruitManager class with all the methods, like deletefruit, etc. It has the list usually but Ive changed hte method to test it so that it accepts a list, and the info on the fruit to delete, then returns the list. Unit-testing wise: Am I basically doing this the right way, or have I got the wrong idea? and then I test deleting different valued objects / datasets to ensure method is working properly. [Test] public void DeleteFruit() { var fruitList = CreateFruitList(); var fm = new FruitManager(); var resultList = fm.DeleteFruitTest("Apple", 2, fruitList); //Assert that fruitobject with x properties is not in list ? how } private static List<Fruit> CreateFruitList() { //Build test data var f01 = new Fruit {Name = "Apple",Id = 1, etc...}; var f02 = new Fruit {Name = "Apple",Id = 2, etc...}; var f03 = new Fruit {Name = "Apple",Id = 3, etc...}; var fruitList = new List<Fruit> {f01, f02, f03}; return fruitList; }
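    For the "Assert that fruit object with x properties is not in list" part, one possible shape is sketched below. It assumes DeleteFruitTest returns the updated List<Fruit>, that ids are re-sequenced as described, and that NUnit plus System.Linq are available; adjust to your real property names:

        [Test]
        public void DeleteFruit_RemovesItemAndResequencesIds()
        {
            var fruitList = CreateFruitList();   // your helper shown above
            var fm = new FruitManager();

            var resultList = fm.DeleteFruitTest("Apple", 2, fruitList);

            // One fewer fruit, and no gap left in the ids.
            Assert.AreEqual(2, resultList.Count);
            CollectionAssert.AreEqual(new[] { 1, 2 },
                resultList.Select(f => f.Id).ToArray());

            // If Fruit had a distinguishing property (say Color), you could also
            // pin down which object survived, e.g.:
            // Assert.IsFalse(resultList.Any(f => f.Color == "Red"));
        }

    Because the deletion also renumbers ids, asserting "a fruit with Id 2 is not in the list" would actually fail here (the old fruit 3 inherits Id 2), which is why the sketch checks the count and the id sequence instead of the deleted object itself.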

    Read the article

  • Keeping Velocity Constant and Player in Position - Sidescrolling

    - by user2904951
    I'm working on a little mobile game with Cocos2D-X and Box2D. The point where I got stuck is the movement of a Box2D body (the main actor) and its corresponding sprite. I want to: move this body with a constant velocity along the x-axis, no matter whether it's rolling (it's a circle shape) upwards or downwards; keep the body nearly sticking to the ground on which it's rolling; and keep the body and the corresponding sprite in the center of the screen. What I tried: in the update() method I used body->SetLinearVelocity(b2Vec2(x,y)), adjusting it to higher/lower values whenever the body's velocity passed the constant value I wanted; I also tried setting very high y-values in body->SetLinearVelocity(b2Vec2(x,y)); and I first tried to use CCFollow with my playerSprite, but that also scrolls along the y-axis and I only need to scroll along the x-axis, so I then decided to move the whole layer containing the ambience (platforms etc.) to the left of the screen and my player body & player sprite to the right of the screen, adjusting the speed values to keep the player in the center of the screen. Well... the first approach didn't work as I wanted, because each time I set the velocity manually (I also tried body->applyLinearImpulse(...) when the body is moving upwards, as well as playing around with the value of velocityIterations in world->Step(...)) there's a small delay, which pushes the player body more or less further off the center of the screen. The second didn't work as expected either, because I needed to adjust the x-values when the body was moving upwards to keep it from slowing down, and this made my body even less sticky to the ground. CCFollow did a good job, except that I didn't want to scroll along the y-axis as well, and it forces the sprite passed to it to start in the center of the screen. Moving the whole layer brought no good results either; I spent a long time adjusting the movement speed of the layer and of the body so that they negate each other and the player stays nearly in the center of the screen. So my question is: does anyone have any kind of new approach for solving this cohesive bunch of problems? Cheers, Seb

    Read the article
