Search Results


  • What to do when you inherit an unmaintainable codebase?

    - by GordonM
    I'm currently working at a company with two other PHP developers and one junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, the only other person who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left as the only experienced developer on this codebase.

    Since joining this company I've pushed for better coding standards, project documentation, and so on, and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is trying to at least comment all his code before he leaves, but I think the codebase is simply too vast to document properly in the remaining time. Besides, even when he does comment, it still doesn't make things as clear as it could.

    If the system were better organized and documented I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without unintended knock-on effects in other modules. Naturally, there are no unit tests either, and I honestly don't think this codebase could be unit tested anyway, given how it's implemented. There also never seems to be enough time to get things done, even with three developers and one junior. With one developer and one junior, neither of whom had significant input into the early design of the system, I don't see how we could possibly keep the current system working, implement new features as needed, and develop a better-organized replacement all at the same time.

    Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it were just me and the junior developer left, I'd go for the latter option almost without question. However, there's also a team of front-end developers and content managers, and I'm worried what would become of them if I left and put them in a position where there would be no developers at all. The department might be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies, including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles, like the separation of application and business logic, are maintained.

    Orchestration or Integration? A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however, it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it.

    Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation - a simple product-ordering system, for example, might integrate a stock-checking service and a payment-processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product-ordering service and couple it with a business-rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or which have to wait pending manual intervention.

    BPEL vs BPMN. For some, with pre-existing SOA or business-process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration when using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process-modelling languages are to be used. The wrong decision could be costly further down the line.

    Design for performance: read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Opengl-es picking object

    - by lacas
    I've seen a lot of picking code for OpenGL ES, but nothing has worked. Can someone tell me what I'm missing? My code (assembled from tutorials/forums):

        Vec3 far = Camera.getPosition();
        Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0);
        Vec3 direction = far.sub(near);
        direction.normalize();
        Log.e("direction", direction.x+" "+direction.y+" "+direction.z);
        Ray mouseRay = new Ray(near, direction);
        for (int n = 0; n < ObjectFactory.objects.size(); n++) {
            if (ObjectFactory.objects.get(n) != null) {
                IObject obj = ObjectFactory.objects.get(n);
                float discriminant, b;
                float radius = 0.1f;
                b = -mouseRay.getOrigin().dot(mouseRay.getDirection());
                discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius * radius;
                discriminant = FloatMath.sqrt(discriminant);
                double x1 = b - discriminant;
                double x2 = b + discriminant;
                Log.e("asd", obj.getName() + " " + discriminant + " " + x1 + " " + x2);
            }
        }

    My camera vectors:

        // cam
        Vec3 position = new Vec3(-obj.getPosX()+x, obj.getPosZ()-0.3f, obj.getPosY()+z);
        Vec3 direction = new Vec3(-obj.getPosX(), obj.getPosZ(), obj.getPosY());
        Vec3 up = new Vec3(0.0f, -1.0f, 0.0f);
        Camera.set(position, direction, up);

    And my picking code:

        public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) {
            int[] viewport = getViewport();
            float[] modelview = getModelView();
            float[] projection = getProjection();
            float winX, winY;
            float[] position = new float[4];
            winX = (float)mouseX;
            winY = (float)Shared.screen.width - (float)mouseY;
            GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0);
            return new Vec3(position[0], position[1], position[2]);
        }

    My camera moves all the time in 3D space, and my actors/models move too. The camera follows one actor/model, and the user can move the camera on a circle around this model. How can I change the above code to make it work?
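    One thing worth checking, for what it's worth: the intersection test above never subtracts the object's position, so every sphere is effectively tested at the world origin. Below is a minimal self-contained Java sketch of a ray-sphere test that takes the sphere's centre into account; the float-array vectors and the method name are illustrative, not from the code above, and it assumes the ray direction is normalized.

        // Returns the smallest non-negative ray parameter t at which the ray
        // o + t*d hits the sphere centred at c with radius r, or -1 on a miss.
        static float intersectSphere(float[] o, float[] d, float[] c, float r) {
            // work with the ray origin relative to the sphere's centre
            float ox = o[0] - c[0], oy = o[1] - c[1], oz = o[2] - c[2];
            float b = ox * d[0] + oy * d[1] + oz * d[2];   // b = oc . d
            float cc = ox * ox + oy * oy + oz * oz - r * r;
            float disc = b * b - cc;                       // discriminant of t^2 + 2bt + cc = 0
            if (disc < 0) return -1f;                      // ray misses the sphere
            float s = (float) Math.sqrt(disc);
            float t = -b - s;                              // nearer root first
            if (t < 0) t = -b + s;                         // origin inside the sphere
            return t >= 0 ? t : -1f;
        }

    Checking the discriminant before taking the square root also avoids the NaN that FloatMath.sqrt produces when the ray misses.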

    Read the article

  • Play the Microsoft Game “Are You Certifiable?”

    - by Mysticgeek
    Want to know if you have what it takes to be certified by Microsoft? Today we check out an enjoyable way to practice and test your IT knowledge of Microsoft products. There are two modes: one where you log in with your Live account so you can save your progress and play additional levels, and Guest Play, where you can only play one episode and scores are not saved. If you log in with your Live account, it's obvious that Microsoft wants to sell you some certification courses, so just be aware of that.

    Playing the Game. We'll take a look at Guest Play just so you get a sense of what the game is about. Enter a username and pick an avatar, then read the instructions. We won't go over them all here; there are a lot of options, and points are scored by correct answers and the amount of time it takes to answer them, you get vouchers to play a question before the answers are shown, and so on. Once you start playing, you get certification questions. You can take as much time to read each question as you want, then hit the Answer button when you're ready. Now you have four answers to choose from; notice the time ticking down, so you want to answer as quickly as possible. After selecting an answer, you're told whether it is correct, then given an answer explanation along with your score. You can flag the topic so it comes up again, which is a good way to get repetition on various topics - that really helps when taking the cert tests. If you get an answer wrong, you still get an answer explanation, which is nice, so you can learn and better understand the topic.

    Conclusion. This game is definitely not for everyone, only those who are curious or want a fun way to practice for Microsoft certifications. If you are interested in a cert from Microsoft, it's a fun way to practice up. Play Are You Certifiable?

    Read the article

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only learning capacity, of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as it had obviously worked in one project and so was easily moved to a new one (probably by the same developer who originally wrote it).

    I mentioned this in one of the programmer meetings we have occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into the possibility. However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment; we have projects mainly in MVC 2.0 but are starting to branch out to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods.

    Initially, when creating this DLL, I included some shared classes that could be used across any project type (web, client, etc.), but then I started adding some shared modules that would be useful in our MVC applications only. This meant I had to include a reference to some Microsoft web DLLs in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL that references web-specific libraries but could also be used in a client application. Not only that: our DLL initially references MVC 2.0, and we will eventually move all projects onto MVC 3.0, yet I expect a lot of the classes in this library to still be relevant to MVC 3.

    The code within this DLL is separated into its own namespaces, such as:

        CompanyDLL.Primitives
        CompanyDLL.Web.Mvc
        CompanyDLL.Helpers

    So, my questions are:

    1. Is it OK to do a shared library like this, or, if we have web-specific features in it, should we create a separate web DLL targeted at a specific framework or MVC version?

    2. If it's OK, what kind of issues might we face when using a library that references MVC 2 in an MVC 3 project, for example? I would think we might run into some sort of compatibility issue, or issues where developers using the library don't realize they need the MVC 2.0 libraries when they only want to use some of the generic classes.

    The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. However, the number of times I've seen classes and methods copied across projects because they are proven, tested code is a bit unnerving, to be perfectly honest!

    Read the article

  • Which programming language to get into?

    - by user602479
    I'm ending my third term in a few weeks, so I have some spare time coming up. I'd like to spend it seriously digging into programming. My problem: I'm not sure which language to begin with. Just to be clear, I don't want to start a language-y-compared-to-language-z discussion; there are some other issues that play a major role here.

    In my 5th term I'm going to be participating in a major practical course which will include either Java or C programming. It will take a lot of time and energy, as I found out while talking to a few students who passed the final exams (only 15% pass on their first try). Which practical course I will take is decided randomly. My skills so far are the absolute basics of Java and C programming. I know the different data types and how to handle them, objects, pointers, thread programming, etc. All of that is on a very low level, though.

    My question now is: which language should I start seriously practicing?

    Java: I did my first GUIs with this language. I'm familiar with Eclipse, but I need a project to work on (which I don't have) to really keep me pushing. Besides that, I don't think it would help me if I have to do C in a year.

    C: As with Java, I can't think of a personal project to keep me working and keep me interested in programming. If I get assigned to Java in a year, this wouldn't give me any advantages either, would it? (No objects, etc.)

    Objective-C: I recently came up with this idea. I have a Mac; I'm not really familiar with Xcode, but I have one or two personal projects I'd like to work on. Further, I would be working with objects (as in Java) and C language constructs, which would both be great for the practical course in a year.

    What do you think I should begin with? Should I just stick to Java and hope for the best, force myself through C, or start (nearly) completely from the beginning with Objective-C? Maybe you folks could give me some good advice that would stop me from switching from one language to the next?

    Read the article

  • Fail to start windows after Ubuntu 11.10 install

    - by user49995
    Computer: HP Pavilion dv7-6140eo. OS: originally Win7.

    I recently decided to try out Ubuntu, and I decided to dual-boot it with Windows 7. First I googled some how-tos, then I downloaded Ubuntu onto a memory stick and made a second partition (I originally had only one partition, which I shrank, and I used the unallocated space to install onto during the Ubuntu install). During the install I set the format type to ext4 (or something; it was the default option), chose the "at the beginning" option, and set the mount point to "/". The install was successful.

    However, when I restarted my computer I wasn't able to choose which operating system to start; it went right into Windows. After showing the Windows logo for half a second, the machine reboots and I get a blue screen (see the bottom of this post). Trying to fix it, I deleted the newly made partition I had just installed Ubuntu onto (seeing it wasn't working either). This made no difference. I proceeded to install Ubuntu again so I would at least have a functioning computer, and now Ubuntu works fine (I'm on it now). The only difference at start-up is that I get a GRUB menu asking me to choose between several options, including Linux and Windows 7 (loader).

    Now, if I choose Windows 7, I get the message "Windows was unable to start. A recent software or hardware change might be the cause". It recommends the first of the two options it provides: starting the start-up repair tool. The second option is starting Windows normally. If I start Windows normally, the same thing happens as before.

    My computer does not have a Windows installation CD. However, it has (at least it used to, if I haven't screwed that up too) a 17 GB recovery partition. In addition, I made an image of the computer onto an external hard drive when I first got it. I have no idea how to use either, though. If anyone has any idea how I can make Windows work again or reinstall it (I've already backed up my files), it would be greatly appreciated. I'd still prefer to dual-boot between two functioning operating systems, but I will settle for a functioning Windows 7. Thanks a lot for any replies.

    Blue screen:

        A problem has been detected and Windows has been shut down to prevent damage to your computer.
        If this is the first time you've seen this Stop error screen, restart your computer.
        If this screen appears again, follow these steps:
        Check for viruses on your computer. Remove any newly installed hard drives or hard drive controllers.
        Check your hard drive to make sure it is properly configured and terminated.
        Run CHKDSK /F to check for hard drive corruption, and then restart your computer.

        Technical information:
        *** STOP: 0x0000007B (0xFFFFF880009A97E8, 0xFFFFFFFFC0000034, 0x0000000000000000, 0x0000000000000000)

    Read the article

  • My Doors - Why Standards Matter to Business

    - by [email protected]
    By Brian Dayton on April 8, 2010 9:27 PM

    "Standards save money." "Standards accelerate projects." "Standards make better solutions." What do these statements mean to you? You buy technology solutions like Oracle Applications, but you're a business person - trying to close the quarter, get performance reviews processed, negotiate a new sourcing contract, etc. When "standards" come up in presentations and discussions, do you:

    - Nod your head politely
    - Tune out and check your smart phone
    - Turn to your IT counterpart and say "Bob's all over this standards thing, right Bob?"

    Here's why standards matter. My wife wants new external doors downstairs, ones that would get more light into the rooms. Am I OK with that? "Uhh, sure... it's a little dark in the kitchen."

    - 24 hours ago - wife calls to tell me that she's going to the hardware store and may look at doors
    - 20 hours ago - wife pulls into driveway, informs me that two doors are in the back of her station wagon, ready for me to carry
    - 19 hours ago - I re-discovered the fact that it's not fun to carry a solid wood door by myself
    - 5 hours ago - local handyman, who was at our house anyway, tells me that the doors we bought will likely cost 2-3x the material cost in installation time and labor... the doors are standard but our doorways aren't

    We could have done more research. I could be more handy. Sure. But the fact is, my 1951 house wasn't built with me in mind. They built what worked and called it a day. The same holds true with a lot of business applications: they were designed and architected for one-time use with one use-case in mind. Today's business climate is different. If you're going to use your processes and technology to differentiate your business, you should have at least a working knowledge of:

    - How standards can benefit your business
    - Your IT organization's philosophy around standards
    - Your vendor's track record around standards... and watch for those who pay lip service to standards but don't follow through

    The rallying cry in most IT organizations today is "learn more about the business, drop the acronyms." I'm not advocating that you go out and learn how to code in Java. But I do believe it will help your business and your decision-making process if you meet IT ½... even ¼ of the way there.

    Epilogue: The door project has been put on hold, and yours truly has to return the doors to the hardware store tomorrow.

    Read the article

  • Zenoss Setup for Windows Servers

    - by Jay Fox
    Recently I was saddled with standing up Zenoss for our enterprise. We're running about 1200 servers, so manually touching each box was not an option. We use LANDesk for a lot of automated installs and patching - more about that later. The steps below may not necessarily have to be completed in this order - it's just the way I did it.

    STEP ONE: Set up a standard AD user. We want to do this so there's minimal security exposure. Call the account whatever you want; "domain/zenoss" for our examples.

    STEP TWO: Make the following local groups accessible by your zenoss account:

        Distributed COM Users
        Performance Monitor Users
        Event Log Readers (which doesn't exist on pre-2008 machines)

    Here's the PowerShell script I used to set up access to these local groups:

        # Created to add an Active Directory account to local groups.
        # Must be run from an elevated prompt, with permissions on the remote machine(s).
        # The txt file should contain the names of the machines that need the account added, one per line.
        # The script will process the machines line by line.
        foreach($i in (gc c:\tmp\computers.txt)){
            # Add the user to the first group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Distributed COM Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the second group
            $objUser=[ADSI]("WinNT://domain/zenoss")
            $objGroup=[ADSI]("WinNT://$i/Performance Monitor Users")
            $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
            # Add the user to the third group - group doesn't exist on < Server 2008
            #$objUser=[ADSI]("WinNT://domain/zenoss")
            #$objGroup=[ADSI]("WinNT://$i/Event Log Readers")
            #$objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
        }

    STEP THREE: Set up security on the machines' WMI namespace so our domain/zenoss account can access it. The default namespace for Zenoss is root/cimv2. Here's the PowerShell script:

        # Grant the account defined below access to the WMI namespace.
        # Has to be run as an account with permissions on the remote machine.
        function get-sid {
            Param ($DSIdentity)
            $ID = new-object System.Security.Principal.NTAccount($DSIdentity)
            return $ID.Translate([System.Security.Principal.SecurityIdentifier]).toString()
        }
        $sid = get-sid "domain\zenoss"
        $SDDL = "A;;CCWP;;;$sid"
        $DCOMSDDL = "A;;CCDCRP;;;$sid"
        $computers = Get-Content "c:\tmp\computers.txt"
        foreach ($strcomputer in $computers) {
            $Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
            $DCOM = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction").uValue
            $security = Get-WmiObject -ComputerName $strcomputer -Namespace root/cimv2 -Class __SystemSecurity
            $converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
            $binarySD = @($null)
            $result = $security.PsBase.InvokeMethod("GetSD",$binarySD)
            $outsddl = $converter.BinarySDToSDDL($binarySD[0])
            $outDCOMSDDL = $converter.BinarySDToSDDL($DCOM)
            $newSDDL = $outsddl.SDDL += "(" + $SDDL + ")"
            $newDCOMSDDL = $outDCOMSDDL.SDDL += "(" + $DCOMSDDL + ")"
            $WMIbinarySD = $converter.SDDLToBinarySD($newSDDL)
            $WMIconvertedPermissions = ,$WMIbinarySD.BinarySD
            $DCOMbinarySD = $converter.SDDLToBinarySD($newDCOMSDDL)
            $DCOMconvertedPermissions = ,$DCOMbinarySD.BinarySD
            $result = $security.PsBase.InvokeMethod("SetSD",$WMIconvertedPermissions)
            $result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction",$DCOMbinarySD.binarySD)
        }

    STEP FOUR: Get the SID for our zenoss account:

        # Given an AD user, get the SID
        $objUser = New-Object System.Security.Principal.NTAccount("domain", "zenoss")
        $strSID = $objUser.Translate([System.Security.Principal.SecurityIdentifier])
        $strSID.Value

    STEP FIVE: Modify the Service Control Manager to allow access by the zenoss AD account. This command can be run from an elevated command line or through PowerShell:

        sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CCLCRPRC;;;PUT_YOUR_SID_HERE_FROM_STEP_FOUR)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"

    In step two the script plows through a txt file, processing each computer listed on each line. The other scripts I ran on each machine using LANDesk; you can probably edit them to process a text file as well. That's what got me off the ground monitoring the machines using Zenoss. Hopefully this is helpful for you. Watch the line breaks when copying the scripts.

    Read the article

  • Function like C# properties?

    - by alan2here
    I was directed here from SO as a better Stack Exchange site for this question. I've been thinking about the neatness and expressiveness of C# properties over functions, although they currently only work where no parameters are used, and wondered: is it possible, and if not, why not, to have a stand-alone function that works like a C# property? For example:

        public class test {
            private byte n = 4;

            public test() {
                func = 2;
                byte n2 = func;
                func;
            }

            private byte func {
                get { return n; }
                set { n = value; }
                func { n++; }
            }
        }

    Edit: Sorry for the vagueness first time round. I'm going to add some info and motivation. The 'n++' here is just a simple example, a placeholder; it's not intended to be representative of the actual code that would be used. I'm also looking at this from the point of view of the property construct as is, not in the context of using it for 'get_xyz' and 'set_xyz' member functions, which is certainly useful, but instead comparing it more abstractly to functions and other programmatic elements.

    A 'get' property can be used instead of a function that takes no parameters, and syntactically they are perhaps only aesthetically different, but as I see it noticeably nicer. However, properties also add the potential for an extra layer of polymorphism, one that relates to the 'func = 4;' setting, 'int n = func;' getting, or 'func;' function-like context in which they are used, as well as the more common parameter-based polymorphism. Potentially this allows for a lot of expression and contextual information regarding how others would use your functions. As uses and definitions would remain the same in many places, it shouldn't break existing code.

        private byte func {
            get { }
            get bool { }
            set { }
            func { }
            func(bool) { }
            func(byte, myType) { }
            // etc...
        }

    So a read-only function would look like this:

        private byte func {
            get { }
        }

    A normal function like this:

        private void func {
            func { }
        }

    A function with parameter polymorphism like this:

        private byte func {
            func(bool) { }
            func(byte, myType) { }
        }

    And a function that could return a value, or just compute, depending on the context in which it is used, and that also has more conventional parameter polymorphism as well, like so:

        private byte func {
            get { }
            func(bool) { }
            func(byte, myType) { }
        }

    Read the article

  • Nifty popup fails to register

    - by Snailer
    I'm new to Nifty GUI, so I'm following a tutorial for making popups. For now, I'm just trying to get a very basic "test" popup to show, but I get multiple errors, and none of them make much sense. To show a popup, I believe it is necessary to first have a Nifty screen already showing, which I do. Here is the ScreenController for the working Nifty screen:

        public class WorkingScreen extends AbstractAppState implements ScreenController {
            // Main is my jME SimpleApplication
            private Main app;
            private Nifty nifty;
            private Screen screen;

            public WorkingScreen() {}

            public void equip(String slotstr) {
                int slot = Integer.valueOf(slotstr);
                System.out.println("Equipping item in slot "+slot);
                // Here's where it STOPS working.
                app.getPlayer().registerPopupScreen(nifty);
                System.out.println("Registered new popup");
                Element ele = nifty.createPopup(app.getPlayer().POPUP);
                System.out.println("popup is " +ele);
                nifty.showPopup(nifty.getCurrentScreen(), ele.getId(), null);
            }

            @Override
            public void initialize(AppStateManager stateManager, Application app) {
                super.initialize(stateManager, app);
                this.app = (Main)app;
            }

            @Override
            public void update(float tpf) { /** jME update loop! */ }

            public void bind(Nifty nifty, Screen screen) {
                this.nifty = nifty;
                this.screen = screen;
            }
        }

    When I call equip(0) the system prints "Equipping item in slot 0", then a lot of errors and none of the subsequent println()'s. Clearly it botches somewhere in Player.registerPopupScreen(Nifty nifty). Here's the method:

        public final String POPUP = "Test Popup";

        public void registerPopupScreen(Nifty nifty) {
            System.out.println("Attempting new popup");
            PopupBuilder b = new PopupBuilder(POPUP) {{
                childLayoutCenter();
                backgroundColor("#000a");
                panel(new PanelBuilder() {{
                    id("List");
                    childLayoutCenter();
                    height(percentage(75));
                    width(percentage(50));
                    control(new ButtonBuilder("TestButton") {{
                        label("TestButton");
                        width("120px");
                        height("40px");
                        align(Align.Center);
                    }});
                }});
            }};
            System.out.println("PopupBuilder success.");
            b.registerPopup(nifty);
            System.out.println("Registerpopup success.");
        }

    Because that first println() doesn't show, it looks like this method isn't even called at all!

    Edit: After removing all calls on the Player object, the popup works. It seems I'm not "allowed" to access the Player from the ScreenController. Unfortunately, I need information on the player for the popup. Is there a workaround?
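    A guess at a workaround, for what it's worth: if the controller instance is created before its initialize() runs (for example when Nifty instantiates it from XML), the app field is still null when equip() fires, and any call through it dies with a NullPointerException before the next println(). A minimal sketch of constructor injection that sidesteps this; the XML filename and screen id are placeholders, and the remaining ScreenController methods are omitted for brevity:

        // Pass the application in explicitly instead of waiting for initialize().
        public class WorkingScreen extends AbstractAppState implements ScreenController {
            private final Main app;
            private Nifty nifty;
            public WorkingScreen(Main app) { this.app = app; }
            public void bind(Nifty nifty, Screen screen) { this.nifty = nifty; }
            // equip(...) can now safely reach app.getPlayer()
        }

        // When wiring Nifty, hand the controller instance over yourself:
        WorkingScreen controller = new WorkingScreen(app);
        nifty.fromXml("Interface/screen.xml", "start", controller);
        stateManager.attach(controller);

    Constructing and registering the controller yourself guarantees there is exactly one instance, the one holding a valid reference to the application (and through it, the Player).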

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but I don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to Github, choosing public-domain licensing. I would like it to be super simple for users to make use of: just FTP it up and go.

    That being said, do I need to make sure I remove things like the jQuery file and other GPL/MIT-licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code, and all of it is open source; it would just be nice if users could download everything at once, while of course not trying to represent that I am the license holder of the dependencies. Inside my files are also some snippets; do those have to be externalized with installation instructions, or can they be posted as is? Here is an example: my nav.php file is 115 lines long and I have these at the top:

        <script type="text/javascript" src="./js/ddaccordion.js">
        /***********************************************
        * Accordion Content script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com)
        * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts
        * This notice must stay intact for legal use
        ***********************************************/
        </script>
        <link href="css/admin.css" rel="stylesheet">
        <script type="text/javascript">
        ddaccordion.init({
            headerclass: "submenuheader", // Shared CSS class name of headers group
            contentclass: "submenu", // Shared CSS class name of contents group
            revealtype: "click", // Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover"
            mouseoverdelay: 200, // if revealtype="mouseover", set delay in milliseconds before header expands onMouseover
            collapseprev: false, // Collapse previous content (so only one open at any time)? true/false
            defaultexpanded: [], // index of content(s) open by default [index1, index2, etc] [] denotes no content
            onemustopen: false, // Specify whether at least one header should be open always (so never all headers closed)
            animatedefault: false, // Should contents open by default be animated into view?
            persiststate: true, // persist state of opened contents within browser session?
            toggleclass: ["", ""], // Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"]
            togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], // Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs)
            animatespeed: "fast", // speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow"
            oninit: function(headers, expandedindices) {
                // custom code to run when headers have initialized - do nothing
            },
            onopenclose: function(header, index, state, isuseractivated) {
                // custom code to run whenever a header is opened or closed - do nothing
            }
        })
        </script>

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon.

    At first I just converted the character's aim vector to radians and passed that into SpriteBatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) - the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into SpriteBatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal, the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix built from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45-degree rotation on the X axis). My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3D object warped by perspective? Orthographic perspective is what I'm going for, I think. So the aimer arrow would get longer when facing sideways and shorter when facing north and south because of the perspective, while at the same time getting wider when facing north and south and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3D because I'm still using SpriteBatch's layerDepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth-sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because SpriteBatch's Vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45-degree angle (around the X axis) from the viewing perspective.

    Alex
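    For what it's worth, under the 45-degree ground tilt described above the fake can be computed in closed form. Here is a sketch in plain Java (names are illustrative and the maths is engine-agnostic): project the aim direction and its ground-plane perpendicular through the tilt, take the on-screen rotation from the projected aim, and use the two projected lengths as the sprite's local length/width scale. It is only an approximation - a nonuniform scale plus rotation cannot reproduce the skew of a true oblique projection - but it captures the longer-sideways / wider-north-south behaviour the question asks for.

        // Rotation and approximate (length, width) scale for a ground-plane aimer
        // seen through a camera tilted 45 degrees around the X axis.
        // aimAngle is the aim direction on the XZ plane, in radians.
        static float[] aimerOnScreen(float aimAngle) {
            final float K = 0.707f;                          // cos 45°, the perspective factor
            float dx = (float) Math.cos(aimAngle);           // aim direction on the ground
            float dz = (float) Math.sin(aimAngle);
            float sx = dx, sy = dz * K;                      // projected aim direction
            float rotation = (float) Math.atan2(sy, sx);     // feed to the sprite draw call
            float lengthScale = (float) Math.hypot(sx, sy);  // 1 sideways, ~0.707 up/down
            float wx = -dz, wy = dx * K;                     // projected perpendicular (width)
            float widthScale = (float) Math.hypot(wx, wy);   // ~0.707 sideways, 1 up/down
            return new float[] { rotation, lengthScale, widthScale };
        }

    Mapping lengthScale and widthScale onto the sprite's local X and Y scale assumes the arrow texture points along its local X axis; if it points along Y, swap the two.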

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately, both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to migrate only some of the widgets from the legacy system, based on the date of last sale activity.

    Later on a new requirement came down: certain people in the company, most of them outside the widget-development context, want to search all widgets. The search results screen has three pieces of data: a GUID, a human-readable id that is searchable, and a brief description (which may need to be searchable in the future). In the widget details there will be multiple screens, and these screens align very well along SOA / bounded-context lines - a screen for marketing data, a screen for sales history, etc.

    The current solution - which is not in production yet - is that both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire: 15 search results typically run about 60 KB of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly.

    These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen, so I proposed projecting those events into a small, dedicated search database instead. However, I am being told that 'adding another database' will create more maintenance down the road. I believe this to be false, as I recently had to add a relatively simple feature that took several hours longer than anticipated because of the merging code.

    I want to get a feel for which design is more maintainable in the long run. I personally have not had the burden of maintaining any large system, and I want something more than my gut. Specifically, I'd like to know whether having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.
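    Since the search screen needs only three fields, the proposed alternative amounts to a small read model kept current from the events both systems already publish. Below is a hedged Java sketch of such a projection; the event shape, table name, and SQL dialect are all assumptions for illustration, not details from the post:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        // Flattens "widget changed" integration events into one search table.
        public final class WidgetSearchProjection {
            // Minimal assumed event shape; the post says the published event
            // already carries everything the search screen needs.
            public static final class WidgetChanged {
                public String guid, humanId, description;
            }

            private final Connection db; // the dedicated search database

            public WidgetSearchProjection(Connection db) { this.db = db; }

            // Called by whatever mechanism delivers the published events.
            public void on(WidgetChanged e) throws SQLException {
                // Upsert syntax varies by engine; this is PostgreSQL-style.
                String sql = "INSERT INTO widget_search (guid, human_id, description) "
                           + "VALUES (?, ?, ?) "
                           + "ON CONFLICT (guid) DO UPDATE SET "
                           + "human_id = EXCLUDED.human_id, description = EXCLUDED.description";
                try (PreparedStatement st = db.prepareStatement(sql)) {
                    st.setString(1, e.guid);
                    st.setString(2, e.humanId);
                    st.setString(3, e.description);
                    st.executeUpdate();
                }
            }
        }

    The search page then runs one indexed query against widget_search instead of merging two live result sets, which is exactly the maintainability trade being weighed here.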

    Read the article

  • Got Samba, Got PyNeighbourhood but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post on this before, but then I could only find it by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I don't know the right jargon; I'm just trying to get answers to my problem. Anyway, I have reworded things a bit.

    This seems to be a number-one requirement for lots of people, and two months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy - just a few clicks and you get on with using it all. Using all-command-line approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours of connecting to networks.

    So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places.

    Today I installed PyNeighbourhood, and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu; it even shows these under the XP workgroup name. But the instructions on setting up the connection seem to relate to an earlier version, and nothing seems to work there either. (I unshared most of those test folders, but they still show up here - that is another question.) When I click on mount - I can only click on one on the Ubuntu machine; there is one with no name, which I assume to be my attempt to add one XP shared drive using its IP address - I get errors:

        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        mount error(6): No such device or address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    OK, I tried to find the manual referred to - only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume so. Totally lost again. Please help: what else needs to be installed to connect to Windows PCs on the network?
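    Not an answer to the mount error itself, but a minimal known-good share definition is often the quickest sanity check for the XP-to-Ubuntu direction. The sketch below is illustrative - the share name, path, and user are examples, not taken from the question:

        # /etc/samba/smb.conf - minimal sketch; names and paths are examples
        [global]
           workgroup = WORKGROUP        # must match the XP machines' workgroup
           security = user

        [shared]
           path = /home/frank/shared
           read only = no
           browseable = yes

        # then, from a shell:
        #   sudo smbpasswd -a frank      # give the Unix user a Samba password
        #   sudo service smbd restart    # reload the config

    The smbpasswd step matters because Samba keeps its own password database; an XP machine logging in as "frank" is refused until that account exists on the Samba side too.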

    Read the article

  • Ask the Readers: Which Browser is a Must-Have for You on Linux?

    - by Asian Angel
    Linux systems all come with their own particular set of default browsers, but those browsers may not be the ones you want or need. This week we would like to know which browser (or browsers) you consider "must-have" on your Linux systems.

    As a general rule, many Linux distributions have Firefox and/or Konqueror among the default installation browsers. During this past year the open-source browser Chromium has also been gaining a lot of traction as a default install. For most people these browsers are the ones they like best or feel work well enough not to make any changes. But other people want more than what is available in a default system install. They may favor a particular browser for its extensibility or speed; others prefer a particular browser for its features or minimalist UI. Whatever your preferences may be, there is a browser out there to fit your style. Some people may even prefer to run only bleeding-edge nightly releases, or add them in alongside their current browsers. The important part is that you have choices when it comes to your Linux system.

    What we would like to know this week is which browser or browsers you make sure are always installed on your Linux systems. Does the Linux system you use already have your favorite browser installed as part of the default set? Maybe you are content with the default set of browsers that comes with the system. Or perhaps you prefer to rework the entire browser setup on your system by removing the defaults and adding your favorites. Let us know which browsers you consider "must have" and why in the comments!

    Note: You can make up to two selections on today's poll, since most people will likely have more than one browser that they make certain is always installed.

    Read the article

  • Feedback from SQLBits 8

    - by Peter Larsson
    This year's SQLBits took place in Brighton. Although I didn't have the opportunity to attend the full conference, I gave a presentation on Saturday.

    Getting to Brighton was easy. I drove to Copenhagen airport at 04:15, flew at 06:05 and arrived at Gatwick at 07:35. Then I took the direct train to Brighton and showed up at 08:30, just one hour before presenting. This was the easy part. Getting home was much worse: the presentation ended at 10:30 and I had to rush to the train station to get back to London, then change to the tube for Heathrow. I made it to the gate just 15 seconds before closing. That included a half-mile run in the airport...

    Anyway, yesterday I got the feedback for my presentation, and it looks good, especially since English is not my first language.

    The first graph puts my session just about halfway between the conference average and the best session. I can live with that. The second graph shows more detail about the attendees' voting, and it also looks acceptable: a wider spread for the 9s, but that is an inevitable effect of how attendees perceive a session. I got a lot of 8s, with the lower grades following in descending order. The two people voting 4 and 5 didn't say why, so I don't know how to remedy that. The third graph covers each category of votes. Again, I find this acceptable. Session abstract and Speaker's knowledge seem to follow attendees' expectations compared to the conference average, and I seem to have met (and then some) attendees' expectations for the other four categories, also compared to the conference average.

    Since this did encourage me, I believe I will present some more at future meetings. I have a new presentation about something all developers do every day, though they may not know it. I will also cover this new topic in the next Deep Dives II book. Stay tuned!

    //Peter

    Read the article

  • How to run software that is not offered through package managers and that requires ia32-libs

    - by Onno
    I'm trying to install the Arma 2 OA dedicated server in a VirtualBox VM so I can test my own missions in a sandbox environment, in a way that lets me offload them to another computer on my network. (The other computer is running the VM, but it's a Windows machine, and I didn't want to hassle with its installation.) The server needs at least 2, and preferably 4, GB of RAM, so I installed the amd64 version of Ubuntu 13.10.

    'How do you run a 32-bit program on a 64-bit version of Ubuntu?' already explains how to install 32-bit software through apt-get and/or dpkg, but that doesn't apply in this case. The server is offered as a compressed download on the site of BI Studio, the developer of the Arma games. Its installation instructions are obviously slightly out of date with the current state of the art (probably because the state of the art has been updated quite recently :) ). They state that I have to install ia32-libs, which has now apparently been deprecated, so I have to find out how to get the right packages installed to make sure it will run.

    My experience level is novice-to-intermediate when it comes to these issues. I've installed a lot of packages through apt-get and I've solved dependency issues in the past, but I haven't installed much software without using package managers. I can handle basic administrative work like editing conf files and such.

    I have gone ahead and tried to install it without ia32-libs, instead installing gcc to get the libs after all. My reasoning was that gcc would include the files for backward-compatibility coding, and that on Linux all libs are (as far as I can tell) installed at a system level in /libs. So far it seems to start up (I can connect to the game server through my in-game server browser, so it's communicating). I'm not sure whether any dependency checking goes on when running the game server program, so I'm left with a few questions:

    1. Does 13.10 catch calls to ia32-libs libraries and translate them to the right code on amd64?

    2. If it runs, does that mean all required libraries have been loaded correctly, or is there a chance of it crashing later when a library that is needed turns out to be missing after all?

    3. Is it necessary to do a workaround such as installing gcc?

    4. How do I find out what libraries I might need to run this software (or any other piece of 32-bit software that isn't offered through a package manager)? One common approach is sketched below.
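    For the record, on releases after the ia32-libs split the usual route is multiarch: enable the i386 architecture and pull 32-bit libraries in individually. A sketch follows; the two packages shown are the common starting set, not anything specific to Arma 2, and the binary name is a placeholder:

        # enable 32-bit packages alongside amd64
        sudo dpkg --add-architecture i386
        sudo apt-get update
        sudo apt-get install libc6:i386 libstdc++6:i386

        # question 4 - see what a 32-bit binary still can't resolve
        ldd ./server_binary | grep "not found"

    Each library that ldd reports as "not found" can then be installed as its :i386 package, which replaces the old install-everything ia32-libs approach with only what the program actually links against.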

    Read the article

  • How can I optimize Apache to use 1GB of RAM on my website? [closed]

    - by Markon
    My VPS plan gives me 1 GB of RAM, burstable to 2 GB. Since I can't count on using 2 GB, or even the full 1 GB, all day every day, I'm planning to optimize the performance of my web server.

    The average is about 8,000-10,000 hits per hour, which means about 2 connections per second. The maximum reached so far is about 60,000 hits per hour, which means about 16 connections per second. Unluckily, my current Apache configuration uses too much memory (when there are no connected clients - usually during the night - it uses about 1 GB), so I've tried to customize the Apache installation to fit my needs. I'm using Ubuntu, kernel 2.6.18, with apache2-mpm-worker, since I've read it requires less memory, and fcgid (+ PHP).

    This is my /etc/apache2/apache2.conf:

        Timeout 45
        KeepAlive on
        MaxKeepAliveRequests 100
        KeepAliveTimeout 10
        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            MaxClients 100
            MaxRequestsPerChild 0
        </IfModule>

    This is the output of ps aux for Apache:

        www-data  9547 0.0 0.3 423828 7268 ? Sl 20:09 0:00 /usr/sbin/apache2 -k start
        root     17714 0.0 0.1  76496 3712 ? Ss Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 17716 0.0 0.0  75560 2048 ? S  Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 17746 0.0 0.1  76228 2384 ? S  Feb05 0:00 /usr/sbin/apache2 -k start
        www-data 20126 0.0 0.3 424852 7588 ? Sl 19:24 0:02 /usr/sbin/apache2 -k start
        www-data 24260 0.0 0.3 424852 7580 ? Sl 19:42 0:01 /usr/sbin/apache2 -k start

    while this is ps aux for PHP5:

        www-data  7461 2.9 2.2 142172 47048 ? S 19:39 1:39 /usr/lib/cgi-bin/php5
        www-data 23845 1.3 1.7 135744 35948 ? S 20:17 0:15 /usr/lib/cgi-bin/php5
        www-data 23900 2.0 1.7 136692 36760 ? S 20:17 0:22 /usr/lib/cgi-bin/php5
        www-data 27907 2.0 2.0 142272 43432 ? S 20:00 0:43 /usr/lib/cgi-bin/php5
        www-data 27909 2.5 1.9 138092 40036 ? S 20:00 0:53 /usr/lib/cgi-bin/php5
        www-data 27993 2.4 2.2 142336 47192 ? S 20:01 0:50 /usr/lib/cgi-bin/php5
        www-data 27999 1.8 1.4 135932 31100 ? S 20:01 0:38 /usr/lib/cgi-bin/php5
        www-data 28230 2.6 1.9 143436 39956 ? S 20:01 0:54 /usr/lib/cgi-bin/php5
        www-data 30708 3.1 2.2 142508 46528 ? S 19:44 1:38 /usr/lib/cgi-bin/php5

    As you can see, it uses a lot of memory. How can I reduce it to fit into just 1 GB of RAM?

    PS: I'm also thinking about switching to nginx if Apache can't meet my needs...
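    One hedged starting point, since most of the memory in the ps output above belongs to the PHP processes rather than Apache itself: cap how many fcgid-spawned PHP processes can exist and recycle them, so the idle overnight footprint shrinks. The directive names are mod_fcgid's; the values are illustrative guesses that need load-testing, not prescriptions:

        <IfModule mod_fcgid.c>
            FcgidMaxProcesses 6              # hard cap on concurrent PHP processes (~45 MB each here)
            FcgidMaxRequestsPerProcess 500   # recycle each process to curb memory growth
            FcgidProcessLifeTime 120         # reap processes idle longer than 2 minutes
        </IfModule>

    With six 45 MB PHP workers plus the worker-MPM Apache processes, the steady-state footprint should land well under the 1 GB plan, at the cost of queueing PHP requests during the 16-connections-per-second peaks.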

    Read the article

  • C Minishell Command Expansion Printing Gibberish

    - by Optimus_Pwn
    I'm writing a Unix minishell in C and am at the point where I'm adding command expansion. What I mean by this is that I can nest commands in other commands, for example:

        $> echo hello $(echo world! ... $(echo and stuff))
        hello world! ... and stuff

    I think I have it mostly working; however, it isn't marking the end of the expanded string correctly. For example:

        $> echo a $(echo b $(echo c))
        a b c
        $> echo d $(echo e)
        d e c

    See, it prints the c even though I didn't ask it to. Here is my code:

        msh.c - http://pastebin.com/sd6DZYwB
        expand.c - http://pastebin.com/uLqvFGPw

    I have more code, but there's a lot of it, and these are the parts I'm having trouble with at the moment. I'll try to describe the basic way I'm doing this.

    main is in msh.c. There it gets a line of input from either the command line or a shell file and then calls processline(char *line, int outFD, int waitFlag), where line is the line we just got, outFD is the file descriptor of the output file, and waitFlag tells us whether or not we should wait if we fork. When we call this from main we do it like this:

        processline(buffer, 1, 1);

    In processline, we allocate a new line:

        char expanded_line[EXPANDEDLEN];

    We then call expand, in expand.c:

        expand(line, expanded_line, EXPANDEDLEN);

    In expand, we copy the characters literally from line to expanded_line until we find a $(, which then calls:

        static int expCmdOutput(char *orig, char *new, int *oldl_ind, int *newl_ind)

    orig is line and new is expanded_line. oldl_ind and newl_ind are the current positions in the line and the expanded line, respectively. Then we pipe and recursively call processline, passing it the nested command (for example, if we had "echo a $(echo b)", we would pass processline "echo b").

    This is where I get confused: each time expand is called, is it allocating a new chunk of memory EXPANDEDLEN long? If so, that is bad, because I'll run out of stack room really quickly (in the case of a hugely nested command-line input). And in expand I insert a null character at the end of the expanded string, so why is it printing past it?

    If you guys need any more code or explanations, just ask. I put the code in Pastebin because there's a ton of it, and in my experience people don't like it when I fill up several pages with code. Thanks.

    Read the article

  • Nautilus ignores / misinterprets view size

    - by BlueZero4
    I noticed that a lot of my folders had suddenly switched to higher view sizes than I had specified. I assumed that somehow Nautilus had suddenly decided to create per-folder entries for those folders with incorrect view sizes. So I found this question: How to reset all per-folder view settings in nautilus?

    I found the folder specified in the answer (~/.local/share/gvfs-metadata) and found that it was actually important to delete the files INSIDE the folder, because for some reason deleting the folder itself didn't work. After doing that, I discovered that the odd setting was in the default view settings, not in a handful of per-folder files. Nautilus actually handles the per-folder settings like it should, but it ignores the global folder settings.

    I want Nautilus to display all non-specified folders as compact view at 50% by default. My folders are using the compact setting like I want, but they are not down to 50%; at a guess, they are at 100%. Altering the view size of the icon view can set the compact view to 33%, but I'm not sure by what mechanism that works. I haven't extensively tested the other view sizes because I don't plan on using them much at all.

    Next I looked up questions like How do I reset nautilus to the default configuration? I expected the problem to be a corrupted config file or something of the sort, so I hunted down directories like ~/.nautilus, ~/.gconf/apps/nautilus, and ~/.gnome2/nautilus. (I don't have a ~/.nautilus directory, so I'm assuming that's only for older versions.) I attempted to remove the contents of each, but I can't seem to force Nautilus back to the default configuration settings. Oddly, viewing Nautilus's preferences in GConf made the settings look like they were what I wanted them to be.

    Basically, I'd like to force Nautilus back to default settings, though if something else will fix it, I'll take that too. I'm not interested in doing a full uninstall and reinstall of Nautilus if I don't have to.

    ==EDIT1==
    It turns out that Nautilus just writes the settings to GConf for the heck of it; it only really uses the settings it stores in DConf. I did gsettings reset-recursively org.gnome.nautilus, which actually did reset Nautilus to defaults, but it still doesn't respect my view size settings.
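    If it helps anyone hitting the same wall: since Nautilus reads its defaults from DConf, they can also be set directly with gsettings. The schema and key names below are assumptions for this generation of Nautilus, so verify them first with the list command:

        # list every Nautilus key/value pair to confirm the names on your build
        gsettings list-recursively org.gnome.nautilus

        # assumed keys: make compact view the default at roughly 50% zoom
        gsettings set org.gnome.nautilus.preferences default-folder-viewer 'compact-view'
        gsettings set org.gnome.nautilus.compact-view default-zoom-level 'smaller'

    If the second schema doesn't exist on your build, the list-recursively output should show where the compact view's zoom level actually lives.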

    Read the article

  • Is my work on a developer test being taken advantage of?

    - by CodeWarrior
    I am looking for a job and have applied to a number of positions. One of them responded: I had a pretty lengthy phone interview (perhaps an hour plus), and they then set me up with a developer test. I was told that this test was estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored.

    The developer test took place on a VM accessed via RDP. The task was to implement a search page in a web project that requests data from the server and displays it on the screen in a table. It has a pretty complicated search filtering scheme (there are about 15 statuses, and when sending the search to the server you can search by these statuses) in addition to the string/field search. They want some SVG icons to change color on certain data values, they want some data to be represented differently than how it is in the database, etc. Loooong story short, this took one heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the ginormous 3 GB solution).

    After completing it, I was told to commit my changes to source control... Hmm, OK. I get an email back saying they thought the SVGs could have their color changed differently, they found a bug in this edge case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now, and I have to do bug fixes. I do them, and they come back with more. This is all apparently going into a production application. I also noticed some anomalies in the code that was already in there: it looked like other people had each coded a single piece of functionality and nothing else that I could find.

    Am I just being used for cheap labor? Even if they pay me the promised 50 dollars an hour for 6 hours, I have committed about 18 hours to this thing now. If I fix all of the bugs they keep coming up with, I will have worked at least 16 hours for free.

    I have taken a number of developer tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their developer test to field, on the cheap, some of the functionality they don't have time for in their normal team.

    Also, I wouldn't mind a 'devtest' tag.

    Read the article

  • To make or not to make...python-nautilus a dependency?

    - by George Edison
    That is the question! Okay, all silliness aside, I really am forced to make a difficult decision here. My application is written in C++ and allows other scripts to invoke methods via XML-RPC. One of these scripts is a Nautilus extension written in Python. The extension is packaged with the rest of the application and copied to the appropriate place when installed (/usr/share/nautilus-python/extensions). Now the problem is that the Nautilus extension requires the python-nautilus package to be installed to be operational. I therefore have four options: 1. Make the python-nautilus package a dependency. This option ensures that anyone who installs my package will be able to use the Nautilus extension. However, it will not be attractive to XFCE or KDE users - a ton of python-nautilus's dependencies will be installed on their machines and take up a lot of space, even if they never use Nautilus. 2. Put the python-nautilus package in the Suggests: or Recommends: field. This option provides the end user with a way to avoid installing the python-nautilus package (by passing the --no-install-suggests or --no-install-recommends argument to apt-get). However, this won't work when the user installs the package in the Software Center. (I always get mixed up as to which of those two fields is installed by default.) 3. Prompt the user when the application is installed or first launched. This option is more complicated than the others but offers the best compromise between making it easy for the user to install python-nautilus (without going into a technical explanation) and not installing it when the user doesn't need it (or want it). I guess the best way to implement this is a simple prompt that invokes apt-get if the user would like the package installed. 4. Don't install the package at all. This option ensures that nobody has python-nautilus installed on their machine unless they want it. However, it also means that my Nautilus extension simply won't run on the end user's machine unless they manually install the package. Which of these options seems the best choice? Have I missed any pros and cons for each of the options?
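    For comparison, a minimal sketch of what option 2 might look like in debian/control; the package name and description are illustrative. (For the record, apt-get and the Software Center install Recommends: by default but not Suggests:, so Recommends: gives Nautilus users the extension out of the box while remaining removable.)

        Package: myapp
        Architecture: any
        Depends: ${shlibs:Depends}, ${misc:Depends}
        Recommends: python-nautilus
        Description: example application with an optional Nautilus extension
         The bundled Nautilus extension only works when python-nautilus is
         installed; KDE and XFCE users can skip or remove it.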

    Read the article

  • AWS EC2 Oracle RDB - Storing and managing my data

    - by llaszews
    When creating an Oracle database on the Amazon cloud, you will need to store your database files somewhere on the EC2 cloud. There are basically three places where database files can be stored: 1. Local drive - the local drive that is part of the virtual server EC2 instance. 2. Elastic Block Storage (EBS) - network-attached storage that appears as a local drive. 3. Simple Storage Service (S3) - 'storage for the Internet'. S3 is not high speed and is intended for storing static document-type files; it can also be used for storing static web page files. Local drives are ephemeral, so they are not appropriate as a database storage device. That leaves EBS as the best place to store database files. EBS volumes appear as local disk drives, but they are actually network-attached to an Amazon EC2 instance. In addition, EBS persists independently from the running life of a single Amazon EC2 instance. If you use an EBS-backed instance for your database data, it will remain available after a reboot but not after the instance is terminated. In many cases you would not need to terminate your instance but only stop it, which is the equivalent of a shutdown. To save your database data before you terminate an instance, you can snapshot the EBS volume to S3. Using EBS as a data store, you can move your Oracle data files from one instance to another, which allows you to move your database from one region or availability zone to another. Unfortunately, to scale out Oracle RDS on AWS you cannot have read-only replicas; this is only possible with the other Oracle relational database - MySQL. The free micro instances use EBS as their storage. This is a very good white paper with more details: AWS Storage Options. The white paper also discusses SQS, SimpleDB, and Amazon RDS in the context of storage devices; however, these are not storage devices you would use to store an Oracle database. This slide deck covers much of the same material as the white paper: AWS Storage Options slideshow
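    As an illustration of the snapshot-and-move workflow, here is a hedged sketch using the AWS command line; the volume, snapshot, and instance IDs are placeholders:

        # Snapshot the EBS volume holding the Oracle data files; the
        # snapshot is persisted to S3 behind the scenes:
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
            --description "Oracle data file volume backup"

        # To move the database, create a new volume from the snapshot in
        # the target availability zone and attach it to another instance:
        aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
            --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
            --instance-id i-0123456789abcdef0 --device /dev/sdf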

    Read the article

  • Problems uploading package to launchpad

    - by user74513
    I'm having a lot of problems uploading my showdown project to a PPA. I've correctly set up my PGP keys and my public SSH key on Launchpad. I've packaged my C++ project with debuild, producing a source package. Lintian gave me only these two warnings, which I think are acceptable under the showdown rules: W: massren source: native-package-with-dash-version W: massren source: binary-nmu-debian-revision-in-source 1.0-0extras12.04.1~ppa2 Producing a binary package works too, and the package installs without problems on my Ubuntu 12.04 machine; I only get a few more lintian warnings about the fact that I'm installing into /opt/extras.ubuntu.com/. I'm uploading with: dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes When I upload with dput I get no errors, the signatures seem OK, and the public key seems to be accepted too (since the upload proceeds without asking for passwords...): dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes Checking signature on .changes gpg: Signature made Mon 02 Jul 2012 10:00:38 AM CEST using RSA key ID 49982576 gpg: Good signature from "Gabriele Greco " Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2_source.changes. Checking signature on .dsc gpg: Signature made Mon 02 Jul 2012 10:00:33 AM CEST using RSA key ID 49982576 gpg: Good signature from "Gabriele Greco " Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2.dsc. Uploading to ppa (via ftp to ppa.launchpad.net): Uploading massren_1.0-0extras12.04.1~ppa2.dsc: done. Uploading massren_1.0-0extras12.04.1~ppa2.tar.gz: done. Uploading massren_1.0-0extras12.04.1~ppa2_source.changes: done. Successfully uploaded packages. At the moment I'm not receiving any response from the Launchpad site, and the upload does not show up on the PPA page. Previous attempts gave me response e-mails with different kinds of errors: File massren_1.0-0extras12.04.1~ppa1.tar.gz mentioned in the changes has a checksum mismatch. 1503fa155226cbc4aba2f8ba9aa11a75 != 294a5e0caf3fe95b0b007a10766e9672 Or, more cryptically: GPG verification of /srv/launchpad.net/ppa-queue/incoming/upload-ftp-20120629-163320-001135/~gabrielegreco/massren/ubuntu/massren_1.0-0extras12.04.1~ppa1.dsc failed: Verification failed 3 times: ["(7, 58, u'No data')", "(7, 58, u'No data')", "(7, 58, u'No data')"] Further error processing not possible because of a critical previous error. Any idea how I can solve this problem? I'm new to Ubuntu packaging, so I may be missing a step... Is there an alternative to dput (aka a manual upload)?
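    One common remedy for the checksum-mismatch rejection: Launchpad will never accept a file whose contents differ from a previously uploaded file of the same name and version, so the fix is to bump the PPA suffix and rebuild the source package before re-uploading. A sketch, with the version string illustrative:

        # Bump the version so Launchpad treats this as a fresh upload:
        dch -v 1.0-0extras12.04.1~ppa3 "No-change rebuild for PPA re-upload"

        # Rebuild the signed source package, forcing the orig tarball in:
        debuild -S -sa

        # If dput refuses because it remembers the earlier attempt, remove
        # its log file (or pass -f to dput) and upload again:
        rm -f ../massren_1.0-0extras12.04.1~ppa3_source.ppa.upload
        dput ppa:gabrielegreco/massren ../massren_1.0-0extras12.04.1~ppa3_source.changes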

    Read the article
