Search Results

Search found 31491 results on 1260 pages for 'simple talk'.

Page 705/1260

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: as part of a bigger product, my team is asked to build a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put on the wall a number of user stories regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5.

    Coming to the problem: the team then moved to the user stories regarding the other libraries (10 stories in all) and added those 2 points for the "function call mechanism" to each of those user stories. This immediately raised the total points for the product by 20 points! Everyone in the team knows that any user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that part in one user story, but those 20 points feel so awfully unrealistic!

    I've proposed a solution, but I'm absolutely not satisfied with it: we created a "Design story" and put those annoying 2 points on it. However, when we came to implement and demonstrate it to our customers, we were unable to show them anything really valuable about that story! The problem here is whether we should ignore the principle of having isolated user stories (without any dependency between them). What would you do, or even better, what have you done, in situations like this?

    (A small footnote: following a suggestion, I've moved this question from Stack Overflow.)

    Read the article

  • True column-mode (block-selection and editing) text editor solution?

    - by tamale
    In Windows, I used to use a text editor called Crimson Editor which featured the best column-mode editing support I have ever used. When enabled via a simple Alt-C shortcut, selections could be made with the mouse or cursor keys and they would be visual blocks rather than wrapped lines. These selections could be deleted, moved, copied and pasted, and all of the operations just made sense. You could also just start typing, and you'd get a column of the characters as you typed. There are multiple ways of getting parts of these features working separately, discussed on this forum thread, but no one has yet provided a solution that offers this all-encompassing and easy-to-use behaviour. If someone could point me to a gedit plugin where this work is actively being pursued, perhaps I could help with the coding myself. If someone is aware of a text editor that already provides this full functionality, I'd appreciate the info. Running Crimson Editor through Wine and the close-but-not-quite multi-edit plugin for gedit are the temporary solutions I'm 'getting by with' for the time being.

    Read the article

  • Kubuntu never gets to the login screen

    - by searchfgold6789
    I am not sure if Xorg is using the right device... The system in question is an A4-1400 CPU with built-in Radeon graphics. I added a PCIe Radeon HD 5750 and changed the BIOS so it would use that card automatically. Now it appears that I can see output from the TV plugged into the HDMI port and boot into recovery mode, but I cannot get to a desktop - X doesn't start. It just freezes at the Kubuntu splash screen. I have no idea how to get X to use the proper graphics card. I pasted the Xorg.log to http://paste.ubuntu.com/6261137/, and the Xorg.conf generated by aticonfig --initial is at http://paste.ubuntu.com/6261161/. Also, see the lspci and lshw -c display outputs. I can provide more information, but something tells me X is not using the right graphics card, and I've no idea where to begin fixing it... I faced a similar issue before with another distribution but found no information anywhere on how to make X use a specific graphics card. Or maybe it's not that simple.

    Read the article

  • New functionality in TFS Build Manager – Managing Triggers and Build Resources

    - by Jakob Ehn
    Yesterday we pushed out a new release (August 2012) of the Community TFS Build Extension, including a new version of the Community TFS Build Manager (1.0.4.6). The two big new features in the Build Manager in this release are:

    Set Triggers

    It is now possible to select one or more build definitions and update the triggers for them in one simple operation. You'll note that we have started collapsing the context menu a bit; the list of commands is getting long! When selecting the Trigger command, you'll see a dialog where the options should be self-explanatory. The only thing missing here is the Scheduled trigger option; you'll have to do that using Team Explorer for now.

    Manage Build Resources

    The other feature is that it is now possible to view the build controllers and agents in your current collection and also perform some actions against them. The new functionality is available by selecting the Build Resources item in the drop-down menu. Selecting this, you'll see a (sort of) hierarchical view of the build controllers and their agents. In this view you can quickly see all the resources and their status. You can also view the build directory of each build agent and the tags that are associated with them. On the action menu, you can enable and disable both agents and controllers (several at a time), and you can also select to remove them. By selecting Manage, you'll be presented with the standard Manage Controller dialog from Visual Studio where you can set the rest of the properties. Hopefully we'll be able to implement most of the existing functionality so that we can remove that menu option. Our plan is to add more functionality to this view, such as adding new agents/controllers, restarting build service hosts, and maybe viewing diagnostic information such as disk space and error logs.

    Hope you'll find the new functionality useful. Remember to log any bugs and feature requests on the CodePlex site. Happy building!

    Read the article

  • Artificial Intelligence ... how to make an object roam freely/avoid other objects, and model consciousness? [on hold]

    - by help bonafide pigeons
    Say a simple free-roam battle scene in which a player runs around freely and engages in battle with other enemies/objects, as shown below: the dragon/dinosaur (or whatever that thing I drew appears to be) will, by some measure, try to avoid attacks, so it is modeled to appear to have a conscious desire to avoid pain. My question is... since this is very complex, with many possible strategies, algorithms, etc. for solving it, what is the basic idea behind how this would be accomplished in any form? We can assume the enemy in the picture is not just going to aimlessly hop around and dodge, but is modeled to behave as if it were really exploring/fighting. For the best example I can give, witness the behavior of the enemies in Final Fantasy 12 in this video: http://www.youtube.com/watch?v=mO0TkmhiQ6w How do the pros, or how would anyone, attempt to solve/implement this? PS: I have tried several times to give an image the "illusion" that it has a consciousness, but short of fully emulating a real animal's consciousness, I fall short and get choppy moving images that follow predictable patterns, error-prone movements, and the worst imaginable scenario of a battle engagement.

    Read the article

  • Checking validation of entries in a Sudoku game written in Java

    - by Mico0
    I'm building a simple Sudoku game in Java which is based on a matrix (an array[9][9]) and I need to validate my board state according to these rules: all rows have the digits 1-9, all columns have the digits 1-9, and each 3x3 grid has the digits 1-9. This function should be as efficient as possible; for example, if the first check fails, I believe there's no need to check the other cases, and so on (correct me if I'm wrong). When I tried doing this I hit a conflict: should I do one large for loop and check columns and rows inside it (in two other loops), or should I do each test separately and verify each case on its own? (Please don't suggest too advanced solutions with other class/object helpers.) This is what I thought about.

    Main validating function (which I want pretty clean):

        public boolean testBoard() {
            boolean isBoardValid = false;
            if (validRows()) {
                if (validColumns()) {
                    if (validCube()) {
                        isBoardValid = true;
                    }
                }
            }
            return isBoardValid;
        }

    Different methods do the specific tests, such as:

        private boolean validRows() {
            int rowsDigitsCount = 0;
            for (int num = 1; num <= 9; num++) {
                boolean foundDigit = false;
                for (int row = 0; (row < board.length) && (!foundDigit); row++) {
                    for (int col = 0; col < board[row].length; col++) {
                        if (board[row][col] == num) {
                            rowsDigitsCount++;
                            foundDigit = true;
                            break;
                        }
                    }
                }
            }
            return rowsDigitsCount == 9 ? true : false;
        }

    I don't know if I should keep doing the tests separately, because it looks like I'm duplicating my code.
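
    A common way to keep the validation both efficient and free of duplicated loops (not taken from the question itself) is a single pass over the matrix with "seen" flags per row, column and 3x3 box, bailing out on the first conflict. A minimal sketch, assuming the board is an int[9][9] and that 0, if used, means an empty cell:

        // One pass over the board; a repeated digit in any row, column or
        // 3x3 box invalidates the whole board, so we return false immediately.
        public boolean testBoard(int[][] board) {
            boolean[][] rowSeen = new boolean[9][10];
            boolean[][] colSeen = new boolean[9][10];
            boolean[][] boxSeen = new boolean[9][10];
            for (int row = 0; row < 9; row++) {
                for (int col = 0; col < 9; col++) {
                    int digit = board[row][col];
                    if (digit == 0) {
                        continue; // empty cell (assumption), nothing to validate
                    }
                    int box = (row / 3) * 3 + (col / 3);
                    if (rowSeen[row][digit] || colSeen[col][digit] || boxSeen[box][digit]) {
                        return false; // fail fast, as the question asks for
                    }
                    rowSeen[row][digit] = true;
                    colSeen[col][digit] = true;
                    boxSeen[box][digit] = true;
                }
            }
            return true;
        }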

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1.

    Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema, depending upon the credentials and authorization of the user. This helps in prototyping your application, where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. It is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature will allow your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to occur into DDL scripts, which can then be further tuned by a DBA.

    The following set of properties in persistence.xml, or specified during EntityManagerFactory creation, controls the behaviour of schema generation:

    - javax.persistence.schema-generation-action: controls the action to be taken by the persistence provider. Values: "none", "create", "drop-and-create", "drop".
    - javax.persistence.schema-generation-target: controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both. Values: "database", "scripts", "database-and-scripts".
    - javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target: control the target locations for writing of scripts. Writers are pre-configured for the persistence provider; need to be specified only if scripts are to be generated. Values: java.io.Writer (e.g. MyWriter.class) or URL strings.
    - javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source: specify the locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider. Values: java.io.Reader (e.g. MyReader.class) or URL strings.
    - javax.persistence.sql-load-script-source: specifies the location of an SQL bulk load script. Values: java.io.Reader (e.g. MyReader.class) or URL string.
    - javax.persistence.schema-generation-connection: the JDBC connection to be used for schema generation.
    - javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version: needed if scripts are to be generated and there is no connection to the target database. Values are those obtained from JDBC DatabaseMetaData.
    - javax.persistence.create-database-schemas: whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc. Values: "true", "false".

    Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. The following entity class:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            . . .
            @ManyToOne
            private Department dept;
        }

    is generated in the database with the following attributes:

    - Maps to the EMPLOYEE table in the default schema.
    - The "id" field is mapped to the ID column as primary key.
    - "name" is mapped to the NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column.
    - @ManyToOne is mapped to the DEPT_ID foreign key column. Can be customized using @JoinColumn.

    In addition to these properties, a couple of new annotations are added in JPA 2.1:

    @Index - An index for the primary key is generated by default in a database. This new annotation allows defining additional indexes, over a single column or multiple columns, for better performance. It is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example:

        @Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})
        @Entity
        public class Employee {
            . . .
        }

    The generated table will have a default index on the primary key. In addition, two new indexes are defined on the NAME column (default ascending) and on the foreign key that maps to the department, in descending order.

    @ForeignKey - It is used to define a foreign key constraint, or to otherwise override or disable the persistence provider's default foreign key definition. It can be specified as part of JoinColumn(s), MapKeyJoinColumn(s), and PrimaryKeyJoinColumn(s). For example:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            @ManyToOne
            @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))
            private Manager manager;
            . . .
        }

    In this entity, the employee's manager is mapped by the MANAGER_ID column in the MANAGER table. The value of foreignKeyDefinition would be a database-specific string.

    A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. The JPA 2.1 Expert Group has released Early Draft 2 of the specification. Sections 9.4 and 11.2 provide all the details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.
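
    For illustration, here is a minimal sketch of triggering schema generation by passing the draft property names listed above when creating an EntityManagerFactory. The persistence unit name "myPU", the file URLs and the chosen values are assumptions for this sketch, and the property names may change in the final specification:

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class SchemaGenerationDemo {
            public static void main(String[] args) {
                // Draft JPA 2.1 schema-generation properties as described above.
                Map<String, Object> props = new HashMap<String, Object>();
                props.put("javax.persistence.schema-generation-action", "drop-and-create");
                props.put("javax.persistence.schema-generation-target", "database-and-scripts");
                props.put("javax.persistence.ddl-create-script-target", "file:///tmp/create.sql");
                props.put("javax.persistence.ddl-drop-script-target", "file:///tmp/drop.sql");

                // Creating the factory performs the schema generation configured above.
                EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU", props);
                emf.close();
            }
        }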

    Read the article

  • Create Adjustable Depth of Field Photos with a DSLR

    - by Jason Fitzpatrick
    If you’re fascinated by the Lytro camera - a camera that lets you change the focus after you’ve taken the photo - this DSLR hack provides similar post-photo focus processing without the $400 price tag. Photography tinkerers at The Chaos Collective came up with a clever way of mimicking the adjustable depth-of-field effect of the Lytro camera. The secret sauce in their technique is setting the camera to manual focus and capturing a short 2-3 second video clip while they rotate the focus through the entire focal range. From there, they use a simple applet to separate out each frame of the video. Check out the interactive demo below: anywhere you click in the photo shifts the focus to that point, just like the post processing in the Lytro camera. It’s a different approach to the problem but it yields roughly the same output. Hit up the link below for the full run-down on their technique and how you can get started using it with your own video-enabled DSLR. Camera HACK: DOF-Changeable Photos with an SLR [via Hack A Day]

    Read the article

  • Could a singleton type replace static methods and classes?

    - by MKO
    In C#, static methods have long served a purpose, allowing us to call them without instantiating classes. Only in later years have we become more aware of the problems of using static methods and classes: they can’t use interfaces, they can’t use inheritance, and they are hard to test because you can’t make mocks and stubs. Is there a better way? Obviously we need to be able to access library methods without instantiating classes all the time, otherwise our code would become pretty cluttered.

    One possible solution is to use a new keyword for an old concept: the singleton. Singletons are global instances of a class; since they are instances, we can use them as we would normal classes. In order to make their use nice and practical we'd need some syntactic sugar, however. Say that the Math class were of type singleton instead of an actual class. The actual class containing all the default methods for the Math singleton is DefaultMath, which implements the interface IMath. The singleton would be declared as:

        singleton Math : IMath {
            public Math {
                this = new DefaultMath();
            }
        }

    If we wanted to substitute our own class for all math operations, we could make a new class MyMath that inherits DefaultMath, or we could just inherit from the interface IMath and create a whole new class. To make our class the active Math class, you'd do a simple assignment:

        Math = new MyMath();

    and voilà! The next time we call Math.Floor it will call your method. Note that for a normal singleton we'd have to write something like Math.Instance.Floor, but the compiler eliminates the need for the Instance property. Another idea would be to be able to define singletons as lazy, so they get instantiated only when they're first called, like:

        lazy singleton Math : IMath

    What do you think - would it have been a better solution than static methods and classes? Are there any problems with this approach?
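
    For comparison (this is not from the question), the effect the proposed sugar is after can be approximated today with a swappable static instance behind delegating static methods. A minimal sketch in Java, reusing the IMath/DefaultMath/MyMath names from the question; everything else is assumed:

        interface IMath {
            double floor(double value);
        }

        class DefaultMath implements IMath {
            public double floor(double value) { return java.lang.Math.floor(value); }
        }

        class MyMath extends DefaultMath {
            @Override
            public double floor(double value) {
                // custom behaviour would go here
                return super.floor(value);
            }
        }

        final class Math {
            private static IMath instance = new DefaultMath();

            // The question's "Math = new MyMath()" assignment becomes:
            static void setInstance(IMath impl) { instance = impl; }

            // Delegation hides the ".Instance" the compiler would otherwise eliminate:
            static double floor(double value) { return instance.floor(value); }
        }

    After Math.setInstance(new MyMath()), a call to Math.floor(1.5) dispatches to MyMath, which is roughly what the proposed singleton keyword would generate for you.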

    Read the article

  • How to Downgrade Razor 3 and fix the issue that CSHTML does not work in VS10/12?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx

    A few days ago I migrated a project to MVC 4 and suddenly saw that the MVC project’s cshtml files were no longer working. The problem happens because my project is now based on the Razor 3 RC and VS12 doesn’t support it yet. (Remember that the VS team will ship support in VS update 4.) My migration updated it to Razor 3 (which is not related to MVC 4; MVC 4 uses the older Razor 2).

    So how to fix the problem? Since VS update 4 is still in development and MVC 3 support exists in both old versions of VS (10, 12), it is better to migrate our Razor back to the old version so we can use our project in VS 10 or 12. If your project has Razor 3 and syntax highlighting doesn’t seem to work for you, I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4

    Remember that this may not succeed at first. What you need to do is delete the packages folder in your project, then open packages.config and remove all the package entries. Now run this command:

        PM> Install-Package UpgradeMvc3ToMvc4

    If this fails, see what causes the error in the console; simply remove the offending reference and try again. Now run it and see that it will work. After running this you will see that the WebGrease DLL has a version number issue. Simply update it to version 1.5.2 and your project is ready to run on .NET 4. If you do a bin deployment, you don’t need to have MVC 4 installed on the server either.

    Remember that MVC 5 is based on .NET 4.5, which simply means you can’t run it in VS10. Until VS12 update 4, MVC 5 cshtml pages will behave like plain html pages (no syntax highlighting or IntelliSense). Thanks for reading my post.

    Read the article

  • The better way to ask for input?

    - by Skippy
    I am wondering which is the best way to go with Java code. I need to create a class with simple prompts for input. I have tried using both classes and cannot work out the particular benefits of each. Is this because I am still in the early stages of programming, or are there situations that will occur as things become more complex?

        import java.util.Scanner;

        public class myClass {
            Scanner stdin = new Scanner(System.in);

            public String getInput(String prompt) {
                System.out.print(prompt);
                return stdin.nextLine();
            }
        }

    ... or:

        import java.io.*;

        public class myClass {
            public static void main(String[] args) throws IOException {
                BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
                System.out.print("Input something: ");
                String name = stdin.readLine();
            }
        }

    I know these examples are showing different methods within these classes, but I thought this might serve well for the discussion. I'm really not sure which site is the best to ask this on.
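
    For what it's worth, a minimal usage sketch for the first (Scanner-based) class; the demo class name and prompt text here are assumptions for illustration:

        public class MyClassDemo {
            public static void main(String[] args) {
                myClass reader = new myClass();
                String name = reader.getInput("Input something: ");
                System.out.println("You typed: " + name);
            }
        }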

    Read the article

  • How to encourage domain experts familiar only with C into a C++ opensource project [closed]

    - by paperjam
    Possible Duplicate: How to persuade C fanatics to work on my C++ open source project? I am launching an open-source project into a space where a lot of development is done Linux-kernel-style, i.e. C-language with a low-level mindset. My project is broad and complex and uses aspects of the C++ language and libraries, including the Boost library to best effect for simple, slightly syntactically sweetened, elegant and well structured high level code. We are using C++ templates too to avoid duplication of code and for static polymorphism in code specialisation for performance. Many of the experts in this field are well used to pure C-language projects. How can I persuade them to contribute to my idiomatic C++ based project? I have no objection to C-language subcomponents or the use of a C-like subset for parts of the project so that might be part of the answer. This is a rewritten and retagged rehash of my previous question that was closed. Apologies to those who read and answered for it not being constructive. I hope this new question is viewed as constructive. Please note that this is not a language advocacy question and please keep answers in that spirit.

    Read the article

  • From release to business

    - by geneotech
    So let's say that I've finished programming a simple, indie MMO game similar to Tibia. I've got a stable server application that is ready to launch, I've got a tested, bug-free, working client application that is ready to play, and the game's official website (ready to host) with a payment system and a client that is ready to download for free. Let's say none of them break copyright laws, and no matter how impossible it sounds, let's for now say it's true. My game divides accounts into two groups - free and premium. If someone gets premium, he's granted access to all possible game features, which, of course, need server authorisation to work properly. Let's say that the "premium account" can be bought on the website for a fixed amount of money per month. Free accounts mean that everyone can actually play, but without paying you get limited access. This is what the mentioned payment system will be for. Well, I'm a complete novice at these business-entity issues, so in short: what, in terms of law, are the steps from here to the state where my game earns money in a fully legal way? Also, is there, for example, some kind of verification that the game actually gives the user what it offers when they pay on its website? I live in Europe, if that changes anything.

    Read the article

  • Work @ Java shop. Tasked to redo intranet. I only know PHP - CTO says use that; Ops Says no - we have JSP, use that - Thoughts?

    - by Mackysback
    So I work as a project manager and was given the opportunity to redo the intranet and Internet site... I'd love to - it would be a great addition to my resume. However... I am not familiar with Java or JSP... I am fairly proficient with PHP and MySQL... Also, my intention was to use a CMS that I have experience with: WordPress, Drupal or Joomla... The site would be very simple, so I was thinking WordPress would be fine for this. I told management that I would need it to run on PHP and they gave me their blessing... Now the disconnect between IT and management... I spoke with ops and they told me that we have Java and JSP for that purpose, although the IT guy did say "oh, but I do know that PHP and MySQL are gaining popularity". That said... I do not understand much as far as systems architecture goes... We self-host and run WebSphere with IIS and JBoss with Tomcat. Any advice or suggestions? Thanks! Is my request that unfeasible?

    Read the article

  • Using the RadTransitionControl in Task-It

    Download Source Code

    NOTE: To run the source code provided you will need to update to the RC (release candidate) versions of Silverlight 4 and Visual Studio 2010.

    In a recent Ask the Experts webinar I showed a simple solution that had similarities to one I had used in my recent MVVM post, but with a few extra twists. I have since been asked if I could post the source code for this demo, so here it is (I am using similar techniques in my Task-It application).

    The database: before running the code for this app you will need to create the database. First, create a database called MVVMProject in SQL Server, then run MVVMProject.sql in the MVVMProjectDemo/Database directory of your downloaded .zip file. This should give you a Task table with 3 records in it. You will also need to update the connection string in web.config to point to your database.

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also 3 consumers of UserRepository: an admin interface which may list all users, a public front-end which may list all users, and an account-authenticated API which should only list its own users. Assume UserRepository is something like this:

        class UsersRepository extends DatabaseAbstraction {
            private function query() {
                return $this->database()->select('users.*');
            }

            public function getAll() {
                return $this->query()->exec();
            }

            // IMPORTANT:
            // Tons of other methods for searching, filtering,
            // joining of other tables, ordering and such...
        }

    Keeping in mind the comment above, and the need to abstract the user querying conditions, how should I handle querying users filtered by account_id? I can picture three possible roads:

    1. Should I create an AccountUsersRepository?

        class AccountUsersRepository extends UserRepository {
            public function __construct(Account $account) {
                $this->account = $account;
            }

            private function query() {
                return parent::query()
                    ->where('account_id', '=', $this->account->id);
            }
        }

    This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way).

    2. Should I put it as a method on AccountsRepository?

        class AccountsRepository extends DatabaseAbstraction {
            public function getAccountUsers(Account $account) {
                return $this->database()
                    ->select('users.*')
                    ->where('account_id', '=', $account->id)
                    ->exec();
            }
        }

    This requires the duplication of all UserRepository methods and may need another UserQuery layer that implements that querying logic in a chainable way.

    3. Should I query UserRepository from within my Account entity?

        class Account extends Entity {
            public function getUsers() {
                return UserRepository::findByAccountId($this->id);
            }
        }

    This feels more like an aggregate root to me, but introduces a dependency of the Account entity on UserRepository, which may violate a few principles.

    4. Or am I missing the point completely? Maybe there's an even better solution?

    Footnote: besides permissions being a Service concern, in my understanding they shouldn't implement SQL queries but leave that to repositories, since those may not even be SQL-driven.

    Read the article

  • Thinking about open-sourcing quiz project [closed]

    - by user72727
    I was thinking about starting an open source project. I have a few projects that might work OK as an open project, but thought I might dip my toe into the water with a simple quiz project. The idea is you can add questions to a quiz, and arrange questions by topic, difficulty or location. Users would hopefully get an interesting quiz, tuned to their ability. At the end they'd get a score, and hopefully they might provide either some feedback on the questions or even supply a few questions of their own. I couldn't see a similar project (famous last words). I have a basic version of the project that gives the user a bunch of questions to answer in 10 minutes. It doesn't currently group the questions into topics, and no feedback is taken. I've also been told the graphical questions don't work on iPads for some reason. Would this be a suitable project to go open source? I did find various quiz projects out there, but all seemed rather narrowly focused. I really wanted something that could cover any type of question on any type of subject. I'd prefer to keep the questions in MySQL, but I can see how this might make it more difficult for others to get on board - should I move to data files? How do I proceed? http://www.checkmypages.com/numbers

    Read the article

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on creating a 2D map prototype, and I've come to the rendering bit of it. I have a tilesheet with tiles; each tile is 30x30 pixels, and there's a 1px border to delimit them. In SFML the usual method of drawing a part of a tilesheet is declaring an IntRect with the rectangle coordinates, then calling the setTextureRect() method on a sprite. In a small game it would work, but I have well over 45 tiles and I'm adding more every day; I can't declare 45 IntRects, one for every material. The map is not optimized yet, and it would get even worse if I had to call the setTextureRect() method on top of declaring 45 IntRects. How could I simplify this task? All I need is a very simple and flexible solution for extracting a region of the tilesheet. Basically I have a Tile class. I create multiple instances of tiles (vectors) and each tile has a position and a material. I parse a map file, and as I parse it I set the materials of the map according to the parsed map file, and all I need to do is render. Basically I need to do something like this:

        switch (tile.getMaterial()) {
            case GRASS:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            case WATER:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            // handle more cases
        }
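
    One common approach (not taken from the question) is to store just a material/tile id per tile and compute the sub-rectangle from that id, the tilesheet's column count, the tile size and the border, instead of declaring one rect per material. A minimal sketch of the arithmetic, written in Java-style code for illustration: the 30px tile size and 1px border come from the question, the column count is an assumption, and the four numbers returned map directly onto an IntRect(left, top, width, height):

        // Computes {left, top, width, height} for tile number "id" on a sheet
        // laid out "columns" tiles per row, with "border" pixels between tiles.
        static int[] textureRectFor(int id, int columns, int tileSize, int border) {
            int col = id % columns;   // horizontal position of the tile in the sheet
            int row = id / columns;   // vertical position of the tile in the sheet
            int left = col * (tileSize + border);
            int top  = row * (tileSize + border);
            return new int[] { left, top, tileSize, tileSize };
        }

    With that, the switch statement collapses into a single setTextureRect call driven by tile.getMaterial().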

    Read the article

  • Lubuntu fails plug and play such as scanners; default file location on Gnumeric

    - by Arkay
    I am new to Lubuntu and I have been giving it a good try, but I confess I am now tempted to go back to Windows. However, I am open to persuasion if I can get a simple answer to two questions. The first is a hardware issue. It seems from the forum that many users like me cannot use plug-and-play hardware such as scanners. Are we doing anything wrong, or does Lubuntu (mine is 13.04) not support plug and play, or scanners? I have read lots of previous answers about scanners, but they all seem to involve writing lines of instructions into various areas of the system - not something an amateur like me can do with ease. Mine is a Packard Bell Diamond 1200 Plus and apparently it should work fine on Lubuntu - but can I even get it recognised, let alone working? No. Secondly, is there an easy way in Gnumeric to set the default file location so that I don't have to trawl through my whole tree to locate a file I want to open? Thanks to anybody whose wisdom can get me to stay with Lubuntu.

    Read the article

  • Entry level engineer question regarding memory management

    - by Ealianis
    It has been a few months since I started my position as an entry-level software developer. Now that I am past some learning curves (e.g. the language, jargon, and syntax of VB and C#), I'm starting to focus on more esoteric topics, so as to write better software. A simple question I presented to a fellow coworker was met with the response that I'm "focusing on the wrong things." While I respect this coworker, I disagree that this is a "wrong thing" to focus upon. Here is the code (in VB), followed by the question. Note: the function GenerateAlert() returns an Integer.

        Dim alertID As Integer = GenerateAlert()
        _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), alertID))

    vs...

        _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), GenerateAlert()))

    I originally wrote the latter and rewrote it with the "Dim alertID" so that someone else might find it easier to read. But here is my concern and question: writing this with the Dim alertID does in fact take up more memory - finite, but more - and should this method be called many times, could that lead to an issue? How will .NET handle this alertID variable? Outside of .NET, should one manually dispose of it after use (near the end of the sub)? I want to ensure I become a knowledgeable programmer who does not just rely upon garbage collection. Am I overthinking this? Am I focusing on the wrong things?

    Read the article

  • Generic Pop and Push for List<T>

    - by Bil Simser
    Here's a little snippet I use to extend the generic List class to have capabilities similar to the Stack class. The Stack<T> class is great, but it lives in its own world under System.Object. Wouldn't it be nice to have a List<T> that could do the same? Here's the code:

        public static class ExtensionMethods
        {
            public static T Pop<T>(this List<T> theList)
            {
                var local = theList[theList.Count - 1];
                theList.RemoveAt(theList.Count - 1);
                return local;
            }

            public static void Push<T>(this List<T> theList, T item)
            {
                theList.Add(item);
            }
        }

    It's a simple extension but I've found it useful; hopefully you will too! Enjoy.
    Read the article

  • An algorithm for finding subset matching criteria?

    - by Macin
    I recently came up with a problem which I would like to share some thoughts about with someone on this forum. It relates to finding a subset. In reality it is more complicated, but I have tried to present it here using simpler concepts. To make things easier, I created this conceptual DB model: let's assume this is a DB for storing recipes. A recipe can have many instruction steps and many ingredients. Ingredients are stored in a cupboard and we know how much of each ingredient we have. Now, when we create a recipe, we have to define how much of each ingredient we need. When we want to use a recipe, we would just check whether the required amount is less than the available amount for each product and then decide if we can cook a dinner - if the amount required for at least one ingredient exceeds the available amount, the recipe cannot be cooked. A simple SQL query gets the result. This is straightforward, but I'm wondering how I should proceed when the problem is stated the other way round, i.e. how to find recipes which can be cooked using only the ingredients that are available? I hope my explanation is clear, but if you need any more clarification, please ask.
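
    Stated the other way round, the check becomes: a recipe qualifies when none of its required ingredients exceeds what is in the cupboard (in SQL this is typically expressed as an anti-join or NOT EXISTS over the requirements). A minimal in-memory sketch of that filter in Java, with hypothetical map-based structures standing in for the recipe and cupboard tables:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;

        public class RecipeFilter {
            // cupboard: ingredient name -> amount available
            // recipes:  recipe name -> (ingredient name -> amount required)
            public static List<String> cookableRecipes(Map<String, Double> cupboard,
                                                       Map<String, Map<String, Double>> recipes) {
                List<String> cookable = new ArrayList<String>();
                for (Map.Entry<String, Map<String, Double>> recipe : recipes.entrySet()) {
                    boolean ok = true;
                    for (Map.Entry<String, Double> need : recipe.getValue().entrySet()) {
                        Double available = cupboard.get(need.getKey());
                        // A missing or insufficient ingredient disqualifies the recipe.
                        if (available == null || need.getValue() > available) {
                            ok = false;
                            break;
                        }
                    }
                    if (ok) {
                        cookable.add(recipe.getKey());
                    }
                }
                return cookable;
            }
        }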

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I shared with you how a broad-based transformation makes cloud more than a technology initiative; I will describe in this section how it requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure. There are different groups managing different pieces of the puzzle, and yet they don't always work together. Provisioning a new application therefore may require a request to float endlessly through the system administrator, DBA and middleware admin worlds – resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation – which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation – thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical. For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution. It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as the infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone – and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission Critical Applications Efficiently. It should not only be able to run your mission critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications – and not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities. At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry – covering public, private and hybrid clouds across the infrastructure, platform and applications tiers. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And to a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies.

    I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of Enterprise Cloud. Stay tuned.

    Read the article

  • Predictive firing (in a tile-based game)

    - by n00bster
    I have a (turn-based) tile-based game in which you can shoot at entities. You can move around with mouse and keyboard; it's all tile-based, except that bullets move "freely". I've got it all working just fine, except that when I move and the creatures shoot towards the player, they shoot towards the previous tiles, resulting in ugly-looking misses or apparent lag. I think I need to implement some kind of predictive firing based on the bullet speed and the distance, but I don't quite know how to implement such a thing... Here's a simplified snippet of my firing code:

        class Weapon {
            public void fire(int x, int y) {
                ...
                Creature owner = getOwner();
                Tile targetTile = Zone.getTileAt(x, y);
                float dist = Vector.distance(owner.getCenterPosition(), targetTile.getCenterPosition());

                Bullet b = new Bullet();
                b.setPosition(owner.getCenterPosition());

                // Take dist into account in the duration to get constant speed regardless of distance
                float duration = dist / 600f;

                // Moves the bullet to the centre of the target tile in the given amount of time (in seconds)
                b.moveTo(targetTile.getCenterPosition(), duration);

                // This is what I'm after:
                // Vector v = predict the position
                // b.moveTo(v, duration);

                Zone.add(b); // Now the bullet gets "ticked" and the moveTo is carried out
            }
        }

    Movement of creatures is as simple as setting the position variable. If you need more information, just ask.
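
    One common way to lead a moving target (this is not from the question) is to solve for the interception time: find the time t at which the distance the bullet covers at its fixed speed equals the distance to where the target will be. With a constant target velocity this reduces to a quadratic equation. A minimal sketch, using plain floats rather than the question's Vector/Tile types; the 600f from the snippet above would be passed as bulletSpeed:

        // Returns the point to aim at so a bullet moving at bulletSpeed (units/sec)
        // meets a target moving with constant velocity (vx, vy), or null if the
        // bullet can never catch the target.
        static float[] interceptPoint(float shooterX, float shooterY,
                                      float targetX, float targetY,
                                      float vx, float vy, float bulletSpeed) {
            float dx = targetX - shooterX;
            float dy = targetY - shooterY;
            // Coefficients of |d + v*t|^2 = (bulletSpeed*t)^2 rearranged to a*t^2 + b*t + c = 0
            float a = vx * vx + vy * vy - bulletSpeed * bulletSpeed;
            float b = 2 * (dx * vx + dy * vy);
            float c = dx * dx + dy * dy;
            float t;
            if (Math.abs(a) < 1e-6f) {
                if (Math.abs(b) < 1e-6f) return null; // degenerate case, no solution
                t = -c / b;
            } else {
                float disc = b * b - 4 * a * c;
                if (disc < 0) return null;            // bullet too slow to ever intercept
                float sqrtDisc = (float) Math.sqrt(disc);
                float t1 = (-b - sqrtDisc) / (2 * a);
                float t2 = (-b + sqrtDisc) / (2 * a);
                t = Math.min(t1, t2);
                if (t < 0) t = Math.max(t1, t2);      // prefer the smallest positive time
            }
            if (t < 0) return null;
            return new float[] { targetX + vx * t, targetY + vy * t };
        }

    In the fire method, the target's velocity could be estimated from its last move (current position minus previous position, divided by the time between moves), and the returned point would then replace targetTile.getCenterPosition() in the moveTo call.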

    Read the article

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am working on developing a personal temperature-logging viewer based on my RaspPi curl'ing data into my web server's API. Temperatures are taken every 2 seconds and I can have several temperature sensors posting data. Needless to say, I will have a lot of data to handle even within the scope of an hour. I have implemented a very simple paging API on the server so it doesn't time out; it currently returns data in units of 1000 per call and pages through the data. I had the idea to initially show, say, the last 20 minutes of data from a sensor (or all sensors, depending on user choices), then allow the user to select other timeframes from which to show data. The issue comes in when you want to view all sensors or an extended time period (say 24 hours). Is there a best practice for handling this large amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent example of this in use yet, even though there are a lot of ways to approach this problem. I am just looking for some suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side (as beyond a week of data this might not be feasible).

    Read the article
