Search Results

Search found 7617 results on 305 pages for 'fields for'.


  • SharePoint 2010 Hosting :: How to Create an External Content Type SharePoint 2010

    - by mbridge
    In this short article I want to show how easily SharePoint Designer 2010 can create an External Content Type against an external database and integrate it with our SharePoint portals. You can download SharePoint Designer 2010 here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d88a1505-849b-4587-b854-a7054ee28d66&displaylang=en

    For this example I will create a database in SQL Server and use SharePoint Designer 2010 to create the connection, so that a SharePoint list mirrors the database table. The first thing we need to do is connect to SQL Server, create a database called "Contacts", and add a "Contact" table with the required fields.

    When we create the External Content Type, we need to associate it with a content type (in this case I am using the Generic List) and then create the connection to the external data source. After creating the connection to the database we can define which columns we will use and which operations our custom list will offer. For this example I selected all of the default operations. These operations are important because the business rules are defined in each one. Once the different operations are created we can create the custom list, choose how the operations behave, and give the custom list a name.

    If you try to access the new custom list, called "Custom Contact", you will see that you do not yet have access to Business Data Connectivity. To resolve this we need to grant users access and permissions on the External Content Type's BDC connection in Central Administration. Open the Central Administration page and choose "Service Applications > Manage Service Applications", select the "Business Data Connectivity Service", and then select "Manage". This lists all External Content Types; choose the one we created and select "Set Object Permissions". This option lets you add users to the BDC and manage their permissions on the custom list.

    After the correct permissions are granted we can access the data in our custom Contact list, start creating new items, and use all of the other operations we defined for the list. I hope you like this little article about connecting database content to a SharePoint portal using External Content Types and BCS. Thank you.

    Read the article

  • Updating My Online Boggle Solver Using jQuery Templates and WCF

    With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. This model simplifies web page development, but carries with it some costs - namely, the large amount of data exchanged between the client and the server during a postback. On postback the browser sends the values of all of its form fields (including hidden ones, like view state, which may be quite large) to the server; the server then sends back the entire contents of the web page. While there are some scenarios where this amount of information needs to be exchanged, in many cases the user has performed some action that requires far less information to be exchanged. With a little bit of forethought and code we can have the browser and server exchange much less data, which leads to more responsive web pages and an improved user experience.

    Over the past several weeks I've been writing an article series on accessing server-side data from client script. Rather than rely solely on forms and postbacks, many websites use JavaScript code to asynchronously communicate with the server in response to the page loading or some other user action. The server, upon receiving the JavaScript-initiated request, returns just the data needed by the browser, which the browser then seamlessly integrates into the web page. There are a variety of technologies and techniques that can be employed to provide both the needed server- and client-side functionality. Last week's article, Using WCF Services with jQuery and the ASP.NET Ajax Library, explored using the Windows Communication Foundation, or WCF, to serve data from the web server and showed how to consume such a service using both the ASP.NET Ajax Library and jQuery.

    In a previous 4Guys article, Creating an Online Boggle Solver, I built an application to find all solutions in a game of Boggle. (Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters.) This article takes the lessons learned in Using WCF Services with jQuery and the ASP.NET Ajax Library and uses them to update the user interface for my online Boggle solver, replacing the existing WebForms-based user interface with a more modern and responsive interface. I also used jQuery Templates, a JavaScript-based templating library that is useful for displaying the results from a server-side service. Read on to learn more! Read More >
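    To make the WCF side concrete, here is a minimal sketch of the kind of AJAX-friendly service operation such a page could call from jQuery. It is illustrative only; the service shape and the BoggleSolver helper are assumptions, not the article's actual code.

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract(Namespace = "")]
        public interface IBoggleService
        {
            // Returns just the solutions for the submitted board, serialized as JSON,
            // so the page can render them client-side (e.g., with jQuery Templates).
            [OperationContract]
            [WebGet(ResponseFormat = WebMessageFormat.Json)]
            List<string> SolveBoard(string board);
        }

        public class BoggleService : IBoggleService
        {
            public List<string> SolveBoard(string board)
            {
                // BoggleSolver is a hypothetical helper standing in for the solver logic.
                return BoggleSolver.FindWords(board);
            }
        }

    The point of this shape is that the response carries only the word list rather than a full page render, which is exactly the bandwidth saving the article describes.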

    Read the article

  • Which is the most practical way to add functionality to this piece of code?

    - by Adam Arold
    I'm writing an open source library which handles hexagonal grids. It mainly revolves around the HexagonalGrid and the Hexagon class. There is a HexagonalGridBuilder class which builds the grid, which in turn contains Hexagon objects. What I'm trying to achieve is to enable the user to add arbitrary data to each Hexagon. The interface looks like this:

        public interface Hexagon extends Serializable {
            // ... other methods not important in this context
            <T> void setSatelliteData(T data);
            <T> T getSatelliteData();
        }

    So far so good. However, I'm writing another class named HexagonalGridCalculator which adds some fancy pieces of computation to the library, like calculating the shortest path between two Hexagons or calculating the line of sight around a Hexagon. My problem is that for those I need the user to supply some data for the Hexagon objects, like the cost of passing through a Hexagon, or a boolean flag indicating whether the object is transparent/passable or not.

    My question is how I should implement this. My first idea was to write an interface like this and make the user implement it:

        public interface HexagonData {
            void setTransparent(boolean isTransparent);
            void setPassable(boolean isPassable);
            void setPassageCost(int cost);
        }

    But then it occurred to me that if I add any other functionality later, all code will break for those who are using the old interface. So my next idea is to add annotations like @PassageCost, @IsTransparent and @IsPassable, which can be added to fields; when I'm doing the computation I can look for the annotations in the satelliteData supplied by the user. This looks flexible enough if I take into account the possibility of later changes, but it uses reflection. I have no benchmark of the cost of using annotations, so I'm a bit in the dark here. I think that in 90-95% of the cases the efficiency is not important, since most users won't use a grid where this is significant, but I can imagine someone trying to create a grid with a size of 5.000.000.000 x 5.000.000.000.

    So which path should I start walking on? Or are there some better alternatives? Note: These ideas are not implemented yet, so I did not pay too much attention to good names.

    Read the article

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is on the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the directed edges of the graph) around specifically marked objects. To explain my question better, let me describe my motivation. I currently use a moderately complex system to serialize the data used in my projects:

    - "Marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disk (but it could be done for any storage type providing the suitable interface).
    - Those objects can then be serialized automatically (e.g. obj.save()).
    - The serialization of a marked object 'a' implicitly contains all objects 'b' to which 'a' has a reference, either directly such that a.b = b, or indirectly such that a.c.b = b for some object 'c'.

    This is very simple and basically defines specific storage entries for specific objects. I then have "container" type objects that:

    - can be serialized similarly (in fact they are, or can be, "marked");
    - don't serialize in their storage entries the "marked" objects they reference directly: if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b).

    So, if I serialize 'a', it automatically serializes all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are some structural limitations to this approach: the multi-entry saving only works through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system will store 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size, but also changes the object reference graph after re-loading.

    I am thinking of generalizing the process. Apart from the practical questions on implementation (I am coding in Python, and use Pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are:

    - What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with an "attach to last marked node" behavior?
    - Is there any work done on this problem, practical or theoretical, that I should be aware of?

    Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about graph databases.

    Read the article

  • Should custom data elements be stored as XML or database entries?

    - by meteorainer
    There are a ton of questions like this, but they are mostly very generalized, so I'd like to get some views on my specific usage.

    General: I'm building a new project on my own in Django. Its focus will be on small businesses. I'd like to make it somewhat customizable for my clients so they can add to their customer/invoice/employee/whatever items. My models would reflect boilerplate items that all ModelX might have, for example: first name, last name, email address, and so on. Then my users would be able to add fields for whatever data they might like. I'm still in the design phase and am building this myself, so I've got some options.

    Working on... Right now the 'extra items' models have a FK to the generic model (Customer and CustomerDataPoints, for example). All values in the extra data points are stored as char and will be coerced/parsed into their actual format when the view is built. In this build the user could theoretically add whatever values they want, group them in sets, and generally access them at will from the views relevant to that model. Pros: low storage overhead, very extensible, searchable. Cons: more SQL joins.

    My other option is to use some type of markup, or key-value pairing, stored directly on the boilerplate models. This could essentially be any low-overhead method, whether XML or literal strings. The view and form generated from the stored data would take control of validation and reorganizing on updates. Then it would just dump the data back in as a char/blob/whatever. Something like:

        <datapoint type='char' value='something' required='true' />
        <datapoint type='date' value='01/01/2001' required='false' />
        ...

    Pros: no joins needed; updates for validation and views are decoupled from the data. Cons: much higher storage overhead, limited capacity to search on the extra content.

    So my question is: if you didn't live within the constraints imposed by your company, which method would you use? Why? What benefits or pitfalls do you see down the road for me as a small business trying to help other small businesses?

    Just to clarify, I am not asking about custom UI elements; those I can handle with forms and template snippets. I'm asking primarily about data storage and retrieval of non-standardized data relative to a boilerplate model.

    Read the article

  • Office 2010 Professional Plus (Top 10 reasons to upgrade)

    - by mbcrump
    Being a huge nerd, I decided that I would go ahead and upgrade to the latest and greatest Office: Office 2010 Professional Plus. The biggest concern I had was losing all my mail settings from Outlook 2007. Thankfully, it upgraded gracefully and worked like a charm. So let's start this top 10 list.

    1) You can upgrade without fear of losing all your stuff! As you can tell by the screenshot below, you can select what you want to do. I selected to remove all previous versions.

    2) Outlook conversations: just like GMail, you can now group emails by conversation. This is simply awesome and a must have.

    3) The ability to ignore conversations. If you are on an email thread that has nothing to do with you, simply "ignore" the conversation and all of its emails go into the deleted folder.

    4) Quick Steps: do you send email to the same team member or group constantly? With Quick Steps, it's just one click away.

    5) Spell check in the subject line!

    6) Easier screenshots, built in - just click the button. No more ALT+PrintScreen, for those that are not aware of the awesome SnagIt 10 that's out.

    7) Open in Protected View. When you open a document from an email attachment, it lets you know the file may be unsafe. You can click a button to enable editing. This is great for guarding against malicious macros.

    8) Excel has always had a variety of charts and graphs available to visually depict data and trends. With Excel 2010, though, Microsoft has added a new feature called Sparklines, which allows you to place a mini-graph or trend line in a single cell. Sparklines are a cool way to quickly and simply add a visual element without having to go through the effort of inserting a graph or chart that overwhelms the worksheet.

    9) Contact actions. If you hover over a name in the From or To fields of an email, you get a popup giving you several actions you can perform on that person, such as adding them to your Outlook contacts, scheduling a meeting, viewing their stored contact information if they are already in your contacts, sending an instant message or even starting a telephone call.

    10) Windows 7 taskbar context menu - I love the jump list. I don't know how much I would actually use it, but it just rocks.

    Read the article

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"?

        public int test1(String str) {
            return str.length();
        }

        public int test2(String str) {
            if (str == null) {
                System.err.println("Passed null!.");
                //forgotten return;
            }
            return str.length();
        }

    The difference, or at least the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks that the variable "str" may be null, though they did not provide a proper null check. In NetBeans IDE 7.3, you see a hint for "test2" but no hint for "test1", since in the latter case we don't know anything about the developer's intention for the variable, and providing a hint there would flood the source code with too many false positives.

    Annotations are supported in understanding how a piece of code is intended to be used. If method return types use @Nullable, @NullAllowed or @CheckForNull, the value is considered to be "strongly possible to be null", as it also is when the variable is tested for null, as shown above. When using @NotNull, @NonNull or @Nonnull, the value is considered to be non-null. (The exact FQNs of the annotations are ignored; only simple names are checked.) The screenshots in the original post show where the hints are displayed for the non-null annotations (the "strongly possible to be null" hints are not shown there, though you can see one of them in the screenshot above), together with a comment showing what is displayed when you hover over the hint.

    There isn't a "one size fits all" refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via tools in NetBeans IDE, as shown yesterday for class member reordering across code bases. However, you can instead go to Source | Inspect and then run a scan over a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area.

    Thanks to Jan Lahoda for help with the info above; any misunderstandings are my own fault. Jan reports that this currently also works in NetBeans IDE 7.3 dev builds for fields, but that it may need to be disabled since right now too many false positives are returned.

    Read the article

  • Learn programming backwards, or "so I failed the FizzBuzz test. Now what?"

    - by moraleida
    A Little Background

    I'm 28 today, and I've never had any formal training in software development, but I do have two higher education degrees, equivalent to a B.A. in Public Relations and an Executive MBA focused on Project Management. I worked in those fields for about 6 years total, and then, 2.5 years ago, I quit/lost my job and decided to shift directions. After a month thinking things through I decided to start freelancing, developing small websites in WordPress. I self-learned my way into it, and today I can say I run a humble but successful career developing themes and plugins from scratch for my clients - mostly agencies outsourcing some of their dev work for medium/large websites. But sometimes I just feel that not having studied enough math, or not having a formal understanding of things, really holds me back when I have to compete or work with more experienced developers.

    I'm constantly looking for ways to learn more, but I seem to lack the basics. Unfortunately, spending 4 more years in Computer Science is not an option right now, so I'm trying to learn all I can from books and online resources. This method is never going to get NASA to employ me, but I really don't care right now. My goal is to first pass the bar and be able to call myself a real programmer. I'm currently spending my spare time studying Java For Programmers (to get a hold on a language everyone says is difficult/demanding), reading excerpts of Code Complete (to get hold of best practices) and also Code: The Hidden Language of Computer Hardware and Software (to grasp the inner workings of computers).

    TL;DR

    So, my current situation is this: I'm basically capable of writing any complete system in PHP (with the help of Google and a few books), integrating Ajax, SQL and whatnot, though maybe a little slower than an experienced dev would expect, due to all the research involved. But I was stranded yesterday trying to figure out (not Google) a solution for the FizzBuzz test, because I didn't have the modulus operator - if($n1 % $n2 == 0) - memorized. What would you suggest as a good way to solve this dilemma? What subjects/books should I study that would get me solving problems faster and maybe more "in a programmer's way"?

    EDIT - It seems that there was some confusion about what I did not know when solving FizzBuzz. Maybe I didn't express myself right: I knew the steps needed to solve the problem. What I didn't have memorized was the modulus operator. The problem was in transposing basic math to the program, not in knowing basic math. I took the test for fun, after reading about it on Coding Horror. I just decided it was a good base-comparison line between me and formally trained devs. I used this as an example of how not having dealt with math in a computer environment before makes me lose time looking up basic things like the modulus operator in order to solve simple problems.

    Read the article

  • Why is Double.Parse so slow?

    - by alexhildyard
    I was recently investigating a bottleneck in one of my applications, which read a CSV file from disk using a TextReader a line at a time, split the tokens, called Double.Parse on each one, then shunted the results into an object list. I was surprised to find it was actually the Double.Parse which seemed to be taking up most of the time. Googling turned up this, which is a little unfocused in places but throws out some excellent ideas:

    - It makes more sense to work with binary format directly, rather than coerce strings into doubles.
    - There is a significant performance improvement in composing doubles directly from the byte stream via long intermediaries.
    - String.Split is inefficient on fixed length records.

    In fact it turned out that my problem was more insidious and also more mundane - a simple case of bad data in, bad data out. Since I had been serialising my Doubles as strings, when I inadvertently divided by zero and produced a "NaN", this of course was serialised as well without error. And because I was reading in using Double.Parse, these "NaN" fields were also (correctly) populating real Double objects without error. The issue is that Double.Parse("NaN") is incredibly slow. In fact, it is of the order of 2000x slower than parsing a valid double. For example, the code below gave me results of 357ms to parse 1000 NaNs, versus 15ms to parse 100,000 valid doubles.

        const int invalid_iterations = 1000;
        const int valid_iterations = invalid_iterations * 100;
        const string invalid_string = "NaN";
        const string valid_string = "3.14159265";

        DateTime start = DateTime.Now;

        for (int i = 0; i < invalid_iterations; i++)
        {
            double invalid_double = Double.Parse(invalid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of invalid double, time taken (ms): {1}",
            invalid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

        start = DateTime.Now;

        for (int i = 0; i < valid_iterations; i++)
        {
            double valid_double = Double.Parse(valid_string);
        }

        Console.WriteLine(String.Format("{0} iterations of valid double, time taken (ms): {1}",
            valid_iterations,
            ((TimeSpan)DateTime.Now.Subtract(start)).Milliseconds
        ));

    I think the moral is to look at the context - specifically the data - as well as the code itself. Once I had corrected my data, the performance of Double.Parse was perfectly acceptable, and while clearly it could have been improved, it was now sufficient to my needs.
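    A practical takeaway (mine, not the author's) is to keep the pathological token away from Double.Parse in the first place. A minimal sketch, assuming it is acceptable to map the literal token straight to double.NaN without going through the parser:

        using System;
        using System.Globalization;

        static class CsvFieldParser
        {
            // Parses one CSV token, short-circuiting the literal "NaN" case so the slow
            // Double.Parse("NaN") path is never hit. Returning double.NaN here is an
            // assumption; a real application might instead log or reject the record.
            public static double ParseField(string token)
            {
                if (string.Equals(token, "NaN", StringComparison.OrdinalIgnoreCase))
                {
                    return double.NaN;
                }
                return double.Parse(token, CultureInfo.InvariantCulture);
            }
        }

    Whether Double.TryParse shares the same slow path for "NaN" is something I have not measured, so the explicit check is the conservative option.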

    Read the article

  • MS in Computer Science after BE in electronics

    - by Abhinav
    I am doing my 3rd year of a Bachelors in Electronics and Electrical Communication, but from the first year I have been interested in Computer Science. At that time it was just my hobby, but in second year, when I joined robotics, my love for computer science grew. My team and I came in the top three in two national competitions (technical fests of different IITs), where we used image processing, hardware interfacing, etc. But then I realised that Computer Science is not just about coding. I took many lectures from free online schools like Udacity and Coursera in subjects related to Artificial Intelligence, Building a Search Engine, Design and Analysis of Algorithms, Programming a Robotic Car, Programming Languages, Machine Learning, Software Engineering as a Service, WebApps Engineering, Compilers, Applied Cryptography, etc. I also did some courses in Core and Advanced Java in my second year at a training institute, and I will be taking courses in Statistics, Databases and Discrete Mathematics from 25th June.

    Now I realize how vast the field of Computer Science is, and how efficient you become at choosing algorithms and classifying problems into different subfields that have been thoroughly researched, so you don't always resort to brute force or naive programming. This field has become a kind of passion for me. On top of that, I am doing my 6-month internship in the software field at Texas Instruments, where I am working on automation and algorithms. I also have some 5-6 good college-level projects in software and robotics. I like electronics too, but only some fields, such as Operating Systems (this subject was there in Electronics also), Microprocessors, Digital, Computer Architecture, DSPs, etc.

    I really want to pursue an MS in some field of Computer Science. I am taking the GRE in October/November. So far I have a good CGPA of around 9.4/10, and one year of college is still left. Do I have any chance that some good university in the US will consider me for an MS in a field related to Computer Science or robotics? Also, can you suggest some things I can do during this one year to increase my chances for an MS, or should I apply for EECS (Electrical Engineering and Computer Science) and then shift more towards Computer Science as my major option? My main aim is to do a PhD after the MS in CS if I can manage that somehow. I know that I will have to put in much more effort to understand things in an MS than CS undergraduates will, but I will do that with full dedication. Also, when I talk with the CS students at my college, or during my internship, I didn't feel that I was missing very much of what they know, and I was very comfortable working with the software employees during my internship.

    Read the article

  • Editing /.config/dconf/user

    - by user86322
    I am having a problem with GNOME 3 (actually, I have it set to fallback mode, i.e. GNOME 2). I have two displays and I need an X screen for each of them (I used nvidia-xconfig and nvidia-settings to set this up). However, every time I either restart X or log in, GNOME seems to be adding object values under /gnome/gnome-panel/layouts. For example, the first time I set up the two separate X screens I had "clock"; after logging out and in there were "clock" and "clock1" under objects; after another log out/in there were three: "clock", "clock1", "clock2"... and after logging out/in some 30 times: "clock", "clock1", "clock2", ... "clock42"! The same thing goes for top-panels, menu-bars, etc.

    After a while, I found out I could remove all of those using dconf-editor: going to /gnome/gnome-panel/layouts, removing all the repetitions under the objects-id-list and top-id-list fields, and leaving one value for each object. This is not a solution, but at least it allows me to keep using Linux without so much trouble. However, the problem recurs every time I restart X or log in.

    I have now finally learned about dconf and where the user profile settings are located (~/.config/dconf/user), and one can use dconf to see the keys. In my case, I need to change/remove many keys (all those clockX, workspace-X, menu-bar-X, etc., where X goes from 1 to 42 and still counting), so it's really tedious and boring to change them one by one using "dconf write". So I found "dconf dump", which allows me to dump everything into a text file and edit the file really quickly (i.e. "dconf dump / >> dump_user.txt"). The problems? Two of them:

    1. How do I "load" the edited "dump_user.txt" back into the user profile? (I read somewhere there was a "dconf reload", but reload doesn't exist as a command under "dconf".)
    2. How do I stop GNOME from adding more objects to my desktop environment every time I log in or restart X?

    NOTE: The problem doesn't occur when I set the displays to use the TwinView feature (i.e. the desktop is extended/shared by both displays). However, in my case I need two separate X screens. Any help/suggestion would be greatly appreciated. Thanks.

    Read the article

  • Masking OpenGL texture by a pattern

    - by user1304844
    Tiled terrain. The user wants to build a structure. He presses build, and for each tile an "allow" or "disallow" tile sprite is added to the scene. FPS drops right away, since there are 600+ tiles added to the screen. Since the map equals the screen, there is no scrolling. I came to the idea of making an "allow" grid covering the whole map and masking the disallowed fields.

    Approach 1: Create allow and disallow grid textures. Draw a polygon on screen. Pass both textures to the fragment shader. Determine the position inside the polygon and use the color from allowTexture if the fragment belongs to an allowed field, disallowTexture otherwise.

    Problem: How do I know if I'm on a field that isn't allowed, if I cannot pass the matrix representing the map (enum FieldStatus[][] (Allow / Disallow)) to the shader? Inside the shader I therefore don't know which fragments should be masked.

    Approach 2: Create the allow texture. Create an empty texture buffer the same size as the allow texture. Memset the pixels of the empty texture to the desired color for each pixel that doesn't allow building. Draw a polygon on screen. Pass both textures to the fragment shader. Use the texture2 color if its alpha is greater than 0, the texture1 color otherwise.

    Problem: I'm not sure what the right way to manipulate pixels on a texture is. Do I just make a buffer with width*height*4 size and memcpy the color[] to the desired coordinates, or is there anything else to it? Would I have to call glTexImage2D after every change to the texture? Another problem with this approach is that it takes a lot more work to get a prettier effect, since I'm manipulating the color pixels instead of just masking two textures.

        varying vec2 TexCoordOut;
        uniform sampler2D Texture1;
        uniform sampler2D Texture2;

        void main(void)
        {
            vec4 allowColor = texture2D(Texture1, TexCoordOut);
            vec4 disallowColor = texture2D(Texture2, TexCoordOut);

            if (disallowColor.a > 0) {
                gl_FragColor = disallowColor;
            } else {
                gl_FragColor = allowColor;
            }
        }

    I'm working with OpenGL on Windows. Any other suggestion is welcome.

    Read the article

  • Using jQuery to Make a Field Read Only in SharePoint

    - by Mark Rackley
    Okay… this will be my shortest blog post EVER. Very little rambling, I promise, and I'm sure this has been blogged more than once, so I apologize for adding to the noise, but like I always say, I blog for myself so I have a global bookmark.

    So, let's say you have a field on a SharePoint form and you want to make it read only. You COULD just open it up in SPD and easily make it read only, but some people are purists and don't like to use SPD or modify the default new/edit/disp forms. Put me in the latter camp: I try to avoid modifying these forms, and it seemed like such a simple task that I didn't want to create a new un-ghosted form. So... how do you do it? It's only one line of jQuery. All you need to do is find the id of your input field and capture the keypress on it so that it cannot be modified (you should probably also capture clicks for dropdowns/checkboxes/etc., but I didn't need to). Anyway, here's the entire script:

        <script src="jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            jQuery(document).ready(function($){
                //capture keypress on our read only field and return false
                $('#idOfInputField').keypress(function() {
                    return false;
                });
            })
        </script>

    You can find the ID of your input field by viewing the source; this ID stays consistent as long as you don't muck with the list or form in the wrong way. Please note, you CANNOT disable the input field as an alternative to capturing the keypress. If you do this and save the form, any data in the disabled fields will be wiped out.

    There are probably a dozen ways to make a field read-only, and if all you are using jQuery for is to make a field read-only, then you might want to question adding the overhead (although it's really not that much). Hey... it's another tool for your tool belt.

    Read the article

  • Several New Hints

    - by Ondrej Brejla
    Hi all! Today we would like to introduce to you some of our new experimental hints for NetBeans 7.2. They are called Unused Use Statement and Immutable Variables.

    Unused Use Statement

    This hint is quite simple. It highlights (underlines) your use statements which are not used. A typical use case is after some refactoring, when you forget to remove some obsolete use statements. This hint warns you about them and allows you to remove them easily. Just click on the hint bulb in the gutter and select Remove Unused Use Statement. And of course, it works in combined multiple use statements too.

    Immutable Variables

    The next one is the hint which checks for too many assignments to a variable. And why? That's simple: mostly you should use just one assignment to one variable. But sometimes you are lazy and you reassign the same variable several times in a row (as in the first screenshot of the original post), which is quite wrong, because effectively only the final value counts (as the second screenshot shows). And that's exactly the case when our new hint warns you that Too many assignments (2) into variable $foo occurred. Nothing more. Yes, we know that there are some cases where there could be more assignments and no warning should occur, e.g. because one likes the longer increment syntax more than the short one. So we tried to handle these cases so that the hint doesn't bother you when there is no need.

    Note: We are almost sure that this hint doesn't cover all your use cases, because there are a lot of them. So if you find something strange, write it into our Bugzilla so we can handle it better for you. Thanks for your patience!

    And the last thing: you can set the number of allowed assignments in Tools -> Options -> Editor -> Hints -> PHP: Immutable Variables. Note: This hint works just for common variables, not for fields. We have an enhancement request for that, and it should be implemented in the next version of NetBeans (probably 7.3).

    And that's all for today. As usual, please test it, and if you find something strange, don't hesitate to file a new issue (product php, component Editor). Thanks.

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system.

    What I'm looking for is a clean, elegant way to:

    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) as follows:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model => model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach:

    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit

    More context: The model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
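    For what it's worth, here is a rough sketch of the dependency-map idea described above; it is only an assumption about what a CalculationManager-like class might do internally, not the poster's actual code. Register which inputs feed which recalculations, then react to PropertyChanged:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;

        public class DependencyMap<TModel> where TModel : INotifyPropertyChanged
        {
            // input property name -> recalculation actions that depend on it
            private readonly Dictionary<string, List<Action<TModel>>> _dependents =
                new Dictionary<string, List<Action<TModel>>>();

            public void Register(string inputProperty, Action<TModel> recalculate)
            {
                List<Action<TModel>> list;
                if (!_dependents.TryGetValue(inputProperty, out list))
                {
                    list = new List<Action<TModel>>();
                    _dependents[inputProperty] = list;
                }
                list.Add(recalculate);
            }

            public void Attach(TModel model)
            {
                model.PropertyChanged += (sender, e) =>
                {
                    List<Action<TModel>> list;
                    if (_dependents.TryGetValue(e.PropertyName, out list))
                    {
                        // may raise further PropertyChanged events for chained dependencies
                        foreach (var recalc in list)
                        {
                            recalc(model);
                        }
                    }
                };
            }
        }

    Usage would mirror the BMI example: register the same recalculation under both "Height" and "Weight", then attach the map to the model instance. This keeps the "what depends on what" knowledge in one data structure instead of a case statement, at the cost of string property names (which expression trees, as in the question, can remove).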

    Read the article

  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column, give it the formula =[Start Time], and assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day.

    Here is an example of this in action (Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields): notice how the Start Date and Time (=[Start Time]) is reporting 6:00 PM of the previous day. The Start Date Value (=[Start Time], output type: Number) confirms this (.75 = 6:00 PM). Curiously enough, the Duration (=[End Time]-[Start Time]) is properly reporting the duration between 12:00 AM and 11:59 PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field.

    With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To determine this, we use the Duration above to assume the item is an all-day event and change our formula to be:

        =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time] + 1, [Start Time])

    This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all other values begin reporting the correct time. Since our formula above is strictly based on an expected duration, it will add one to the correct date, causing the date 5/11/2010 to appear. Notice, though, that the raw value of the start time (in this case) is a non-fractional number (40,308), whereas the all-day event was being represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:

        =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time])

    I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra long after having used it.

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above. -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
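    To make the distinction in the quote concrete, here is an illustrative sketch only (all type and property names are assumptions, not Udi Dahan's or the poster's) of the contrast between an Excel-like "update everything" command and small, intent-revealing commands:

        using System;

        // Coarse-grained: captures the new field values, but not why they changed.
        public class UpdateCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string LastName { get; set; }
            public string Address { get; set; }
            public bool IsPreferred { get; set; }
            // ...potentially dozens more attributes
        }

        // Fine-grained: each command is a separate unit of work that captures intent.
        public class MakeCustomerPreferredCommand
        {
            public Guid CustomerId { get; set; }
        }

        public class RecordCustomerMoveCommand
        {
            public Guid CustomerId { get; set; }
            public string NewAddress { get; set; }
        }

    The question above is essentially where on this spectrum to sit when one user action legitimately touches many attributes at once.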

    Read the article

  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This enabled organizations, such as universities and hospitals, fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes.

    Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the "pooling" letter of credit draw method to requesting on a "grant-by-grant" basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015. (The NIH had planned to fully transition to this new method by the end of fiscal 2014; however, the impact to institutions was deemed significant enough that a reprieve was recently granted.)

    In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:

    1. Federal Award Identification Number on the Proposal and Award Profile
    2. Letter of credit fields on contract lines to support award basis draws and comply with Federal close out mandates
    3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format
    4. Subaccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report
    5. Subaccount field and contract info displayed on the LOC Summary page
    6. Ability to generate pro forma and invoiced draw listings by a variety of dimensions
    7. Queries for generation and manipulation of data to upload into sponsor payment request systems and perform payment matching
    8. Contracts LOC Close Out query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close

    The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government. For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can log in to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.

    Read the article

  • Is SOAP Http POST more complicated than I thought

    - by Pete Petersen
    I'm currently writing a bit of code to send some XML data to a web service via HTTP POST. I thought this would be really simple and have written the following example code (C#):

        Console.WriteLine("Press enter to send data...");
        while (Console.ReadLine() != "q")
        {
            HttpWebRequest httpWReq = (HttpWebRequest)WebRequest.Create(@"http://localhost:8888/");

            Foo fooItem = new Foo
            {
                Member1 = "05",
                Member2 = "74455604",
                Member3 = "15101051",
                Member4 = 1,
                Member5 = "fsf",
                Member6 = 6.52,
            };

            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = fooItem.ToXml();
            byte[] data = encoding.GetBytes(postData);

            httpWReq.Method = "POST";
            httpWReq.ContentType = "application/xml";
            httpWReq.ContentLength = data.Length;

            using (Stream stream = httpWReq.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            HttpWebResponse response = (HttpWebResponse)httpWReq.GetResponse();
            string responseString = new StreamReader(response.GetResponseStream()).ReadToEnd();

            Console.WriteLine("Received " + responseString);
            Console.WriteLine("Press enter to send data...");
        }

    This is all I thought would be necessary. However, I have now been given the details for the web service, and they include some information which is unfamiliar to me; I'm unsure whether I need to include it. The information I was sent was:

        <url>http://sometext/soap/rpc</url>
        <namespace>http://sometext/a.services</namespace>
        <method>receiveInfo</method>
        <parm-id>xmldata</parm-id>  (Input data)  (Actual XML data as string)
        <parm-id>status</parm-id>   (Output data)
        <userid>user</userid>
        <password>pass</password>
        <secure>false</secure>

    I guess this means I need to include a username and password somehow, but I'm not sure what the namespace or method fields are used for. Could anyone give me a hint? Sorry, I've never used web services before.
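    For what it's worth, if those details mean the endpoint is a SOAP RPC service rather than a plain XML POST target, the request body would typically need a SOAP envelope whose body element is the method name in the service namespace, with the XML passed as the xmldata parameter. The sketch below is a hedged guess based only on the details quoted above; the element names, the SOAPAction value and the credential handling are assumptions and may not match the real WSDL.

        using System.Net;
        using System.Security;

        static class SoapHelper
        {
            // Wraps the serialized Foo XML in a SOAP 1.1 envelope for the receiveInfo method.
            public static string BuildSoapEnvelope(string xmlPayload)
            {
                return
                    "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                    "<soap:Body>" +
                    "<receiveInfo xmlns=\"http://sometext/a.services\">" +
                    "<xmldata>" + SecurityElement.Escape(xmlPayload) + "</xmldata>" +
                    "</receiveInfo>" +
                    "</soap:Body>" +
                    "</soap:Envelope>";
            }

            // Configures the request the way a SOAP endpoint usually expects it.
            public static HttpWebRequest BuildSoapRequest()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://sometext/soap/rpc");
                request.Method = "POST";
                request.ContentType = "text/xml; charset=utf-8";
                request.Headers.Add("SOAPAction", "\"receiveInfo\"");            // value is a guess
                request.Credentials = new NetworkCredential("user", "pass");     // if HTTP auth is used
                return request;
            }
        }

    If the service instead accepts plain POSTed XML, none of this is needed and the original loop is already close; the definitive answer lives in the service's WSDL or documentation.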

    Read the article

  • A starting point for Use Cases and User Stories

    - by Mike Benkovich
    Originally posted on: http://geekswithblogs.net/benko/archive/2013/07/23/a-starting-point-for-use-cases-and-user-stories.aspx

    Software is a challenging business and is rife with opportunities to go wrong. Over the years a number of methodologies have evolved to help make sure that things go right. In an effort to contribute to this I've created a list of user stories that I think should be included and sometimes are just assumed. Note this is a work in progress, so I'm looking for your feedback. I'm curious what you would add or change in my list.

    - As a DBA I am working with a normalized data model that reflects an agreed upon logical model for the system
    - As a DBA I am using consistent names for my fields which match the naming standards of my organization
    - As a DBA my model supports simple CRUD operations against all the entities
    - As an Application Architect the UI has been validated against the business requirements and a complete set of user stories has been created
    - As an Application Architect the database model has been validated against the UI
    - As an Application Architect we have a logical business model that describes all the known and/or expected usage of the system during the software's expected lifecycle
    - As an Application Architect we have a deployment diagram that describes how the application components will be deployed
    - As an Application Architect we have a navigation diagram that describes the typical application flow
    - As an Application Architect we have identified points of interaction which describe how the UI interacts with the services and the data storage
    - As an Application Architect we have identified external systems which may now or in the future use the data of this application and have adapted the logical model to include these interactions
    - As an Application Architect we have identified existing systems and tools that can be extended and/or reused to help this application achieve its business goals
    - As a Project Manager all team members understand the goals of each release and iteration as they are planned
    - As a Project Manager all team members understand their role and the roles of others
    - As a Project Manager we have support of the business to do the right thing even if it is not the expedient thing
    - As a Test/QA Analyst we have created a simulation environment for testing the system which does not use sensitive data and accurately reflects the scenarios of all the data that will be supported by the system
    - As a Test/QA Analyst we have identified the matrix of supported clients used to access the system, including the likely browsers, mobile devices and other interfaces that work with the application
    - As a Test/QA Analyst we have created exit criteria for each user story that match the requirements of the business story that was used to create it
    - As a Test/QA Analyst we have access to a test environment that is isolated from production and staging environments
    - As a Test/QA Analyst we have a way to reset the environment so we can rerun tests when a new version of the software becomes available
    - As a Test/QA Analyst I am able to automate portions of the test process

    Thoughts?

    -mike

    Read the article

  • Image first loaded, then it isn't? (XNA)

    - by M0rgenstern
    I am very confused at the moment. I have the following class (just a part of it):

        public class GUIWindow
        {
            #region Static Fields
            //The standard image for windows.
            public static IngameImage StandardBackgroundImage;
            #endregion
        }

    IngameImage is just one of my own classes, but it contains a Texture2D (and some other things). In another class I load a list of GUIWindows by deserializing an XML file:

        public static GUI Initializazion(string pXMLPath, ContentManager pConMan)
        {
            GUI myGUI = pConMan.Load<GUI>(pXMLPath);
            GUIWindow.StandardBackgroundImage =
                new IngameImage(pConMan.Load<Texture2D>(myGUI.WindowStandardBackgroundImagePath),
                    Vector2.Zero, 1024, 600, 1, 0, Color.White, 1.0f, true, false, false);
            System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            myGUI.Windows = pConMan.Load<List<GUIWindow>>(myGUI.GUIFormatXMLPath);
            System.Console.WriteLine("Windows loaded");
            return myGUI;
        }

    Here the line

        System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    prints "true". To load the GUIWindows I need an "empty" constructor, which looks like this:

        public GUIWindow()
        {
            Name = "";
            Buttons = new List<Button>();
            ImagePath = "";
            System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            //Image = new IngameImage(StandardBackgroundImage);
            //System.Console.WriteLine(
            //Image.IsActive = false;
            SelectedButton = null;
            IsActive = false;
        }

    As you can see, I commented lines out in the constructor, because otherwise this would crash. Here the line

        System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    doesn't print anything; it just crashes with the following error message:

        Building content threw NullReferenceException: Object reference not set to an object instance.

    Why does this happen? Before the program wants to load the list, it prints "true". But in the constructor, so while loading the list, it prints "false". Can anybody please tell me why this happens and how to fix it?
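    Not the original author's fix, but a minimal sketch of one way to keep the parameterless constructor safe, assuming it can run in a context (for example while the XML content is being built/deserialized) where the static StandardBackgroundImage has not been assigned yet. The ApplyStandardBackground name is made up for illustration, and this fragment belongs inside the GUIWindow class above:

        public GUIWindow()
        {
            Name = "";
            Buttons = new List<Button>();
            ImagePath = "";
            SelectedButton = null;
            IsActive = false;
            // Deliberately do not touch StandardBackgroundImage here; it may not exist yet.
        }

        // Called at runtime, after GUI.Initializazion has assigned the static field.
        public void ApplyStandardBackground()
        {
            if (GUIWindow.StandardBackgroundImage != null)
            {
                Image = new IngameImage(GUIWindow.StandardBackgroundImage);
                Image.IsActive = false;
            }
        }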

    Read the article

  • Combobox binding with different types

    - by George Evjen
    Binding to comboboxes in Silverlight has been an adventure the past couple of days. In our framework at ArchitectNow we use LookupGroups and LookupValues. In our database we would have a LookupGroup of NBA teams, for example. The group would be called NBATeams; we get the LookupGroupID and then get the values from the LookupValues table. So we would end up with a list of all 30+ teams. Our lookup values entity has a display text (string), value (string), IsActive and some other fields.

    In our applications we load all this information into the system when the user is logging in, or right after they log in. So in cache we have a list of groups and values that we can get at whenever we want to. We get this information in our framework simply by creating an observable collection of type LookupValue. To get a list of these values into our property all we have to do is:

        var NBATeams = AppContext.Current.LookupService.GetLookupValues("NBATeams");

    Our combobox is then bound like this (we use Telerik components in most, if not all, of our projects):

        <telerik:RadComboBox ItemsSource="{Binding NBATeams}"></telerik:RadComboBox>

    This should give you a list in your combobox. We also set up another property in our ViewModel that is just a single object of NBATeams - "SelectedNBATeam". The SelectedItem of our combobox would look like the following; we set this to a two-way binding since we are sending data back:

        SelectedItem="{Binding SelectedNBATeam, Mode=TwoWay}"

    This is all pretty straightforward and we use this pattern throughout all our applications. What do you do, though, when you have a combobox in an ItemsControl or ListBox? Here we have a list of NBA teams, as strings, being brought back from the database. We can't have the selected item be our LookupValue, because the data is a string and it's being bound in an ItemsControl. In the example above we would just have the combobox in a form; here we have it in an ItemsControl, where there is no selected item from the initial ItemsSource. In order to get the selected item to be displayed in the combobox you have to convert the LookupValue to a string. Then, instead of using SelectedItem in the combobox, use SelectedValue.

    To convert the LookupValue we do this. Create an observable collection of strings:

        public ObservableCollection<string> NBATeams { get; set; }

    Then convert your lookups to strings:

        var NBATeams = new ObservableCollection<string>(AppContext.Current.LookupService.GetLookupValues("NBATeams").Select(x => x.DisplayText));

    This gives us a list of strings, and our selected value should be bound to the NBATeam property of the items in our ItemsControl's ItemsSource:

        SelectedValue="{Binding NBATeam, Mode=TwoWay}"

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open platform and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems.

    Two videos available on YouTube walk through the process, first at an introductory conceptual level and then in a deep dive into the programming needed using JDeveloper, Oracle BPM Composer and Oracle WLS (WebLogic Server), along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site.

    Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats, with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements are created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset.

    The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven, while Java coding is limited to simply configuring values and parameters along with performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources on Oracle BPM and WLS can be referenced for other normal delivery aspects such as user management and application deployment.

    Read the article

  • SQL Saturday 194 - Exeter

    - by Dave Ballantyne
    Many kudos go to Jonathan and Annette Allen and the others on the team for confirming SQL Saturday 194 in Exeter on the 8th and 9th of March. The event home page is here http://www.sqlsaturday.com/194/eventhome.aspx and I am delighted that myself and Dave Morrison will be presenting a full-day pre-con on the 8th on favourite subjects "TSQL and Internals". Here is the full abstract:

    TSQL and internals - When faced with performance issues there are many lines of attack. Tuning the engine itself can get you so far; however, for maximum effect you need to understand how the engine works and how it translates SQL statements into performable actions. This is not a simple task: dealing with a multi-table join is a massive undertaking and the number of permutations can be immense. Backed by this knowledge, we can create better-performing TSQL, understand the impact it has upon the engine, and recognize the pitfalls and gotchas that exist in SQL Server. Ultimately, there is no 'best way' to perform a single task, only many variations of 'it depends', but now we can pick the most appropriate option for the required data load. Over the years, many myths and misconceptions have grown up around the product; some have a basis in older versions and some are just wrong. Continuing to build on the knowledge given so far, these issues will be explored, broken down, and proved or disproved. Finally we will look to the future and explore SQL Server 2012, the new functionality it brings, and some of the common uses that we will be able to address. After completing this day's pre-con, attendees will have a more complete knowledge of execution plans and how they relate to the physical and logical actions that SQL Server will be executing on their behalf. Attendees will also have a more rounded and fuller knowledge of TSQL and the implications of incorrectly defining a query.

    Dave is a fountain of knowledge on execution plans and optimizer internals and, though I may flatter myself, I'm no shrinking violet when it comes to TSQL and such matters. I hope that if you can't join us, there are other pre-cons available from other experts in their fields that may 'float your boat' too. The pre-con page is http://sqlsouthwest.co.uk/SQLSaturday_precon.htm Also, excitingly, this pre-con day is sponsored by Fusion-IO, which is a great boon for the day. If you want more of this, I am offering a two-day TSQL course starting on the 19th of March. More details on this are available here

    Read the article

  • To make or not to make...python-nautilus a dependency?

    - by George Edison
    That is the question! Okay, all silliness aside, I really am forced to make a difficult decision here. My application is written in C++ and allows other scripts to invoke methods via XML-RPC. One of these scripts is a Nautilus extension written in Python. The extension is packaged with the rest of the application and copied to the appropriate place when installed (/usr/share/nautilus-python/extensions). The problem is that the Nautilus extension requires the python-nautilus package in order to work, which leaves me with four options:

    1. Make the python-nautilus package a dependency. This option ensures that anyone who installs my package will be able to use the Nautilus extension. However, it will not be attractive to XFCE or KDE users - a ton of python-nautilus's dependencies will be installed on their machines and take up a lot of space, even if they never use Nautilus.

    2. Put the python-nautilus package in the Suggests: or Recommends: field. This option provides the end user with a way to avoid installing the python-nautilus package (by passing the --no-install-suggests or --no-install-recommends argument to apt-get). However, this won't work when the user installs the package through the Software Center. (I always get mixed up as to which of those two fields is installed by default.)

    3. Prompt the user when the application is installed or first launched. This option is more complicated than the others but offers the best compromise between making it easy for the user to install python-nautilus (without going into a technical explanation) and not installing it when the user doesn't need it (or want it). I guess the best way to implement this is a simple prompt that invokes apt-get if the user would like the package installed.

    4. Don't install the package at all. This option ensures that nobody has python-nautilus installed on their machine unless they want it. However, it also means that my Nautilus extension will simply not run on the end user's machine unless they manually install the package.

    Which of these options seems the best choice? Have I missed any pros and cons for each of the options?
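
    For reference on option 2: with apt's default configuration, packages listed in Recommends: are installed automatically (and the Software Center follows the same behaviour), while packages in Suggests: are not. A sketch of the relevant debian/control stanza is below; the package name myapp and the description text are placeholders, and only the fields relevant to this question are shown.

        Package: myapp
        Architecture: any
        Depends: ${shlibs:Depends}, ${misc:Depends}
        # Pulled in by default; users can opt out with --no-install-recommends,
        # or this line can be moved to Suggests: to keep it out of default installs.
        Recommends: python-nautilus
        Description: Example application with an optional Nautilus extension
         The Nautilus extension only activates when python-nautilus is
         present; the core application works without it.

    Moving python-nautilus down to Suggests: keeps it out of default installs entirely, which for Software Center users effectively turns option 2 into option 4.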

    Read the article
