Search Results

Search found 4772 results on 191 pages for 'complex'.


  • CSS - Positioning images next to text

    - by jpjoki
    Hi, I'm building a site in which images need to be presented next to textual content - a sort of pseudo two-column layout, since the images and text come from a single HTML source. I've found a fairly simple way to do this by putting the images in their own paragraphs and floating them. Is there a simpler way (in terms of HTML) to do this without the extra paragraphs, applying extra CSS only to the images? If the floated image sits in the same paragraph as the text, then paragraphs with and without images end up with different widths. EDIT: Basically, I'm looking for the simplest possible HTML markup to position images like this. The CSS can be complex ;)

    CSS:

        p { width: 500px; }
        p.image { float: right; width: 900px; }

    Current HTML:

        <p class="image"><img src="image.jpg" /></p>
        <p>Some text here.</p>

    Is the above possible with this HTML?

        <p><img src="image.jpg" /></p>
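
    One hedged sketch of how that last markup could work: float the img itself and pull it out of the text column with a negative margin, so every paragraph keeps the same width (the -400px figure is an assumption derived from the 500px/900px values above, and the page must leave room to the right of the paragraphs):

        p { width: 500px; }
        p img {
            float: right;
            margin-right: -400px; /* pulls the image into the right-hand column */
        }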

    Read the article

  • Refactoring: your way to reduce the code complexity of a big class with big methods

    - by Andrew Florko
    I have a legacy class that is rather complex to maintain:

        class OldClass
        {
            method1(arg1, arg2) { ... 200 lines of code ... }
            method2(arg1) { ... 200 lines of code ... }
            ...
            method20(arg1, arg2, arg3) { ... 200 lines of code ... }
        }

    The methods are huge, unstructured and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers. What would you suggest? Several ideas come to my mind:

    1) add several private helper methods to each method and group them in a #region (straightforward refactoring);
    2) use the Command pattern (one command class per OldClass method, in a separate file);
    3) create a helper static class per method, with one public method and several private helper methods - the OldClass methods delegate their implementation to the appropriate static class (very similar to commands);
    4) ?

    Thank you in advance!
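
    A minimal sketch of the command-style extraction (option 2), with purely illustrative names and structure:

        // Hypothetical extraction of one OldClass method into its own class.
        class Method1Command
        {
            private readonly int arg1;
            private readonly string arg2;

            public Method1Command(int arg1, string arg2)
            {
                this.arg1 = arg1;
                this.arg2 = arg2;
            }

            public void Execute()
            {
                var data = Load();            // former lines 1-70 of method1
                var result = Transform(data); // former lines 71-140
                Save(result);                 // former lines 141-200
            }

            private object Load() { /* ... */ return null; }
            private object Transform(object data) { /* ... */ return data; }
            private void Save(object result) { /* ... */ }
        }

        class OldClass
        {
            public void Method1(int arg1, string arg2)
            {
                new Method1Command(arg1, arg2).Execute();
            }
        }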

    Read the article

  • Typical Search, Result and Detail Workflow Staying Within an Android Tab

    - by Justin
    So, I've been banging my head looking for a good solution for a few days and am stuck. I have a search screen (Activity) in a tab, and after the user enters a value and clicks "search" I would like the results to come back in that same tab; then, if an item from the results is selected, to show more detailed results, still in that same tab. I have it all working now in separate activities, and even the first step working in a tab, but as soon as I call the activity that processes the search results - i.e. startActivity(i); for the results Activity - the results displayed are not in the tab! I am having a very difficult time getting this flow to work all under a tab. Any thoughts on how to make this happen? I keep hearing that Android views should be used instead of activities, but am I then to assume that all the logic I have right now for three activities needs to go inside one activity, and that I need to handle setting the content and state for each of these cases? Plus, won't the history stack break, since pressing the back button will take the user out of the application instead of taking them from, say, the search results to the search screen, or the details to the search results? This seems like a mess. Can anyone show a more complex example of tabs, or how one might keep a simple search, result and detail workflow inside a tab? I have seen a few questions on this concept of keeping activities "within a tab", but no good resolution. Please help.
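
    One common approach, sketched below with illustrative layout and id names: keep a single Activity in the tab, swap the three screens with a ViewFlipper, and override the back button so the flow unwinds inside the tab instead of leaving the app:

        public class SearchTabActivity extends Activity {
            private ViewFlipper flipper; // child 0 = search form, 1 = results, 2 = details

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.search_tab); // hypothetical layout with a ViewFlipper
                flipper = (ViewFlipper) findViewById(R.id.flipper);
            }

            void showResults() { flipper.setDisplayedChild(1); }
            void showDetails() { flipper.setDisplayedChild(2); }

            @Override
            public void onBackPressed() {
                // Walk back through the flow instead of leaving the application.
                int current = flipper.getDisplayedChild();
                if (current > 0) {
                    flipper.setDisplayedChild(current - 1);
                } else {
                    super.onBackPressed();
                }
            }
        }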

    Read the article

  • Unit testing a 'legacy' WPF Application

    - by sc_ray
    The product I have been working on has been in development for the past six years. It started as a generic data-entry portal into an insanely complex part-WPF/part-legacy application. The system has been developed all these years without a single unit test in its fold. Now the point has been raised for a comprehensive unit-testing framework. I was recruited recently to work on this product and have been tasked with getting the 'testing' in order. Since the team that worked on the product for the last six years adopted 'Agile', the project lacks any documentation of the business rules or any design documents. I have been trying to write unit tests for some of the modules, but I am not sure what to mock, how to set up my test fixtures, and eventually what to test for, since a casual glance at the methods does not reveal their intentions. Also, it has come to my attention that the code was not developed with any particular methodology in mind. Given the situation, I was wondering if the good people of Stack Overflow could provide me with some advice on how to salvage this situation. I have heard that the book 'Working Effectively with Legacy Code' has something to say about this general situation, but I was hoping for some pointers from individuals who have encountered similar situations within this technology stack (C#, VB, C++, .NET 3.5, WCF, SQL Server 2005).
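
    One hedged starting point, taken from that book's playbook: characterization tests, which pin down what a method currently does before any refactoring. A minimal NUnit-style sketch, with a hypothetical class under test:

        [Test]
        public void CalculatePremium_PinsDownCurrentBehaviour()
        {
            // No spec exists, so the expected value below is recorded from a
            // first run of the existing code, not derived from a requirement.
            var calculator = new PremiumCalculator(); // hypothetical class under test
            var premium = calculator.Calculate(5, "B");
            Assert.AreEqual(1234.56m, premium);
        }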

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here. I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables: USERS, GROUPS and GROUP_USERS. The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns: USER_ID and GROUP_ID. Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I just give them a UNIQUE constraint, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems like a lot of indexes and relations for a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's just numbers anyway, but it feels quite wasteful to have a full-sized index referring to such a small table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there another solution I should be aware of? PS. I don't want to use an array column, even though they are supported by PostgreSQL: I want to stay as generic as possible so I can switch databases later on, if necessary. EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
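
    A minimal sketch of the compound-key layout in question (column types and referenced key names are assumptions):

        CREATE TABLE group_users (
            user_id  integer NOT NULL REFERENCES users (user_id),
            group_id integer NOT NULL REFERENCES groups (group_id),
            PRIMARY KEY (user_id, group_id)
        );

        -- The primary key already covers lookups that start from user_id;
        -- a second index helps queries that start from the group side.
        CREATE INDEX group_users_group_id_idx ON group_users (group_id);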

    Read the article

  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products through various channels, including:

    1) AJAX calls that return data in cool, JavaScripty ways;
    2) iPhone applications that use that data;
    3) other web applications that use the data for their own ends.

    Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to see who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off. My next logical step would be to create a developer key to restrict access - which would work fine for web apps, but not so much for AJAX calls. (As I see it, the key would have to appear in the JavaScript, which is plaintext and easily seen, so there's effectively no security at all - particularly if we'd be using our own developer key on our site to make those AJAX calls.) So my question: after looking at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practice" for developer keys, can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?
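
    One widely used pattern, sketched below in Python purely as an illustration (the key store and message format are assumptions): issue each consumer a key ID plus a shared secret, have them sign each request with an HMAC and a timestamp, and verify server-side - which also gives a per-key hook for rate limiting. Note this still cannot hide a secret embedded in browser JavaScript; the usual answer there is to proxy those AJAX calls through your own server session.

        import hashlib
        import hmac
        import time

        SECRETS = {"partner-42": b"s3cret"}  # illustrative key store

        def sign(secret, method, path, timestamp):
            message = "{0}\n{1}\n{2}".format(method, path, timestamp).encode("utf-8")
            return hmac.new(secret, message, hashlib.sha256).hexdigest()

        def verify(key_id, method, path, timestamp, signature, max_skew=300):
            secret = SECRETS.get(key_id)
            if secret is None or abs(time.time() - float(timestamp)) > max_skew:
                return False
            expected = sign(secret, method, path, timestamp)
            return hmac.compare_digest(expected, signature)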

    Read the article

  • Are workflows good for web service business logic?

    - by JL
    I have a series of complex web services that are used in my SOA application. I am generally happy with the overall design of the application, but as the complexity grows, I have been wondering if Windows Workflow might be the way to go. My motivation is that you get a graphic representation of the application's functionality, so it would be easier to maintain the code by its business function rather than with what I have now (a standard 3-tier class library structure). My concerns are: I would be introducing an abstraction into my code, and I don't want to spend time dealing with possible WF quirks or bugs. I've never worked with WF - is it a solid technology? I don't want to hit any WF limitations that prevent me from developing my solution. Is WF even the right solution for the task? Simply put, I am considering having the next web service in this app call a WF, and managing the tasks the web service needs to carry out inside that workflow. I think it will be much neater and easier to maintain than a regular C# class library (maintainable by namespaces and classes). Do you think this is the right thing to do? I'm hoping for positive feedback on WF (.NET 4), but at the end of the day brutal honesty would help more. Thanks

    Read the article

  • Why does reusing arrays increase performance so significantly in C#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array:

        double[] tempDoubleArray = new double[M];

    M is a large number depending on the precise task, typically around 2,000,000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of the task. After that, tempDoubleArray goes out of scope. Profiling reveals that the calls to construct the arrays are time-consuming. So, I decided to try to reuse the array by making it static. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now the program is much faster (from 80 sec to 22 sec for execution of all tasks):

        double[] tempDoubleArray = staticDoubleArray;

    However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in which cases allocation gives performance issues.
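
    A plausible explanation, stated as a sketch rather than a measurement: a 2,000,000-element double[] is roughly 16 MB, far above the ~85 KB threshold at which .NET allocates objects on the large object heap, where every allocation must be zero-filled and space is only reclaimed by expensive generation-2 collections. A minimal grow-only reuse pattern that avoids the extra sizing pass (note it is not thread-safe and leaves stale data in the unused tail):

        static double[] buffer;

        static double[] GetBuffer(int minimumLength)
        {
            // Grow-only cache: reallocate only when a task needs a bigger array.
            if (buffer == null || buffer.Length < minimumLength)
                buffer = new double[minimumLength];
            return buffer;
        }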

    Read the article

  • Dynamic Data Extract Tools

    - by Kevin McGovern
    I've been searching around for a few weeks now for either a finished tool, or a direction for something I could build, for dynamically extracting data via a web interface. Basically, what I'm looking for is a way to give users a list of all available data objects from our database, let them pick the ones they'd like to view and set parameters, and then export the results to an Excel file. Right now we're doing it purely with SQL statements, but we have hundreds of objects, so as you might imagine those statements are really complex and prone to errors. It would be great if there were a tool available to do this, or if someone had an idea of an easy way to organize it. Any help would be greatly appreciated. We've looked at BI tools like QlikView and Tableau, but they are probably overkill for what we're trying to do. The open-source BI tools we've looked at seemed really primitive in their functionality. The other thing we looked at was MSAS (our DB is SQL Server), but I'd prefer something that is more database-agnostic and lives on a web server instead of on the database server.

    Read the article

  • Modeling Buyers & Sellers in a Rails Ecommerce App

    - by MikeH
    I'm building a Rails app that has Etsy.com-style functionality. In other words, it's like a mall: there are many buyers and many sellers. I'm torn about how to model the sellers. Key facts: there won't be many sellers, perhaps fewer than 20 in total; there will be many buyers, hopefully many thousands :) I already have a standard user model in place with account creation and roles, and I've created a 'seller' role, which the admin will manually apply to the proper users. Since we'll have very few sellers, this is not an issue. I'm considering two approaches:

    (1) Create a 'store' model, which will contain all the relevant store information. Products would :belong_to :store rather than belonging to the seller. The relationship between the user and store models would be: user :has_one store. My main problem with this is that I've always found has_one associations to be a little funky, and I usually try to avoid them. The app is fairly complex, and I'm worried about running into a cascade of problems connected to the has_one association as I get further into development.

    (2) Simply include the relevant 'store' information as part of the user model. But in this case, the store-related db columns would only apply to a very small percentage of users, since very few users will also be sellers. I'm not sure if this is a valid concern or not.

    It's very possible that I'm thinking about this incorrectly. I appreciate any thoughts. Thanks.
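
    For reference, a minimal sketch of option (1) in Rails 2.x-era syntax (model and association names are illustrative):

        class User < ActiveRecord::Base
          has_one :store   # only users with the 'seller' role would actually have one
        end

        class Store < ActiveRecord::Base
          belongs_to :user
          has_many :products
        end

        class Product < ActiveRecord::Base
          belongs_to :store
        end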

    Read the article

  • Accessing both stored procedure output parameters AND the result set in Entity Framework?

    - by MS.
    Is there any way to access both a result set and output parameters from a stored procedure added as a function import in an Entity Framework model? I am finding that if I set the return type to "None", so that the designer-generated code ends up calling base.ExecuteFunction(...), I can access the output parameters fine after calling the function (but of course not the result set). Conversely, if I set the return type in the designer to a collection of complex types, the designer-generated code calls base.ExecuteFunction<T>(...) and the result set is returned as ObjectResult<T>, but then the Value property of the ObjectParameter instances is null rather than containing the proper value that I can see being passed back in Profiler. I speculate the second method is perhaps using a DataReader and not closing it. Is this a known issue? Any workarounds or alternative approaches?

    EDIT: My code currently looks like

        public IEnumerable<FooBar> GetFooBars(
            int? param1, string param2,
            DateTime from, DateTime to,
            out DateTime? createdDate, out DateTime? deletedDate)
        {
            var createdDateParam = new ObjectParameter("CreatedDate", typeof(DateTime));
            var deletedDateParam = new ObjectParameter("DeletedDate", typeof(DateTime));

            var fooBars = MyContext.GetFooBars(param1, param2, from, to,
                createdDateParam, deletedDateParam);

            createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value
                ? null : createdDateParam.Value);
            deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value
                ? null : deletedDateParam.Value);

            return fooBars;
        }
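
    For what it's worth, the usual explanation matches the speculation above: output parameter values only arrive after the result set has been fully read and the underlying reader closed. A hedged fix is to materialize the results before touching the ObjectParameter values:

        var fooBars = MyContext.GetFooBars(param1, param2, from, to,
            createdDateParam, deletedDateParam).ToList(); // drains and closes the reader

        // Only after full enumeration are createdDateParam.Value and
        // deletedDateParam.Value populated.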

    Read the article

  • How to ignore validation of unknown tags?

    - by infant programmer
    One more challenge to XSD's capabilities: my clients send me XML files which may contain zero or more undefined or (call them) unexpected tags, possibly anywhere in the hierarchy. They are redundant tags for me, so I have to ignore their presence, but among them there is a set of tags which does need to be validated. This is a sample XML:

        <root>
            <undefined_1>one</undefined_1>
            <undefined_2>two</undefined_2>
            <node>to_be_validated</node>
            <undefined_3>two</undefined_3>
            <undefined_4>two</undefined_4>
        </root>

    And the XSD I tried:

        <xs:element name="root" type="root"></xs:element>
        <xs:complexType name="root">
            <xs:sequence>
                <xs:any maxOccurs="2" minOccurs="0"/>
                <xs:element name="node" type="xs:string"/>
                <xs:any maxOccurs="2" minOccurs="0"/>
            </xs:sequence>
        </xs:complexType>

    XSD doesn't allow this, for certain reasons. The above is just a sample; the real XML comes with a complex hierarchy of tags. Kindly let me know if you can see a way around it. By the way, the alternative solution is to insert an XSL transformation before the validation process. I am avoiding that because it means changing the .NET code that triggers validation, which my company barely supports.
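
    The "certain reasons" are XSD 1.0's Unique Particle Attribution rule: on seeing <node>, a validator cannot tell whether it matches the wildcard or the declared element. If a validator that supports XSD 1.1 (e.g. Xerces or Saxon EE) is an option, open content is designed for exactly this case - a hedged sketch:

        <xs:complexType name="root">
            <xs:openContent mode="interleave">
                <xs:any processContents="skip"/>
            </xs:openContent>
            <xs:sequence>
                <xs:element name="node" type="xs:string"/>
            </xs:sequence>
        </xs:complexType>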

    Read the article

  • C++ AMP: converting for loops to a parallel_for_each loop

    - by user1430335
    I'm converting an algorithm to make use of the massive acceleration that C++ AMP provides. The stage I'm at is putting the for loops into the well-known parallel_for_each loop. Normally this should be a straightforward task, but it appears more complex than I first thought. It's a nested loop which I increment in steps of 4 per iteration:

        for(int j = 0; j < height; j += 4, data += width * 4 * 4)
        {
            for(int i = 0; i < width; i += 4)
            {

    The trouble I'm having is the use of the index. I can't seem to find a way to properly fit this into the parallel_for_each loop. Using an index of rank 2 is the way to go, but manipulating it via branching will harm the performance gain. I found a similar post, "Controlling the index variables in C++ AMP"; it also deals with index manipulation, but the increment aspect doesn't cover my issue. With kind regards, Forcecast
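
    A hedged sketch of the usual rewrite (assuming height and width are multiples of 4, and that data has been wrapped in a concurrency::array_view so it is usable inside the kernel): launch one thread per 4x4 block by shrinking the extent, then scale the index back up inside the kernel, so no branching is needed:

        #include <amp.h>
        using namespace concurrency;

        extent<2> blocks(height / 4, width / 4);
        parallel_for_each(blocks, [=](index<2> idx) restrict(amp)
        {
            int j = idx[0] * 4;  // first row of this 4x4 block
            int i = idx[1] * 4;  // first column of this 4x4 block
            // ... former loop body for (j, i) goes here ...
        });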

    Read the article

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following 3 bootloaders: an infinite loop (jmp $), printing a single character, and printing "Hello World". This is fantastic, and I've gone through these 3 variations with very little trouble. I'd like to write some 32- or 64-bit code in C, compile it, and call that code from the bootloader... basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe some input/output from the user, to maybe compute a Fourier transform. I don't know. I haven't found any information on how to do this, but I can already foresee some problems before I even begin. First of all, compiling a C program produces one of several different file formats, depending on the target. For Windows, it's a PE file; for Linux, it's an ELF or a.out file. These files are quite different from each other. In my case, the target isn't Windows or Linux; it's just whatever I have written in the bootloader. Secondly, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load it from that file into memory before I can even think about executing it. From my understanding, all this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader. So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?
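
    A hedged outline of the standard hobby-OS route (the flags below are typical, not a tested recipe): compile the C code freestanding, link it as a flat binary at a fixed address, have the bootloader pull it off the disk with BIOS INT 13h into that address, switch to protected mode, and jump there.

        # no OS headers, no C runtime
        gcc -m32 -ffreestanding -fno-pic -nostdlib -c kernel.c -o kernel.o
        # flat binary, code placed at the address the bootloader loads it to
        ld -m elf_i386 -Ttext 0x10000 --oformat binary -o kernel.bin kernel.o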

    Read the article

  • Do I need a spatial index in my database?

    - by Sanoj
    I am designing an application that needs to save geometric shapes in a database. I haven't chosen the database management system yet. In my application, all database queries will have a bounding box as input, and as output I want all shapes within that box. I know that databases with a spatial index are used for this kind of application. But in my application there will not be any queries of the type "give me objects near x/y", or the other more complex queries that are useful in a GIS application. I am planning on having a database without a spatial index, with queries looking like:

        SELECT * FROM shapes
        WHERE x < max_x AND x > min_x
          AND y < max_y AND y > min_y

    and an index on the columns x (double) and y (double). As far as I can see, I don't really need a database with a spatial index, even though my application is close to that kind of application. And even if I did want nearby queries, I could create a big enough bounding box around the point. Or will this lead to poor performance? Do I really need a spatial database? And when is a spatial index needed?
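
    A hedged note on why this matters: a B-tree index on (x, y) can only narrow the scan by the x range; every matching x still has to be filtered on y row by row, whereas a spatial (R-tree-style) index prunes both dimensions at once. If PostgreSQL ends up being the choice, recent versions ship a GiST operator class for the built-in point type, so no PostGIS is needed - a sketch, assuming one representative point per shape:

        CREATE INDEX shapes_pos_idx ON shapes USING gist (point(x, y));

        SELECT * FROM shapes
        WHERE point(x, y) <@ box(point(min_x, min_y), point(max_x, max_y));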

    Read the article

  • HTML columns or rows for form layout?

    - by Valera
    I'm building a bunch of forms that have labels and corresponding fields (an input element or something more complex). Labels go on the left, fields go on the right. Labels in a given form should all have a specific width so that the fields all line up vertically. There are two ways (maybe more?) of achieving this:

    1) Rows: float each label and each field left, put each label/field pair in a field-row div/container, and set the label width to some specific number. With this approach, labels on different forms will have different widths, because they'll depend on the width of the longest label text.

    2) Columns: put all labels in one div/container that's floated left, and put all fields in another floated-left container with padding-left set. This way the labels - and even the label container - don't need their widths set, because the column layout and the padding-left uniformly take care of lining up all the fields vertically.

    So approach #2 seems easier to implement (because the widths don't need to be set all the time), but I think it's also less object-oriented, because a label and the field that goes with it are not grouped together as they are in approach #1. Also, when building forms dynamically, approach #2 doesn't work as well with functions like addRow(label, field), since addRow would have to know about both the label and field containers instead of just creating and adding one field-row element. Which approach do you think is better? Is there another, better approach than these two?
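
    A minimal sketch of the row-based markup in approach #1, with illustrative class names:

        <div class="field-row">
            <label for="name">Name</label>
            <input type="text" id="name" />
        </div>

        .field-row label {
            float: left;
            width: 12em;  /* one width per form keeps all its fields aligned */
        }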

    Read the article

  • How can I get the search parameters from jqGrid on the server side?

    - by Jack
    I've been visiting this forum a lot without registering for several months now, and I really like it. So, thanks in advance to all the members. Now I'd like to ask my first question. I've been using jqGrid for a little while, and I've managed to have it display rows and buttons, but now I need to do a search - a complex one - and I thought that jqGrid would "automatically" send the search parameters to the server, I mean: sField, searchField, sOper, searchOper, sValue, searchString, sFilter and/or filters. I'm not sure which ones it is supposed to send, and I thought it would work just the same as it does for 'page', 'rows' and 'sord'. But I'm missing something, because, for example, I can get 'page', 'rows' and 'sord' using:

        $limit = $this->getRequest()->getParam('rows', 10);

    but I get nothing by using:

        $params = $_REQUEST['filters'];

    or

        $params = $this->getRequest()->getParam('sFilter');

    I'm using PHP, Zend and JSON. I didn't post any code because my doubt is kind of generic, but I will do it if needed. I've searched a lot and read the documentation, but I just don't see it. I will appreciate your help, thanks!
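
    A hedged sketch of the server side: jqGrid only posts the search parameters once a search has actually been triggered, flagging it with _search=true; the multi-field search dialog sends its conditions as a JSON string in 'filters', while the single-field search uses searchField / searchOper / searchString instead:

        $isSearch = $this->getRequest()->getParam('_search') === 'true';
        if ($isSearch) {
            // Multi-field search arrives as a JSON object like:
            // {"groupOp":"AND","rules":[{"field":"name","op":"eq","data":"foo"}]}
            $filters = json_decode($this->getRequest()->getParam('filters', ''), true);
        }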

    Read the article

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language, but I have not come across anything about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone, and you can probably never know everything about a language. I believe that in most cases the following things are enough to include the language in your resume and say that you are proficient in it (the list is in no particular order):

    - know its syntax, so you can write a simple program in it;
    - compare its underlying concepts with the concepts of other languages;
    - know best practices;
    - know what libraries are available;
    - know in what situations to use it;
    - understand the flow of a more complex program;
    - at least know most of what you do not know.

    I would probably look for a good book and pick an open-source project for each of these languages to start with. My questions: Is it best to spend 5 months learning language #1 and then 5 months learning language #2, or should you mix the two - by mixing I mean working on them in parallel? Should you pick two languages that are similar or different? Are there any advantages or disadvantages to, say, learning Lisp in tandem with Ruby? Is it a good idea to pick two languages with similar syntax, or would that be too confusing? Please tell me what your experiences are. Does it make a difference if you are a beginner or a senior programmer?

    Read the article

  • PeopleSoft queries - performance

    - by DBa
    Hi, I'm facing a problem with PeopleSoft queries (on an Oracle backend database): when a rather complex query involving multiple records is set off by a user, PS does an enforced join with the security records, producing SQL like this:

        select ....
        from ps_job a, PS_EMPL_SRCQRY a1,
             ps_table2 b, ps_sec_rcd2 b1,
             ps_table3 c, ps_sec_rcd3 c1
        where (...security joins a-a1, b-b1, c-c1...)
          and (...joins of a, b and c...)
          and a.setid_dept = 'XYZ';

    (let's assume the last condition is highly selective and there is an index on the column) Obviously, due to the arrangement of the conditions, first a huge join is created and written to the temp segment, and only when the last condition is finally applied is a small subset selected. A query formulated this way is very likely to hit the preset timeout of the APPSRV, and even of the QRYSRV. If I were writing the SQL manually, I would move the most selective condition to the start, limiting the amount of data being handled considerably. Any ideas on how to make PS behave like this? Interestingly, merely rewriting the "Oracle-style" SQL as ANSI SQL already seems to accelerate the queries - however, PS generates Oracle-style queries... Thanks in advance, DBa

    Read the article

  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time-series data frame with, let's say, 100 time series of length 600, each in one column of the data frame. I want to pick 4 of the time series randomly and then assign them random weights that sum up to one (e.g. 0.1, 0.5, 0.3, 0.1). Using those, I want to compute the mean of the sum of the 4 weighted time series (i.e. a convex combination). I want to do this, say, 100k times and store each result in the form

        ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean

    so that I get a 9 x 100k data frame. I tried some things already, but R is very bad with loops and I know vector-oriented solutions are better because of R's design. Thanks. Here is what I did, and I know it is horrible. The df is in the form

        v1,v2,...,v100
        1,5,6,.......9
        2,4,6,......10
        3,5,8,.......6
        2,2,8,.......2

    etc.

        e <- NULL
        for (x in 1:100000) {
          s <- sample(1:100, 4)                # pick 4 variables randomly
          a <- sample(seq(0, 1, 0.01), 1)
          b <- sample(seq(0, 1 - a, 0.01), 1)
          c <- sample(seq(0, (1 - a - b), 0.01), 1)
          d <- 1 - a - b - c
          w <- c(a, b, c, d)                   # 4 random weights
          average <- mean(as.matrix(timeseries.df[, s]) %*% w)
          e <- rbind(e, c(s, w, average))      # in the end I get the 9 x 100k df
        }

    The procedure runs way too slow. EDIT: Thanks for the help so far. I am not used to thinking in R, or to translating every problem into a matrix-algebra equation, which is what you need here. The problem becomes a little more complex if I want to calculate the standard deviation: I need the covariance matrix, and I am not sure if/how I can pick the random elements for each sample from the original timeseries.df covariance matrix and then compute the sample variance

        t(sampleweights) %*% sample_cov.mat %*% sampleweights

    to get, in the end, the ts.weighted_standard_dev matrix. Last question: what is the best way to proceed if I want to bootstrap the original df x times and then apply the same computations, to test the robustness of my data? Thanks.
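
    A hedged sketch of one faster formulation (it draws the indices and weights up front and avoids growing the result with rbind, which is the main cost above; the weights here come from sorted uniforms rather than a 0.01 grid):

        n <- 100000
        picks   <- t(replicate(n, sample(ncol(timeseries.df), 4)))   # n x 4 column indices
        weights <- t(apply(matrix(runif(n * 3), n, 3), 1,
                           function(u) diff(c(0, sort(u), 1))))      # n x 4, rows sum to 1

        ts.mat <- as.matrix(timeseries.df)
        means  <- vapply(seq_len(n), function(k)
                    mean(ts.mat[, picks[k, ]] %*% weights[k, ]), numeric(1))

        result <- data.frame(picks, weights, mean = means)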

    Read the article

  • Python: How to transfer varying-length arrays over a network connection

    - by Devin
    Hi, I need to transfer an array of varying length in which each element is a tuple of two integers. For example:

        path = [(1,1),(1,2)]
        path = [(1,1),(1,2),(2,2)]

    I am trying to use pack and unpack; however, since the array is of varying length, I don't know how to create a format such that both sides know it. I was trying to turn it into a single string with delimiters, such as:

        msg = "1&1~1&2~"
        sendMsg = pack("s", msg)

    or

        sendMsg = pack("s", str(msg))

    and on the receiving side:

        path = unpack("s", msg)

    but that just prints 1 in this case. I was also trying to send 4 integers as well, which send and receive fine, so long as I don't include the extra string representing the path:

        sendMsg = pack("hhhh", p.direction[0], p.direction[1], p.id, p.health)

    and on the receiving side:

        x, y, id, health = unpack("hhhh", msg)

    The first was for illustration, as I was trying to send the format "hhhhs", but either way the path doesn't come through properly. Thank you for your help. I will also be looking at sending a 2D array of ints, but I can't seem to figure out how to send these more 'complex' structures across the network.
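
    A minimal sketch of the usual fix: prefix the payload with a count, then pack that many pairs (network byte order assumed):

        import struct

        def pack_path(path):
            # 4-byte count, then one "hh" pair per waypoint
            msg = struct.pack("!I", len(path))
            for x, y in path:
                msg += struct.pack("!hh", x, y)
            return msg

        def unpack_path(msg):
            (count,) = struct.unpack("!I", msg[:4])
            return [struct.unpack("!hh", msg[4 + 4*i : 8 + 4*i])
                    for i in range(count)]

        assert unpack_path(pack_path([(1, 1), (1, 2), (2, 2)])) == [(1, 1), (1, 2), (2, 2)]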

    Read the article

  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one. Background: I'm creating a large number of objects in a Core Data store, using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that, on save, merges back into the main managed object context. The problem: when more than one thread is running at once, I get the following exception logged on save of my Core Data store:

        NSExceptionHandler has recorded the following exception:
        NSInternalInconsistencyException -- optimistic locking failure

    What I've tried: my code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object-creation routine with some very simple code just making unrelated entities, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up and, as the question linked above states, the save error goes away, but the exception remains. My question: do I need to worry about these exceptions? If I do need to get rid of them, any ideas on how?
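
    For reference, a one-line sketch of the merge-policy setup mentioned above; the same policy usually has to be set on every context that saves, not just the main one:

        [managedObjectContext setMergePolicy:NSMergeByPropertyObjectTrumpMergePolicy];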

    Read the article

  • Is there a free tool which can help visualize the logic of a stored procedure in SQL Server 2008 R2?

    - by Hamish Grubijan
    I would like to be able to plot a call graph of a stored procedure. I am not interested in every detail, and I am not concerned with dynamic SQL (although it would be cool to detect it and maybe skip it or mark it as such). I would like the tool to generate a tree for me, given the server name, db name and stored proc name - a "call tree" which includes:

    - the parent stored procedure;
    - every other stored procedure being called, as a child of its caller;
    - every table being modified (updated or deleted from), as a child of the stored proc which does it.

    Hopefully it is clear what I am after; if not - please do ask. If there is no tool that can do this, then I would like to try to write one myself. Python 2.6 is my language of choice, and I would like to use standard libraries as much as possible. Any suggestions? EDIT (for the purposes of the bounty): Warning: SQL syntax is COMPLEX. I need something that can parse all kinds of SQL 2008, even if it looks stupid. No corner cases barred :)
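
    A hedged starting point using only the standard library plus a database driver (shown with pyodbc, which would be the one non-standard dependency): walk sys.sql_modules and grep each procedure body for EXEC calls; table modifications could be collected the same way with a second pattern. A regex will of course not survive all of SQL 2008 - comments, strings and dynamic SQL will fool it - so this is a sketch, not the parser the bounty asks for:

        import re

        CALL_RE = re.compile(r'\bEXEC(?:UTE)?\s+([\w\[\]\.]+)', re.IGNORECASE)

        def get_body(cursor, proc):
            cursor.execute(
                "SELECT definition FROM sys.sql_modules "
                "WHERE object_id = OBJECT_ID(?)", proc)
            row = cursor.fetchone()
            return row[0] if row else ""

        def call_tree(cursor, proc, depth=0, seen=None):
            seen = seen or set()
            print "  " * depth + proc          # Python 2.6, per the question
            if proc in seen:
                return
            seen.add(proc)
            for child in CALL_RE.findall(get_body(cursor, proc)):
                call_tree(cursor, child, depth + 1, seen)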

    Read the article

  • Multi-user job/task tracking/queue software

    - by Bmsgaffer86
    Background: I test and repair electronic products with a team. There are many 'jobs' going through my lab at any point in time. It is getting difficult to track what's coming in and going out, because I don't do every test or repair myself. Target: a user can enter a job when they drop it off in my lab, and it will appear on the master list or queue. It needs to have priorities and due dates that can be adjusted by users. Ideally this would be web-based and open-source, but I am flexible. Dream: a large monitor displaying the list of jobs in the master queue, with details. This is very optional though, and would only be in the best-case scenario. I have done MANY hours of Googling, and I am not sure if I have been using the right terminology, but I have not found anything that is simple enough to stand alone yet complex enough to be multi-user. I am mildly proficient in VB, and have the drive to piece together anything I have to. I am open to ANY help or suggestions.

    Read the article

  • Best ways to format LINQ queries.

    - by Aren B
    Before you ignore / vote-to-close this question, I consider it a valid question to ask because code clarity is an important topic of discussion; it's essential to writing maintainable code, and I would greatly appreciate answers from those who have come across this before. I've recently run into this problem: LINQ queries can get pretty nasty real quick because of the large amount of nesting. Below are some examples of the differences in formatting that I've come up with (for the same relatively non-complex query).

    No Formatting:

        var allInventory = system.InventorySources.Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }).GroupBy(i => i.Region, i => i.Inventory);

    Elevated Formatting:

        var allInventory = system.InventorySources
            .Select(src => new
                {
                    Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                    Region = src.Value.Region
                })
            .GroupBy(
                i => i.Region,
                i => i.Inventory);

    Block Formatting:

        var allInventory = system.InventorySources
            .Select(
                src => new
                {
                    Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                    Region = src.Value.Region
                })
            .GroupBy(
                i => i.Region,
                i => i.Inventory
            );

    List Formatting:

        var allInventory = system.InventorySources
            .Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region })
            .GroupBy(i => i.Region, i => i.Inventory);

    I want to come up with a standard for LINQ formatting that maximizes readability and understanding, and looks clean and professional. So far I can't decide, so I turn the question over to the professionals here.

    Read the article
