Search Results

Search found 3568 results on 143 pages for 'cost savings'.

  • Dual usage of asp.net mvc and php under same domain

    - by jim
    Hello all, I've got a scenario where we have a customer with a Linux-hosted PHP app (Joomla) that they wish to integrate with some back-end ASP.NET MVC functionality that was created for a 'sister' site. Basically, the MVC site has price and stock-availability methods which (in the sister site) populate dropdown lists and other 'order'-style info on the pages. I've been tasked with looking at the integration options to allow the PHP site to use this info as a 'service' (as ever, these guys are looking at cost of ownership, maintenance etc., so this is their preferred route). Has anyone done anything similar with success? I'd imagine (much like the sister site) liberal doses of AJAX will be employed to populate portions of the page on demand, so this may have a bearing on any suggestions you may have. Also, the methods being called ultimately end up populating the same database, so there are no issues with correlating the IDs across the different platforms. I don't really want to go down any 'iframe'-type route if at all possible, though reality may dictate this as being an option. I'm possibly (naively) imagining that I could simply invoke the MVC functions directly from the PHP app with some sort of 'session' variable being passed for authentication. Pretty tall order, or pretty straightforward? Cheers, jim
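
    A sketch of one workable shape for the 'service' (every name below is invented, not from the post): publish the MVC methods as JSON endpoints guarded by a shared API key, then let Joomla call them server-side with cURL or straight from the browser with AJAX -- no iframe, and no attempt to share a session across the two stacks:

        using System.Configuration;
        using System.Web;
        using System.Web.Mvc;

        public class CatalogController : Controller
        {
            // GET /catalog/stocklevel?sku=ABC123&apiKey=secret
            public JsonResult StockLevel(string sku, string apiKey)
            {
                // A shared secret is simpler than carrying a PHP session across stacks.
                if (apiKey != ConfigurationManager.AppSettings["PartnerApiKey"])
                    throw new HttpException(401, "Unauthorized");

                int available = 42; // call the existing stock-availability method here
                return Json(new { sku = sku, available = available },
                            JsonRequestBehavior.AllowGet);
            }
        }

    The PHP side then just fetches the URL (with cURL or AJAX) and json_decodes the body.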

  • What are the worst examples of moral failure in the history of software engineering?

    - by Amanda S
    Many computer science curricula include a class or at least a lecture on disasters caused by software bugs, such as the Therac-25 incidents or Ariane 5 Flight 501. Indeed, Wikipedia has a list of software bugs with serious consequences, and a question on StackOverflow addresses some of them too. We study the failures of the past so that we don't repeat them, and I believe that rather than ignoring them or excusing them, it's important to look at these failures squarely and remind ourselves exactly how the mistakes made by people in our profession cost real money and real lives. By studying failures caused by uncaught bugs and bad process, we learn certain lessons about rigorous testing and accountability, and we make sure that our innocent mistakes are caught before they cause major problems. There are less innocent kinds of failure in software engineering, however, and I think it's just as important to study the serious consequences caused by programmers motivated by malice, greed, or just plain amorality. Thus we can learn about the ethical questions that arise in our profession, and how to respond when we are faced with them ourselves. Unfortunately, it's much harder to find lists of these failures--the only one I can come up with is that apocryphal "DOS ain't done 'til Lotus won't run" story. What are the worst examples of moral failure in the history of software engineering?

  • postgres - ERROR: operator does not exist

    - by cino21122
    Again, I have a function that works fine locally, but moving it online yields a big fat error. Taking a cue from a response in which someone had pointed out that the number of arguments I was passing wasn't accurate, I double-checked in this situation to be certain that I am passing 5 arguments to the function itself.

        Query failed: ERROR: operator does not exist: point <@> point
        HINT: No operator matches the given name and argument type(s).
        You may need to add explicit type casts.

    The query is this:

        BEGIN;
        SELECT zip_proximity_sum('zc',
            (SELECT g.lat FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            (SELECT g.lon FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            (SELECT m.zip FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
            10);

    The PG function is this:

        CREATE OR REPLACE FUNCTION zip_proximity_sum(refcursor, numeric, numeric, character, numeric)
        RETURNS refcursor AS
        $BODY$
        BEGIN
            OPEN $1 FOR
                SELECT r.zip, point($2, $3) <@> point(g.lat, g.lon) AS distance
                FROM geocoded g
                LEFT JOIN masterfile r ON g.recordid = r.id
                WHERE geo_distance(point($2, $3), point(g.lat, g.lon)) < $5
                ORDER BY r.zip, distance;
            RETURN $1;
        END;
        $BODY$
        LANGUAGE 'plpgsql' VOLATILE COST 100;
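
    For what it's worth, point <@> point and geo_distance() are not built-ins; they come from the contrib module earthdistance, so an 'operator does not exist' error on a hosted server usually just means the module was never installed there. A sketch (PostgreSQL 9.1+ syntax; older servers run the earthdistance contrib SQL script instead):

        CREATE EXTENSION cube;          -- earthdistance depends on cube
        CREATE EXTENSION earthdistance; -- defines point <@> point and geo_distance()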

  • SQL Server search filter and order by performance issues

    - by John Leidegren
    We have a table-valued function that returns a list of people you may access, and we have a relation between a search and a person called search result. What we want to do is select all the people from the search and present them. The query looks like this:

        SELECT qm.PersonID, p.FullName
        FROM QueryMembership qm
        INNER JOIN dbo.GetPersonAccess(1) ON GetPersonAccess.PersonID = qm.PersonID
        INNER JOIN Person p ON p.PersonID = qm.PersonID
        WHERE qm.QueryID = 1234

    There are only 25 rows with QueryID = 1234, but there are almost 5 million rows total in the QueryMembership table. The Person table has about 40K people in it. QueryID is not a PK, but it is an index. The query plan tells me 97% of the total cost is spent doing a "Key Lookup" with the seek predicate:

        QueryMembershipID = Scalar Operator (QueryMembership.QueryMembershipID as QM.QueryMembershipID)

    Why is the PK in there when it's not used in the query at all? And why is it taking so long? The number of people totals 25; with the index, this should be a table scan for all the QueryMembership rows that have QueryID = 1234 and then a JOIN on the 25 people that exist in the table-valued function, which btw only has to be evaluated once and completes in less than 1 second.
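
    On the first question: the key lookup's seek predicate shows the clustering key (QueryMembershipID) because that is how a nonclustered index row points back at its base row -- the leaf of the QueryID index stores the clustering key, and fetching PersonID means following it once per matching row. A covering index makes the lookup disappear entirely (a sketch; the index name is invented):

        CREATE NONCLUSTERED INDEX IX_QueryMembership_QueryID_Covering
            ON QueryMembership (QueryID)
            INCLUDE (PersonID);

    With PersonID carried inside the index, the seek on QueryID = 1234 returns everything the query needs without touching the clustered index at all.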

  • Why might a System.String object not cache its hash code?

    - by Dan Tao
    A glance at the source code for string.GetHashCode using Reflector reveals the following (for mscorlib.dll version 4.0):

        public override unsafe int GetHashCode()
        {
            fixed (char* str = ((char*) this))
            {
                char* chPtr = str;
                int num = 0x15051505;
                int num2 = num;
                int* numPtr = (int*) chPtr;
                for (int i = this.Length; i > 0; i -= 4)
                {
                    num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
                    if (i <= 2)
                    {
                        break;
                    }
                    num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1];
                    numPtr += 2;
                }
                return (num + (num2 * 0x5d588b65));
            }
        }

    Now, I realize that the implementation of GetHashCode is not specified and is implementation-dependent, so the question "is GetHashCode implemented in the form of X or Y?" is not really answerable. I'm just curious about a few things: If Reflector has disassembled the DLL correctly and this is the implementation of GetHashCode (in my environment), am I correct in interpreting this code to indicate that a string object, based on this particular implementation, would not cache its hash code? Assuming the answer is yes, why would this be? It seems to me that the memory cost would be minimal (one more 32-bit integer, a drop in the pond compared to the size of the string itself) whereas the savings would be significant, especially in cases where, e.g., strings are used as keys in a hashtable-based collection like a Dictionary<string, [...]>. And since the string class is immutable, it isn't like the value returned by GetHashCode will ever even change. What could I be missing?

    UPDATE: In response to Andras Zoltan's closing remark:

        There's also the point made in Tim's answer (+1 there). If he's right, and I think he is, then there's no guarantee that a string is actually immutable after construction, therefore to cache the result would be wrong.

    Whoa, whoa there! This is an interesting point to make (and yes it's very true), but I really doubt that this was taken into consideration in the implementation of GetHashCode. The statement "therefore to cache the result would be wrong" implies to me that the framework's attitude regarding strings is "Well, they're supposed to be immutable, but really if developers want to get sneaky they're mutable so we'll treat them as such." This is definitely not how the framework views strings. It fully relies on their immutability in so many ways (interning of string literals, assignment of all zero-length strings to string.Empty, etc.) that, basically, if you mutate a string, you're writing code whose behavior is entirely undefined and unpredictable. I guess my point is that for the author(s) of this implementation to worry, "What if this string instance is modified between calls, even though the class as it is publicly exposed is immutable?" would be like for someone planning a casual outdoor BBQ to think to him-/herself, "What if someone brings an atomic bomb to the party?" Look, if someone brings an atom bomb, party's over.
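
    For what it's worth, the caching scheme the question contemplates is easy to express on one's own immutable type -- a sketch (the Symbol class is hypothetical, not framework code), using 0 as the "not yet computed" sentinel so no extra flag field is needed:

        public sealed class Symbol
        {
            private readonly string value;
            private int cachedHash;   // 0 means "not computed yet"

            public Symbol(string value) { this.value = value; }

            public override int GetHashCode()
            {
                int h = cachedHash;
                if (h == 0)
                {
                    h = value.GetHashCode();
                    if (h == 0) h = 1;   // keep 0 reserved as the sentinel
                    cachedHash = h;      // benign race: worst case is a recomputation
                }
                return h;
            }

            public override bool Equals(object obj)
            {
                var other = obj as Symbol;
                return other != null && other.value == value;
            }
        }

    One cost the question's trade-off skims over: that extra int is paid per string instance, and short strings dominate most heaps, so 4-8 bytes (plus padding) on every string is less of a "drop in the pond" than it looks.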

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have: Database Server - physical box, 12 GB RAM, 2 X Quad Core CPU (2.13 GHz Xeon E5506) 2 Web / App servers - cloud servers, 2 GB RAM, 2 VCPUs 300 GB monthly bandwidth I am paying around $900 / month for this. My web / app servers are busting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config: Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units 2 Web / App servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units If I go with 1 year reserved instances, my upfront cost would be $4,500 and my monthly would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free. Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is with disk I/O. Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

  • Microsoft Training in .NET

    - by JD
    Hey guys, A quick question about training. My company are offering to put me through a 5-day course in .NET programming with C#, on the proviso that I work for them for a minimum 12-month period immediately after the training has been completed. If I don't work for them for 12 months, I will be expected to pay them back the cost of the training, minus the number of months I have stayed at the company (e.g. stay 6 months and only pay back half). The course will be $3,300 AUD and will confer no qualifications. I have been programming in C# for about 6 months, and in PHP for about 10 years, so I am getting the hang of .NET slowly but surely. Is it worth doing this? Or would I be best advised to learn from reading the MS books from Amazon for $40 and then sitting the 2 exams to get MS certified off my own back? The company are not willing to pay for my certification as well, as 'they are not training me to get certified' and hence made more employable... Thoughts? How do other companies deal with training? And how do you stop people from taking off after becoming certified?

  • How to choose light version of database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices chained together in one store, I would love to have a local backend database on each POS device. The scenario is as follows: when the main server goes down, the POS application should continue working 'off-line' with the local database until the connection to the main server comes up again. Now I am in a dilemma over which local database would be most suitable for me. Here are some notes to help point me in the right direction: it has to be light ('my POS devices are usually old and suffer performance-wise'), and it has to be free ('I have a lot of devices and I don't want additional cost beside the main SQL Server'); also, one day I'd love to try porting all of this to Mono and Linux. Here is what I've researched so far: simple XML ('light, but I am afraid of performance; my main items table averages 10K records'); SQL Express ('I am afraid my POS devices' hardware is too poor for SQL Express, which is also hard to install and configure on each device'); the lesser-known Advantage Database Server, which has a free distribution of its offline ADT system; DBF with an extended library ('respect for good old DBFs, but that era is behind me, with Clipper and DBFs'); MS Access; and SQLite ('my favourite for now, but I am afraid of how it will pair with MS SQL -- do they have the same data types?'). I know there is a lot of subjective data in this, but at least can someone recommend some other lite database systems, or the things I should pay most attention to before I choose a database?
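
    A minimal sketch of the off-line fallback itself, independent of which local engine wins (System.Data.SQLite shown; connection strings and names are illustrative only):

        using System.Data.Common;
        using System.Data.SqlClient;
        using System.Data.SQLite;   // the System.Data.SQLite ADO.NET provider

        public static class PosDb
        {
            // Try the central server first; fall back to the local SQLite file.
            public static DbConnection Open()
            {
                try
                {
                    var server = new SqlConnection(
                        "Server=MAINPOS;Database=Pos;Integrated Security=true;Connect Timeout=3");
                    server.Open();
                    return server;              // on-line mode
                }
                catch (DbException)
                {
                    var local = new SQLiteConnection("Data Source=pos-local.db");
                    local.Open();
                    return local;               // off-line mode: queue sales locally, sync later
                }
            }
        }

    On the data-type worry: SQLite uses dynamic typing ("type affinity") rather than SQL Server's strict declared types, so the sync layer has to normalize dates and decimals itself.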

  • Scrum - Responding to traditional RFPs

    - by Todd Charron
    Hi all, I've seen many articles about how to put together agile RFPs and negotiate agile contracts, but what about when you're responding to a more traditional RFP? Any advice on how to meet the requirements of the RFP while still presenting an agile approach? A lot of these traditional RFPs request specific technical implementations, timelines, and costs, while also requesting exact details about milestones and how the technical solutions will be implemented. While I'm sure in traditional waterfall it's normal to pretend that these things are facts, it seems wrong to commit to something like this if you're an agile organization, just to get through the initial screening process. What methods have you used to respond to more traditional RFPs? Here's a sample one grabbed from Google: http://www.investtoronto.ca/documents/rfp-web-development.pdf Particularly, "3. A detailed work plan outlining how they expect to achieve the four deliverables within the timeframe outlined. Plan for additional phases of development." and "8. The detailed cost structure, including per diem rates for team members, allocation of hours between team members, expenses and other out of pocket disbursements, and a total upset price."

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked both in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) in environments where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least it isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or could be larger developments that have the potential to serve a wider audience, both across the organisation and into external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk: 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?' 'What if one of our systems changes and the connector breaks?' 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?' With one developer behind these little projects, the answers are invariably: 'Nobody, but...' 'It will break, just like any other application would...' 'I, uh...' How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development instead of using that 6 months to consult about what tender process we might use?

  • Improving field get and set performance with ASM

    - by ng
    I would like to avoid reflection in an open source project I am developing. Here I have classes like the following:

        public class PurchaseOrder
        {
            @Property
            private Customer customer;

            @Property
            private String name;
        }

    I scan for the @Property annotation to determine what I can set and get from the PurchaseOrder reflectively. There are many such classes, all using java.lang.reflect.Field.get() and java.lang.reflect.Field.set(). Ideally I would like to generate for each property an invoker like the following:

        public interface PropertyAccessor<S, V> {
            public void set(S source, V value);
            public V get(S source);
        }

    Now when I scan the class I can create a static inner class of PurchaseOrder like so:

        static class customer_Field implements PropertyAccessor<PurchaseOrder, Customer> {
            public void set(PurchaseOrder order, Customer customer) {
                order.customer = customer;
            }
            public Customer get(PurchaseOrder order) {
                return order.customer;
            }
        }

    With these I totally avoid the cost of reflection. I can now set and get from my instances with native performance. Can anyone tell me how I would do this? A code example would be great. I have searched the net for a good example but can find nothing like this; the ASM examples are pretty poor also. The key here is that I have an interface that I can pass around, so I can have various implementations: perhaps one with Java reflection as a default, one with ASM, and maybe one with Javassist? Any help would be greatly appreciated.
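
    Since the post explicitly asks for an example, here is a rough sketch of the ASM route (ASM 3.x-style API; all names are hypothetical). Note it assumes the field is reachable from the generated class, i.e. not private -- private fields need accessor methods or a reflection fallback. It emits one class implementing the erased PropertyAccessor methods as a single GETFIELD/PUTFIELD each:

        import org.objectweb.asm.ClassWriter;
        import org.objectweb.asm.MethodVisitor;
        import static org.objectweb.asm.Opcodes.*;

        public class AccessorGenerator {

            // ownerInternal e.g. "com/example/PurchaseOrder",
            // fieldDesc     e.g. "Lcom/example/Customer;" (object types only here)
            public static byte[] generate(String name, String ownerInternal,
                                          String fieldName, String fieldDesc) {
                String valueInternal = fieldDesc.substring(1, fieldDesc.length() - 1);
                ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
                cw.visit(V1_6, ACC_PUBLIC + ACC_SUPER, name, null, "java/lang/Object",
                         new String[] { "com/example/PropertyAccessor" });

                // default constructor
                MethodVisitor mv = cw.visitMethod(ACC_PUBLIC, "<init>", "()V", null, null);
                mv.visitCode();
                mv.visitVarInsn(ALOAD, 0);
                mv.visitMethodInsn(INVOKESPECIAL, "java/lang/Object", "<init>", "()V");
                mv.visitInsn(RETURN);
                mv.visitMaxs(0, 0);
                mv.visitEnd();

                // public void set(Object source, Object value) -- the erased signature
                mv = cw.visitMethod(ACC_PUBLIC, "set",
                        "(Ljava/lang/Object;Ljava/lang/Object;)V", null, null);
                mv.visitCode();
                mv.visitVarInsn(ALOAD, 1);
                mv.visitTypeInsn(CHECKCAST, ownerInternal);
                mv.visitVarInsn(ALOAD, 2);
                mv.visitTypeInsn(CHECKCAST, valueInternal);
                mv.visitFieldInsn(PUTFIELD, ownerInternal, fieldName, fieldDesc);
                mv.visitInsn(RETURN);
                mv.visitMaxs(0, 0);
                mv.visitEnd();

                // public Object get(Object source)
                mv = cw.visitMethod(ACC_PUBLIC, "get",
                        "(Ljava/lang/Object;)Ljava/lang/Object;", null, null);
                mv.visitCode();
                mv.visitVarInsn(ALOAD, 1);
                mv.visitTypeInsn(CHECKCAST, ownerInternal);
                mv.visitFieldInsn(GETFIELD, ownerInternal, fieldName, fieldDesc);
                mv.visitInsn(ARETURN);
                mv.visitMaxs(0, 0);
                mv.visitEnd();

                cw.visitEnd();
                return cw.toByteArray();
            }
        }

    The resulting byte[] goes through a small ClassLoader whose defineClass is exposed; instantiate the accessor once and cache it per field. Javassist can do the same from source-level strings if raw opcodes feel too low-level.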

  • Singleton Pattern combine with a Decorator

    - by Mike
    Attached is a classic Decorator pattern. My question is: how would you modify the code below so that you can wrap zero or one of each topping onto the Pizza? Right now I can have a Pepporini - Sausage - Pepporini - Pizza chain, driving the total cost up to $10 and charging twice for Pepporini. I don't think I want to use the Chain of Responsibility pattern, as order does not matter and not all toppings are used? Thank you

        namespace PizzaDecorator
        {
            public interface IPizza
            {
                double CalculateCost();
            }

            public class Pizza : IPizza
            {
                public Pizza() { }

                public double CalculateCost()
                {
                    return 8.00;
                }
            }

            public abstract class Topping : IPizza
            {
                protected IPizza _pizzaItem;

                public Topping(IPizza pizzaItem)
                {
                    this._pizzaItem = pizzaItem;
                }

                public abstract double CalculateCost();
            }

            public class Pepporini : Topping
            {
                public Pepporini(IPizza pizzaItem) : base(pizzaItem) { }

                public override double CalculateCost()
                {
                    return this._pizzaItem.CalculateCost() + 0.50;
                }
            }

            public class Sausage : Topping
            {
                public Sausage(IPizza pizzaItem) : base(pizzaItem) { }

                public override double CalculateCost()
                {
                    return this._pizzaItem.CalculateCost() + 1.00;
                }
            }

            public class Onions : Topping
            {
                public Onions(IPizza pizzaItem) : base(pizzaItem) { }

                public override double CalculateCost()
                {
                    return this._pizzaItem.CalculateCost() + .25;
                }
            }
        }
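
    One low-ceremony sketch (not the only way): expose the wrapped pizza from Topping and add a small helper that only wraps when no topping of that type is already in the chain. The Inner property and the With helper below are additions to the original code:

        using System;

        public abstract class Topping : IPizza
        {
            protected IPizza _pizzaItem;
            public IPizza Inner { get { return _pizzaItem; } } // added for chain inspection
            protected Topping(IPizza pizzaItem) { _pizzaItem = pizzaItem; }
            public abstract double CalculateCost();
        }

        public static class PizzaExtensions
        {
            // Wraps the pizza in a new topping only if that topping type is not
            // already present anywhere in the decorator chain.
            public static IPizza With<T>(this IPizza pizza, Func<IPizza, T> create)
                where T : Topping
            {
                for (IPizza current = pizza; current is Topping; current = ((Topping)current).Inner)
                {
                    if (current is T)
                        return pizza; // applied once already -- leave the chain as-is
                }
                return create(pizza);
            }
        }

    Usage, showing the duplicate being swallowed:

        IPizza order = new Pizza()
            .With(p => new Pepporini(p))
            .With(p => new Sausage(p))
            .With(p => new Pepporini(p)); // no-op: total stays $9.50, not $10.00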

  • What Javascript graphing package will let me plot points against a user-selected coordinate system?

    - by wes
    My customer has some specific requirements for a graph to show in our web app. We use HighCharts elsewhere in the app for more traditional graphing, but it doesn't seem to work for this situation. Their requirements: Allow the user to select a background image, set the scale and origin of the coordinate system. We'll graph our points against the user-defined coordinates. Points can be color coded Mouse-over boxes show more detail about the points Support for zooming and panning, scaling the background appropriately Less importantly: Support for drawing vectors off the points Some of this seems basic, but looking around at different graph packages, I was unable to find any with an example of this kind of usage. I've entertained the thought of just hacking it together in canvas myself, but I've never worked with canvas before so I don't think it would be cost effective. The basics of plotting points with a scaled coordinate system against an image background wouldn't be too hard, but the mouse-over details, zooming and panning sound much more daunting to me. More info: Right now we use jQuery, HighCharts, and ExtJS for our app. We tried flot in the past but switched to HighCharts after flot didn't meet our needs.

  • ISO/IEC Website and Charging for C and C++ Standards

    - by Michael Aaron Safyan
    The ISO C Standard (ISO/IEC 9899) and the ISO C++ Standard (ISO/IEC 14882) are not published online; instead, one must purchase the PDF for each of those standards. I am wondering what the rationale is behind this... is it not detrimental to both the C and C++ programming languages that the authoritative specification for these languages is not made freely available and searchable online? Doesn't this encourage the use of possibly inaccurate, non-authoritative sources for information regarding these standards? While I understand that much time and effort has gone into developing the C and C++ standards, I am still somewhat puzzled by the choice to charge for the specification. The OpenGroup Base Specification, for example, is available for free online; they make money by charging for certification. Does anyone know why the ISO standards committees don't make their revenue by certifying standards compliance, instead of charging for these documents? Also, does anyone know if the ISO standards committee's atrocious-looking website is intentionally made to look that way? It's as if they don't want people visiting and buying the spec. One last thing... the C and C++ standards are generally described as "open standards"... while I realize that this means that anyone is permitted to implement the standard, should that definition of "open" be revised? Charging for the standard rather than making it openly available seems contrary to the spirit of openness. P.S. I do have a copy of the ISO/IEC 9899:1999 and ISO/IEC 14882:2003, so please no remarks about being cheap or anything... although if you are tempted to say such things, you might want to consider the high school, undergraduate, and graduate students who might not have all that much extra cash. Also, you might want to consider the fact that the ISO website is really sketchy and they don't even tell you the cost until you proceed to the checkout... doesn't really encourage one to go and get a copy, now does it?

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the trunk as it is completed, so we are not debating the need for the branches, and we are trying very much to be Agile. My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle, complete the validation and regression testing, and handle all configuration issues before pushing to the trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push-back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of a build and regression testing, to have the developer diff the Feature branch and the newly merged Trunk. That process, in their mind, would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of the integration of the branches? Thanks for any input. LoneCM

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query on the table. For comparison, I have another table which grows in size at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements. Some specifics: both tables contain a DateTime field called SampleTime, and a statement like the following (it's in a stored procedure, but this is the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns, depending on which drive I set my database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
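
    One thing worth trying before blaming the XML type itself (a sketch; the index name is made up): give the MAX query a narrow index to run on, so the wide rows -- and their LOB pages -- are never touched:

        CREATE NONCLUSTERED INDEX IX_MyRecords_Placement_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);

    MAX(SampleTime) for one PlacementID then becomes a single backward seek on that index. Without it, the scan has to crawl rows whose XML payload leaves very few rows per page, which would explain the slowdown tracking row count.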

  • Postgre database ignoring created index ?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in the table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200,000 rows. I want to print out all rows which have a value of column column2 equal (case-insensitively) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (the result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this too slow, so I create an index on column column2 for better search performance:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same select:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    and I expect it to be much faster because it can use the index. However, it is not faster; it is as slow as before. So I check the execution plan, and it is the same as before (see above). So it still uses a sequential scan and ignores the index! Where is the problem?
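
    One detail stands out: an expression index is only considered when the query filters on that very same expression. The index covers lower(column2), but the WHERE clause compares the bare column (lower() is applied to the literal only), so the planner has nothing to match. A sketch of the aligned pair:

        CREATE INDEX my_index ON my_table (lower(column2));

        -- lower() must wrap the column, matching the indexed expression:
        SELECT * FROM my_table WHERE lower(column2) = lower('value12');

    With an estimated 10,000 matches out of 200,000 rows the planner may still prefer a seqscan, but the rewritten predicate at least makes the index usable -- and it is also what makes the comparison genuinely case-insensitive.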

  • invasive vs non-invasive ref-counted pointers in C++

    - by anon
    For the past few years, I've generally accepted that if I am going to use ref-counted smart pointers, invasive smart pointers are the way to go. However, I'm starting to like non-invasive smart pointers due to the following: I only use smart pointers (so no Foo* lying around, only Ptr<Foo>), and I'm starting to build custom allocators for each class (so Foo would overload operator new). Now, if Foo has a list of all Ptr<Foo> (as it easily can with non-invasive smart pointers), then I can avoid memory fragmentation issues, since class Foo can move the objects around (and just update the corresponding Ptr<Foo>). The only reason this moving of objects around is easier with non-invasive smart pointers than with invasive ones is: with non-invasive smart pointers, there is only one pointer that points to each Foo; with invasive smart pointers, I have no idea how many objects point to each Foo. Now, the only cost of non-invasive smart pointers... is the double indirection. [Perhaps this screws up the caches.] Does anyone have a good study of how expensive this extra layer of indirection is?
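
    For concreteness, a minimal sketch of the two layouts being compared (illustrative C++11, not production code); the second shows exactly where the extra hop lives:

        // Invasive: the count lives inside Foo; Ptr -> Foo is one hop,
        // but nothing records *which* pointers refer to a given Foo.
        struct FooInvasive {
            int refCount = 0;
            // ... payload ...
        };

        // Non-invasive: pointers share a separate control block. Dereferencing
        // pays the extra hop (Ptr -> block -> T), which is the cost in question,
        // but the object can be relocated by updating block->object alone.
        template <typename T>
        class Ptr {
            struct Control { T* object; int refs; };
            Control* block_;
        public:
            explicit Ptr(T* p) : block_(new Control{p, 1}) {}
            Ptr(const Ptr& o) : block_(o.block_) { ++block_->refs; }
            ~Ptr() { if (--block_->refs == 0) { delete block_->object; delete block_; } }
            Ptr& operator=(const Ptr&) = delete; // assignment omitted in this sketch
            T& operator*() const { return *block_->object; }
            T* operator->() const { return block_->object; }
        };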

  • Return unordered list from hierarchical sql data

    - by Milan
    I have a table with pageId, parentPageId, and title columns. Is there a way to return an unordered nested list using asp.net, a CTE, a stored procedure, a UDF... anything? The table looks like this:

        PageID  ParentId  Title
        1       null      Home
        2       null      Products
        3       null      Services
        4       2         Category 1
        5       2         Category 2
        6       5         Subcategory 1
        7       5         SubCategory 2
        8       6         Third Level Category 1
        ...

    The result should look like this:

        Home
        Products
            Category 1
                SubCategory 1
                    Third Level Category 1
                SubCategory 2
            Category 2
        Services

    Ideally, the list should contain <a> tags as well, but I hope I can add those myself if I find a way to create the <ul> list. EDIT 1: I thought there would already be a solution for this, but it seems there isn't. I wanted to keep it as simple as possible and to avoid using the ASP.NET Menu control at any cost, because it uses tables by default, and then I'd have to use CSS Adapters etc. Even if I decide to go down the 'ASP.NET menu' route, I was only able to find this approach: http://aspalliance.com/822 which uses DataAdapter and DataSet :( Any more modern or efficient way?
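
    Absent a ready-made control, a short recursive method over the rows is one way to get exactly that markup -- a sketch (the Page class and the /page/ URL pattern are hypothetical):

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        public class Page
        {
            public int PageID { get; set; }
            public int? ParentId { get; set; }
            public string Title { get; set; }
        }

        public static class MenuBuilder
        {
            // One pass to index children by parent, then straight recursion.
            public static string Render(IEnumerable<Page> pages)
            {
                var byParent = pages.ToLookup(p => p.ParentId);
                return RenderLevel(byParent, null);
            }

            private static string RenderLevel(ILookup<int?, Page> byParent, int? parentId)
            {
                var children = byParent[parentId].ToList();
                if (children.Count == 0) return string.Empty;

                var sb = new StringBuilder("<ul>");
                foreach (var page in children)
                {
                    sb.Append("<li><a href=\"/page/").Append(page.PageID).Append("\">")
                      .Append(page.Title)
                      .Append("</a>")
                      .Append(RenderLevel(byParent, page.PageID)) // nested <ul>, if any
                      .Append("</li>");
                }
                return sb.Append("</ul>").ToString();
            }
        }

    The lookup is built once from a single flat SELECT, so the database side needs no recursion at all.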

  • architecture and tools for a remote control application?

    - by slothbear
    I'm working on the design of a remote control application. From my iPhone or a web browser, I'll send a few commands. Soon my home computer will perform the commands and send back results. I know there are remote desktop apps, but I want something programmable, something simpler, and something that I wrote. My current direction is to use Amazon Simple Queue Service (SQS) as the message bus. The iPhone places some messages in a queue. My local Java/JRuby program notices the messages on the queue, performs the work, and sends back status via a different queue. This will be a very low-volume application. At $1.00 for a million requests (plus a handful of data transfer charges), Amazon SQS looks a lot more affordable than having my own server of any type. And it's super reliable, which is important to me too. Are there better or more standard toolkits or architectures for this kind of remote control? Cost is not a big issue, and I value how much I learn by doing it myself. I'm moderately concerned about security, but doubt it will be a problem. The list of commands recognized will be very short, and only recognized in specific contexts. No "erase hard drive" stuff. update: I'll probably distribute these programs to some other people who want the same function, but who don't have Amazon SQS accounts. For now, they'll use anonymous access to my queues, with random 80-character queue names.
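
    A sketch of the worker loop on the home machine (AWS SDK for Java, v1-style API; the queue URLs and the run() helper are hypothetical):

        import com.amazonaws.services.sqs.AmazonSQS;
        import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
        import com.amazonaws.services.sqs.model.Message;

        public class HomeWorker {
            public static void main(String[] args) throws InterruptedException {
                AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
                String commandQueue = "https://sqs.us-east-1.amazonaws.com/123456789012/commands";
                String resultQueue  = "https://sqs.us-east-1.amazonaws.com/123456789012/results";
                while (true) {
                    for (Message m : sqs.receiveMessage(commandQueue).getMessages()) {
                        String result = run(m.getBody());                       // whitelist + execute
                        sqs.sendMessage(resultQueue, result);                   // report status back
                        sqs.deleteMessage(commandQueue, m.getReceiptHandle());  // ack so it isn't redelivered
                    }
                    Thread.sleep(5000); // or use long polling to cut down empty receives
                }
            }

            private static String run(String command) {
                // Only a short whitelist of commands is honored -- never arbitrary shell input.
                return "ok: " + command;
            }
        }

    The delete-after-processing step is what gives the at-least-once semantics: if the worker dies mid-command, the message reappears after the visibility timeout.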

  • Visual studio erroneous errors when building a website?

    - by Curtis White
    Visual Studio 2008 shows a lot of erroneous errors in the error list when building a website (not a web project). These errors are usually corrected (removed) when I rebuild the site a couple of times, but they cost me wasted time. Is there any way to hide the erroneous errors? Update: I've decided to look into this to see if I could reproduce it. This is the exact behavior I am seeing: using the website model, I type some invalid syntax on a page, and the error list fills up with errors. I correct the error, and the error list does not update. I build the project, and the error list still shows the errors, but the build reports that it completed. I build the project a second time, and the error list is cleared. My question is: is there any way to make the error list clear on the first build? I thought it might have something to do with page build vs. website build, but it seems to make no difference. I am not using any third-party DLLs on this website.

  • BCrypt says long, similar passwords are equivalent - problem with me, the gem, or the field of crypt

    - by PreciousBodilyFluids
    I've been experimenting with BCrypt, and found the following. If it matters, I'm running ruby 1.9.2dev (2010-04-30 trunk 27557) [i686-linux].

        require 'bcrypt' # bcrypt-ruby gem, version 2.1.2

        @long_string_1 = 'f287ed6548e91475d06688b481ae8612fa060b2d402fdde8f79b7d0181d6a27d8feede46b833ecd9633b10824259ebac13b077efb7c24563fce0000670834215'
        @long_string_2 = 'f6ebeea9b99bcae4340670360674482773a12fd5ef5e94c7db0a42800813d2587063b70660294736fded10217d80ce7d3b27c568a1237e2ca1fecbf40be5eab8'

        def salted(string)
          @long_string_1 + string + @long_string_2
        end

        encrypted_password = BCrypt::Password.create(salted('password'), :cost => 10)
        puts encrypted_password
        #=> $2a$10$kNMF/ku6VEAfLFEZKJ.ZC.zcMYUzvOQ6Dzi6ZX1UIVPUh5zr53yEu

        password = BCrypt::Password.new(encrypted_password)
        puts password.is_password?(salted('password')) #=> true
        puts password.is_password?(salted('passward')) #=> true
        puts password.is_password?(salted('75747373')) #=> true
        puts password.is_password?(salted('passwor'))  #=> false

    At first I thought that once the passwords got to a certain length, the dissimilarities would just be lost in all the hashing, and only if they were very dissimilar (i.e. a different length) would they be recognized as different. That didn't seem very plausible to me, from what I know of hash functions, but I didn't see a better explanation. Then I tried shortening each of the long strings to see where BCrypt would start being able to tell them apart, and I found that if I shortened each of the long strings to 100 characters or so, the final attempt ('passwor') would start returning true as well. So now I don't know what to think. What's the explanation for this?
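
    The numbers are consistent with two implementation details rather than anything about hashing theory: bcrypt ignores key material past 72 bytes, and the C code behind this era of the gem appears to keep the key length in an 8-bit variable, so the length is effectively taken mod 256. salted('password') is 128 + 8 + 128 = 264 bytes, and 264 mod 256 = 8, so only the first 8 bytes of @long_string_1 take part -- identical for 'password', 'passward' and '75747373'. salted('passwor') is 263 bytes, and 263 mod 256 = 7, a different effective key, hence false. Cut each long string to ~100 characters and the totals drop below 256; every salted value then exceeds 72 bytes with an identical 72-byte prefix, so everything compares equal -- exactly what was observed. A sketch of the usual defense, collapsing the input below the limit before BCrypt ever sees it:

        require 'digest'
        require 'bcrypt'

        # Pre-hash so BCrypt always sees a short, fixed-length key (64 hex chars).
        def prehash(string)
          Digest::SHA256.hexdigest(string)
        end

        encrypted = BCrypt::Password.create(prehash(salted('password')), :cost => 10)
        password  = BCrypt::Password.new(encrypted)
        password.is_password?(prehash(salted('password'))) #=> true
        password.is_password?(prehash(salted('passward'))) #=> false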

  • How to get nested chain of objects in Linq and MVC2 application?

    - by Anders Svensson
    I am getting all confused about how to solve this problem in Linq. I have a working solution, but the code to do it is way too complicated and circular, I think. I have a timesheet application in MVC 2. I want to query a database that has the following tables (simplified): Project, Task, TimeSegment. The relationships are as follows: a project can have many tasks, and a task can have many timesegments. I need to be able to query this in different ways. An example is this: a View is a report that will show a list of projects in a table. Each project's tasks will be listed, followed by the sum of the number of hours worked on that task. The timesegment object is what holds the hours. Here's the View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Report.Master"
            Inherits="System.Web.Mvc.ViewPage<Tidrapportering.ViewModels.MonthlyReportViewModel>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Månadsrapport
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h1>Månadsrapport</h1>
            <div style="margin-top: 20px;">
                <span style="font-weight: bold">Kund: </span>
                <%: Model.Customer.CustomerName %>
            </div>
            <div style="margin-bottom: 20px">
                <span style="font-weight: bold">Period: </span>
                <%: Model.StartDate %> - <%: Model.EndDate %>
            </div>
            <div style="margin-bottom: 20px">
                <span style="font-weight: bold">Underlag för: </span>
                <%: Model.Employee %>
            </div>
            <table class="mainTable">
                <tr>
                    <th style="width: 25%">Projekt</th>
                    <th>Specifikation</th>
                </tr>
                <% foreach (var project in Model.Projects) { %>
                <tr>
                    <td style="vertical-align: top; padding-top: 10pt; width: 25%">
                        <%: project.ProjectName %>
                    </td>
                    <td>
                        <table class="detailsTable">
                            <tr>
                                <th>Aktivitet</th>
                                <th>Timmar</th>
                                <th>Ex moms</th>
                            </tr>
                            <% foreach (var task in project.CurrentTasks) { %>
                            <tr class="taskrow">
                                <td class="task" style="width: 40%"><%: task.TaskName %></td>
                                <td style="width: 30%"><%: task.TaskHours.ToString() %></td>
                                <td style="width: 30%"><%: String.Format("{0:C}", task.Cost) %></td>
                            </tr>
                            <% } %>
                        </table>
                    </td>
                </tr>
                <% } %>
            </table>
            <table class="summaryTable">
                <tr>
                    <td style="width: 25%"></td>
                    <td>
                        <table style="width: 100%">
                            <tr>
                                <td style="width: 40%">Totalt:</td>
                                <td style="width: 30%"><%: Model.TotalHours.ToString() %></td>
                                <td style="width: 30%"><%: String.Format("{0:C}", Model.TotalCost) %></td>
                            </tr>
                        </table>
                    </td>
                </tr>
            </table>
            <div class="price">
                <table>
                    <tr>
                        <td>Moms: </td>
                        <td style="padding-left: 15px;"><%: String.Format("{0:C}", Model.VAT) %></td>
                    </tr>
                    <tr>
                        <td>Att betala: </td>
                        <td style="padding-left: 15px;"><%: String.Format("{0:C}", Model.TotalCostAndVAT) %></td>
                    </tr>
                </table>
            </div>
        </asp:Content>

    Here's the action method:

        [HttpPost]
        public ActionResult MonthlyReports(FormCollection collection)
        {
            MonthlyReportViewModel vm = new MonthlyReportViewModel();
            vm.StartDate = collection["StartDate"];
            vm.EndDate = collection["EndDate"];
            int customerId = Int32.Parse(collection["Customers"]);
            List<TimeSegment> allTimeSegments = GetTimeSegments(customerId, vm.StartDate, vm.EndDate);
            vm.Projects = GetProjects(allTimeSegments);
            vm.Employee = "Alla";
            vm.Customer = _repository.GetCustomer(customerId);
            vm.TotalCost = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.Cost); // corresponds to the foreach in the view
            vm.TotalHours = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.TaskHours);
            vm.TotalCostAndVAT = vm.TotalCost * 1.25;
            vm.VAT = vm.TotalCost * 0.25;
            return View("MonthlyReport", vm);
        }

    And the "helper" methods:

        public List<TimeSegment> GetTimeSegments(int customerId, string startdate, string enddate)
        {
            var timeSegments = _repository.TimeSegments
                .Where(timeSegment => timeSegment.Customer.CustomerId == customerId)
                .Where(timeSegment => timeSegment.DateObject.Date >= DateTime.Parse(startdate)
                    && timeSegment.DateObject.Date <= DateTime.Parse(enddate));
            return timeSegments.ToList();
        }

        public List<Project> GetProjects(List<TimeSegment> timeSegments)
        {
            var projectGroups = from timeSegment in timeSegments
                                group timeSegment by timeSegment.Task into g
                                group g by g.Key.Project into pg
                                select new { Project = pg.Key, Tasks = pg.Key.Tasks };

            List<Project> projectList = new List<Project>();
            foreach (var group in projectGroups)
            {
                Project p = group.Project;
                foreach (var task in p.Tasks)
                {
                    task.CurrentTimeSegments = timeSegments.Where(ts => ts.TaskId == task.TaskId).ToList();
                    p.CurrentTasks.Add(task);
                }
                projectList.Add(p);
            }
            return projectList;
        }

    Again, as I mentioned, this works, but of course it is really complex, and I get confused myself just looking at it even now that I'm coding it. I sense there must be a much easier way to achieve what I want. Basically you can tell from the View what I want to achieve: I want to get a collection of projects. Each project should have its associated collection of tasks, and each task should have its associated collection of timesegments for the specified date period. Note that the projects and tasks selected must also be only the projects and tasks that have timesegments for this period; I don't want any projects or tasks that have no timesegments within this period. It seems the group-by Linq query beginning the GetProjects() method sort of achieves this (if extended to have the conditions for date and so on), but I can't return this and pass it to the view, because it is an anonymous object. I also tried creating a specific type in such a query, but couldn't wrap my head around that either... I hope there is something I'm missing and there is some easier way to achieve this, because I need to be able to do several other different queries as well eventually. I also don't really like the way I solved it with the "CurrentTimeSegments" properties and so on. These properties don't really exist on the model objects in the first place; I added them in partial classes to have somewhere to put the filtered results for each part of the nested object chain... Any ideas?
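
    One plausible simplification (a sketch; ProjectVm, TaskVm and the field names summed over are hypothetical, not from the post): project the grouped query straight into named view-model types instead of anonymous ones. That sidesteps both the "can't return an anonymous object" problem and the partial-class CurrentTasks/CurrentTimeSegments properties:

        // Hypothetical flattened view-model types for the report
        public class TaskVm
        {
            public string TaskName { get; set; }
            public double TaskHours { get; set; }
            public double Cost { get; set; }
        }

        public class ProjectVm
        {
            public string ProjectName { get; set; }
            public List<TaskVm> Tasks { get; set; }
        }

        // In place of GetProjects(): group once, then shape into the named types.
        public List<ProjectVm> GetProjectReport(List<TimeSegment> timeSegments)
        {
            return (from ts in timeSegments
                    group ts by ts.Task into taskGroup
                    group taskGroup by taskGroup.Key.Project into projectGroup
                    select new ProjectVm
                    {
                        ProjectName = projectGroup.Key.ProjectName,
                        Tasks = (from tg in projectGroup
                                 select new TaskVm
                                 {
                                     TaskName = tg.Key.TaskName,
                                     TaskHours = tg.Sum(x => x.Hours), // assumed hours field
                                     Cost = tg.Sum(x => x.Cost)        // assumed cost field
                                 }).ToList()
                    }).ToList();
        }

    Because only the timesegments inside the date window are passed in, only projects and tasks that actually have segments in the period show up -- the filtering requirement falls out of the grouping for free.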

  • Interchange structured data between Haskell and C

    - by Eonil
    First, I'm a Haskell beginner. I'm planning to integrate Haskell with C for a realtime game: Haskell does the logic, C does the rendering. To do this, I have to pass a huge, complexly structured chunk of data (the game state) back and forth on every tick (at least 30 times per second), so passing the data should be lightweight. This state data may be laid out sequentially in memory. Both the Haskell and C parts should be able to access every part of the state freely. In the best case, the cost of passing data is copying a pointer; in the worst case, copying the whole data with conversion. I'm reading about Haskell's FFI (http://www.haskell.org/haskellwiki/FFICookBook#Working_with_structs), and the Haskell code there looks like it specifies the memory layout explicitly. I have a few questions:

        1. Can Haskell specify memory layout explicitly (matched exactly to a C struct)?
        2. Is this the real memory layout, or is some kind of conversion required (a performance penalty)?
        3. If Q#2 is true, is there a performance penalty when the memory layout is specified explicitly?
        4. What's the syntax #{alignment foo}? Where can I find documentation for it?
        5. If I want to pass huge data with the best performance, how should I do that?

    *PS: The explicit memory layout feature I mean is just like C#'s [StructLayout] attribute, which specifies in-memory positions and sizes explicitly (http://www.developerfusion.com/article/84519/mastering-structs-in-c/). I'm not sure Haskell has a language construct matching the fields of a C struct.
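
    On questions 1 and 4: a Storable instance is how you pin a C-compatible layout down by hand, and the #{...} forms are not Haskell at all but hsc2hs preprocessor macros (the cookbook's files are .hsc files; #{alignment foo} expands to the C alignment of foo -- built into newer hsc2hs, defined via a #let hack in older ones). A hand-written sketch for a hypothetical struct Vec { double x, y; int hp; }:

        import Control.Applicative ((<$>), (<*>))
        import Foreign
        import Foreign.C.Types

        -- Offsets and sizes written out by hand here; hsc2hs's #{size ...},
        -- #{peek ...} and #{alignment ...} macros compute them for you instead.
        data Vec = Vec { vx :: CDouble, vy :: CDouble, hp :: CInt }

        instance Storable Vec where
            sizeOf _    = 24   -- 8 + 8 + 4, padded up to the 8-byte alignment
            alignment _ = 8
            peek p = Vec <$> peekByteOff p 0 <*> peekByteOff p 8 <*> peekByteOff p 16
            poke p (Vec x y h) = do
                pokeByteOff p 0  x
                pokeByteOff p 8  y
                pokeByteOff p 16 h

    peek and poke really do read and write the raw bytes at those offsets, so handing C a Ptr Vec costs nothing beyond the marshalling written here; allocating the state once with mallocBytes and mutating it in place is the usual way to avoid a per-tick copy.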
