Search Results

Search found 908 results on 37 pages for 'cascading deletes'.

Page 23/37 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Codeigniter: How to handle direct link to controller/function/param?

    - by rebjr
    I have a simple controller function which deletes a DB entry (it uses a model function to do so). I have a link to this in one of my views (e.g. http://www.example.com/item/delete/3) and I’m using jQuery to display a confirm dialog to make sure the user really wants to delete it. All fine. However, if you just enter that URL in your browser, the item is deleted without warning. Is there a way to handle this either in the way I code the controller function or in the model?

    Read the article

  • Returning objects with dynamic memory

    - by Caulibrot
    I'm having trouble figuring out a way to return an object (declared locally within the function) which has dynamic memory attached to it. The problem is the destructor, which runs and deletes the dynamic memory when the object goes out of scope, i.e. when I return it and want to use the data in the memory that has been deleted! I'm doing this for an overloaded addition operator. I'm trying to do something like: MyObj operator+( const MyObj& x, const MyObj& y ) { MyObj z; // code to add x and y and store in dynamic memory of z return z; } My destructor is simply: MyObj::~MyObj() { delete [] ptr; } Any suggestions would be much appreciated!

    Read the article

  • Cutting Row with Data and moving to different sheet VBA

    - by user3709645
    I'm trying to cut a row that has the specified cell blank and then paste it into another sheet in the same workbook. My coding works fine to delete the row but everything I've tried to cut and paste keeps giving me errors. Here's the working code that deletes the rows: Sub Remove() 'Remove No Denovo &/or No Peak Seq Dim n As Long Dim nLastRow As Long Dim nFirstRow As Long Dim lastRow As Integer ActiveSheet.UsedRange Set r = ActiveSheet.UsedRange nLastRow = r.rows.Count + r.Row - 1 nFirstRow = r.Row For n = nLastRow To nFirstRow Step -1 If Cells(n, "G") = "" Then Cells(n, "G").EntireRow.Delete Next n End Sub Thanks for any help!

    Read the article

  • Using sed to delete string

    - by wired
    I was hacked and have hundreds of .js files with this line of code that I'm trying to get rid of: ;document.write('<iframe src="http://sitecorporatemanagement.ru/pretzellogmeins.cgi?8" scrolling="auto" frameborder="no" align="center" height="3" width="3"></iframe>'); It is the last line of the file, but I think the file contains Windows line endings, because whenever I do this: sed -i '/sitecorporatemanagement.ru/d' * it deletes the full content of the file. Can you help me get this to work? I just need that full string deleted. Thank you for all the help you can give.

    Read the article

  • Deleting the first occurrence of a target in aList [closed]

    - by Bandz Jooz
    /** Replaces each occurrence of oldItem in aList with newItem */ public static void replace(List<Student> aList, Student oldItem, Student newItem) { int index = aList.indexOf(oldItem); while(index != -1){ aList.set(index, newItem); index = aList.indexOf(oldItem); } } /** Deletes the first occurrence of target in aList */ public static void delete(List<Student> aList, Student target){ Object o = //stuck here, don't know how to set up boolean stuff } } I figured out how to do the first method by looking up the Java documentation, however I can't figure out how to finish my code for the second method even though I looked up the documentation, which states: boolean remove(Object o) Removes the first occurrence of the specified element from this list, if it is present.
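
    A minimal sketch of how the second method could be finished, inside the same class as the two methods above, using the boolean-returning remove(Object o) quoted from the documentation (the printed message is illustrative only):

        /** Deletes the first occurrence of target in aList */
        public static void delete(List<Student> aList, Student target) {
            // remove(Object) deletes the first occurrence and reports whether the list changed
            boolean removed = aList.remove(target);
            if (!removed) {
                System.out.println("target was not found in aList");
            }
        }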

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to: refactoring of the code changed by the work item refactoring code neighboring the code changed by the item re-architecting the larger code area around the ticket. For example if an item has you changing a single function, you realize the entire class now could be redone to better accommodate this change. improving the UI on a form you just modified When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this. We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included: We plan for "padding" at the end of the sprint to account for this sort of thing. Always leave the code in better shape than you found it. Don't check in half-assed work. If we leave refactoring for later, it's hard to schedule and may never get done. You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later. We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include: Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible. The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves". If you want to schedule refactoring, just put it at the top of the backlog. There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?" which is like an hour, but they might say "Well if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?" And that takes a week. It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless. In some cases, the extra work postpones a check-in, causing blocking between devs. Some of us are now saying that we should decide some cut off, like if the additional stuff is less than 2 FP, it goes in the same ticket, if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery – Part 1

    - by Bob Zurek
    Information Discovery, a core capability of Oracle Endeca Information Discovery, enables business users to rapidly search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. One of the key capabilities, among many, that differentiate our solution from others in the Information Discovery market is our deep support for search across this growing amount of varied big data. Our method and approach is very different from the classic simple keyword search that is found in many information discovery solutions. In this first part of a series on the topic of search, I will walk you through many of the key capabilities that go beyond the simple search box that you might experience in products where search was clearly an afterthought or an attempt to catch up to our core capabilities in this area. Let's explore. The core data management solution of Oracle Endeca Information Discovery is the Endeca Server, a hybrid search-analytical database that is highly scalable and column-oriented in nature. We will talk in more technical detail about the capabilities of the Endeca Server in future blog posts, as this post is intended to give you a feel for the deep search capabilities that are an integral part of the Endeca Server. The Endeca Server provides best-of-breed search features as well as a new class of features that are the first to be designed around the requirement to bridge structured, semi-structured and unstructured big data. Some of the key features of search include type-aheads, automatic alphanumeric spell corrections, positional search, Booleans, wildcarding, natural language, and category search and query classification dialogs. This is just a subset of the advanced search capabilities found in Oracle Endeca Information Discovery. Search is an important feature that makes it possible for business users to explore the diverse data sets the Endeca Server can hold at any one time. The search capabilities in the Endeca Server differ from other Information Discovery products with simple “search boxes” in the following ways: The Endeca Server Supports Exploratory Search.  Enterprise data frequently requires the user to explore content through an ad hoc dialog, with guidance that helps them succeed. This has implications for how to design search features. Traditional search doesn’t assume a dialog, and so it uses relevance ranking to get its best guess to the top of the results list. It calculates many relevance factors for each query, like word frequency, distance, and meaning, and then reduces those many factors to a single score based on a proprietary “black box” formula. But how can a business user, while searching, act on the information that a document is, say, only 38.1% relevant? In contrast, exploratory search gives users the opportunity to clarify what is relevant to them through refinements and summaries. This approach has received consumer endorsement through popular ecommerce sites where guided navigation across a broad range of products has helped consumers better discover choices that meet their sometimes undetermined requirements. This same model exists in Oracle Endeca Information Discovery. In fact, the Endeca Server powers many of the most popular e-commerce sites in the world. The Endeca Server Supports Cascading Relevance. Traditional approaches to search reduce many relevance weights to a single score.
This means that if a result with a good title match gets a similar score to one with an exact phrase match, they’ll appear next to each other in a list. But a user can’t deduce from their scores why each got its ranking, even though that information could be valuable. Oracle Endeca Information Discovery takes a different approach. The Endeca Server stratifies results by a primary relevance strategy, and then breaks ties within a stratum by ordering them with a secondary strategy, and so on. Application managers get the explicit means to compose these strategies based on their knowledge of their own domain. This approach gives both business users and managers a deterministic way to set and understand relevance. Now that you have an understanding of two of the core search capabilities in Oracle Endeca Information Discovery, our next blog post on this topic will discuss more advanced features, including set search and second-order relevance, as well as an understanding of faceted search mechanisms that include queries and filters.
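
    The stratify-then-tie-break behaviour described above is essentially ordering by a chain of strategies. As a rough, product-independent sketch (this is not Endeca code; the Result type and its score fields are invented for illustration), cascading relevance can be expressed as a chained comparator in which the primary strategy decides the order and later strategies only break ties:

        import java.util.Comparator;
        import java.util.List;

        class Result {
            double titleScore;   // primary strategy: quality of the title match
            double phraseScore;  // secondary strategy: exact-phrase match
            double freshness;    // tertiary strategy: recency
        }

        class CascadingRelevance {
            // Highest primary score first; later strategies are consulted only on ties.
            // A real engine would bucket raw scores into strata rather than rely on exact ties.
            static void rank(List<Result> results) {
                results.sort(Comparator.comparingDouble((Result r) -> r.titleScore)
                        .thenComparingDouble(r -> r.phraseScore)
                        .thenComparingDouble(r -> r.freshness)
                        .reversed());
            }
        }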

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, also include: The Endeca Server Supports Set Search.  The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query – not by comparing the document to all the others. For example, a search for “U.S.” in another approach might match to the title of a document and get a high ranking. But what if it were a collection of government documents in which “U.S.” appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly. The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the results. Determining this second-order relevance is the key to providing effective guidance. Support for Queries and Filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike “black box” relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, it provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don’t explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.
Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries. The user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers and, therefore, we have gotten used to looking for that box. However, the key to getting the right results is to guide that user in a way that provides additional discovery, beyond what they may have anticipated. This is why these advanced search features inside the Endeca Server are so important. They have helped to guide our great customers to success.
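
    The set-search example above (a clue like "U.S." losing value when it appears in most titles of the retrieved set) is closely related to inverse document frequency. As a generic sketch of that idea, not Endeca code and with invented names, each term can be weighted against the whole retrieved set:

        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Map;

        class SetAwareWeighting {
            // Weight each term lower the more documents of the retrieved set contain it (IDF-style).
            static Map<String, Double> termWeights(List<List<String>> resultSet) {
                Map<String, Integer> docFrequency = new HashMap<>();
                for (List<String> doc : resultSet) {
                    for (String term : new HashSet<>(doc)) {   // count each term once per document
                        docFrequency.merge(term, 1, Integer::sum);
                    }
                }
                Map<String, Double> weights = new HashMap<>();
                for (Map.Entry<String, Integer> entry : docFrequency.entrySet()) {
                    // A term found in every document (e.g. "U.S." across government titles) weighs ~0.
                    weights.put(entry.getKey(), Math.log((double) resultSet.size() / entry.getValue()));
                }
                return weights;
            }
        }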

    Read the article

  • JQuery tools.scrollable conflict with navbar Ajax in CakePHP

    - by user269098
    In our views/layout/default.ctp file we have an element that's used to implement a cascading combo select box which uses jQuery AJAX to update the fields. This selector is part of our navbar and is loaded on every page of the site. The default.ctp has these includes in the <head>: <?php echo $html->charset(); ?> <title><?php echo $title_for_layout; //if(Configure::read('debug')) //echo "cake: " . Configure::version(); ?></title> <?php echo $html->css('style'); echo $javascript->link('jquery', true); echo $javascript->link('jquery.form', true); echo $javascript->link('jquery.url', true); echo $javascript->link('navbar', true); echo $html->script('tools.expose-1.0.3.js', array('inline' => false)); //echo $scripts_for_layout; ?> On the views/cars/index.ctp page we have: echo $javascript->link('tools.tabs-1.0.1', true); echo $javascript->link('tools.overlay-1.0.4', true); echo $javascript->link('jquery.form', true); echo $javascript->link('tools.scrollable-1.0.5', true); The issue is that on every other page the navbar combo box selector works. However, on cars/index.ctp it doesn't work. It only works when we comment out the 'tools.scrollable' include. Any ideas why there is this conflict? We have been stuck on this problem for months...

    Read the article

  • Deleting JPA entity containing @CollectionOfElements throws ConstraintViolationException

    - by Lyle
    I'm trying to delete entities which contain lists of Integer, and I'm getting ConstraintViolationExceptions because of the foreign key on the table generated to hold the integers. It appears that the delete isn't cascading to the mapped collection. I've done quite a bit of searching, but all of the examples I've seen on how to accomplish this are in reference to a mapped collection of other entities which can be annotated; here I'm just storing a list of Integer. Here is the relevant excerpt from the class I'm storing: @Entity @Table(name="CHANGE_IDS") @GenericGenerator( name = "CHANGE_ID_GEN", strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator", parameters = { @Parameter(name="sequence_name", value="course_changes_seq"), @Parameter(name="increment_size", value="5000"), @Parameter(name=" optimizer", value="pooled") } ) @NamedQueries ({ @NamedQuery( name="Changes.getByStatus", query= "SELECT c " + "FROM DChanges c " + "WHERE c.status = :status "), @NamedQuery( name="Changes.deleteByStatus", query= "DELETE " + "FROM Changes c " + "WHERE c.status = :status ") }) public class Changes { @Id @GeneratedValue(generator="CHANGE_ID_GEN") @Column(name = "ID") private final long id; @Enumerated(EnumType.STRING) @Column(name = "STATUS", length = 20, nullable = false) private final Status status; @Column(name="DOC_ID") @org.hibernate.annotations.CollectionOfElements @org.hibernate.annotations.IndexColumn(name="DOC_ID_ORDER") private List<Integer> docIds; } I'm deleting the Changes objects using a @NamedQuery: final Query deleteQuery = this.entityManager.createNamedQuery("Changes.deleteByStatus"); deleteQuery.setParameter("status", Status.POST_FLIP); final int deleted = deleteQuery.executeUpdate(); this.logger.info("Deleted " + deleted + " POST_FLIP Changes");
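
    A likely cause, hedged because it depends on the provider version, is that bulk JPQL DELETE statements are translated straight to SQL and do not cascade to element collections, so the generated docIds table is left holding rows that reference the parent rows being deleted. A minimal load-then-remove sketch, reusing the entityManager, named query and Status enum from the excerpt, lets the provider clean up the collection table as each entity is removed:

        // Sketch only: load the matching entities and remove them individually, so the
        // provider also deletes the generated collection-table rows for docIds.
        final Query selectQuery = this.entityManager.createNamedQuery("Changes.getByStatus");
        selectQuery.setParameter("status", Status.POST_FLIP);
        for (final Object change : selectQuery.getResultList()) {
            this.entityManager.remove(change);
        }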

    Read the article

  • Setting the target and parent dropdown values from a DB when using CascadingDropDown

    - by Ryan
    Hi, I have two dropdown lists with Cascading Dropdown, in the usual fashion: <asp:DropDownList ID="DropDownListIndustry" runat="server" DataSourceID="SqlDataSourceIndustries" DataTextField="name" DataValueField="industry_id" AppendDataBoundItems=" <asp:ListItem Text="(Please Select)" Value="-1" /> </asp:DropDownList> &nbsp;&nbsp; <ajax:CascadingDropDown ID="CascadingDropDownIndustry" runat="server" ParentControlID="DropDownListIndustry" TargetControlID="DropDownListSubIndustry" ServicePath="AjaxDataProvider.asmx" ServiceMethod="GetSubIndustry" Category="SubIndustry" /> <asp:DropDownList ID="DropDownListSubIndustry" runat="server"/> No surprises there. However, I sometimes want to set the values of the parent and target from a DB (I want to default them, based on a code entered by the user; the whole thing is wrapped in an update panel). So if the user keys in ABC, I look up ABC in the DB and default the parent DropDown to ID 10 and the child DropDown to ID 101. However, this fails because the child has no items when the server-side code runs (the web script method hasn't run, because the content of the parent dd wasn't changed on the client side). Does anybody know how to work around this? Thanks for any help! Ryan

    Read the article

  • Using jQuery and AJAX works for all functions except one, really bizarre issue (from my perspective)

    - by CoreyT
    I am working on a classic ASP form that has a number of dropdowns. Three of these are cascading, i.e. they rely on the previous dropdown. For almost everything this code works fine; one of them, however, is not playing nice. To start off I have a script tag with the following in it: $(document).ready(function () { $("#AcademicLevel").change(getList); $("#CourseDeliveryTime").change(updateLocation); $("#ProgramType").change(updateEntryTerm); }); This works just fine for the first two elements of the form, AcademicLevel and CourseDeliveryTime; the third, however, does not take effect. If I use Firebug's Console and run that same line of code, $("#ProgramType").change(updateEntryTerm);, it starts to work, sort of. What happens is what confuses me. If the function it is pointing to, updateEntryTerm, has an alert() call in it, it works. If the alert is commented out, it does not work. The function is below: function updateEntryTerm() { $.ajax({ type: "POST", url: "../Classic ASP and AJAX/jQueryExample.asp", dataType: "application/x-www-form-urlencoded", data: "Action=UpdateEntryTerm&acadLevel=" + $("#AcademicLevel").val() + "&courseTime=" + $("#CourseDeliveryTime").val() + "&programType=" + $("#ProgramType").val(), async: false, success: function (msg) { $("#EntryTerm").remove(); $("#tdEntryTerm").append(msg); //alert(msg); } //, //error: function (xhr, option, err) { // alert("XHR Status: " + xhr.statusText + ", Error - " + err); //} }); } I am lost on two different issues here. First, why is the call to $("#ProgramType").change(updateEntryTerm); not working unless I run it in the Firebug Console? Second, why does the function itself, updateEntryTerm, not work unless the alert() call is present? Has anyone seen something like this before?

    Read the article

  • Combo-box values automatically update

    - by glinch
    Hi all, hopefully somebody can help. The table structure is as follows: tblCompany: compID compName tblOffice: offID, compID, add1, add2, add3 etc... tblEmployee: empID Name, telNo, etc... offID I have a form that contains contact details for employees; all works OK using After Update. A cascading combo box, cmbComp, allows me to select a company and in turn select the appropriate office, cboOff, and updates the corresponding tblEmployee.offID field correctly. Fields are automatically updated for the address also. cmbComp: RowSource SELECT DISTINCT tblOffice.compID, tblCompany.compID FROM tblCompany INNER JOIN AdjusterCompanyOffice ON tblCompany.compID=tblOffice.compID ORDER BY tblCompany.compName; cboOff: RowSource SELECT tblCompany.offID, tblCompany.Address1, tblCompany.Address2, tblCompany.Address3, tblCompany.Address4, tblCompany.Address5 FROM tblCompany ORDER BY tblCompany.Address1; The problem I am having is how to retrieve the data and automatically load cmbComp and the text fields when I load a new record. The cboOff combo box loads correctly, as the control source for this is the offID. I imagine there must be a way of setting the value on opening the record? Not sure how, though. I don't think I can set the ControlSource of cmbComp or the text fields, or can I? Any help/pointer in the right direction appreciated; I have been searching for a way to do this but can't get anywhere!

    Read the article

  • Best approach for a multi-tab ASP.NET AJAX control?

    - by NovaJoe
    Looking for some implementation advice: I have a page that has a 3-tab ajaxToolkit:TabContainer. The purpose of the page is to expose a calculator that has two basic inputs: geo-location and date. The three tabs are labeled "City and State", "Postal Code", and "GPS Coordinates". The layout of each tab container is the same for each tab, with the exception of the location section; the location section changes because each type of location has different inputs. For example, to specify city/state, there will be three fields: city, country, and state (country and state will use cascading drop-down lists). But Postal code requires only one field (which will validate via regular expression for allowed countries). See the example design mockup: So, what I WOULD LIKE to do (in order to minimize duplicate code), is to have a common control that contains the layout and structure of the calculator without specifying anything about the location section. Then, I'd like to be able to pull in each of the unique location controls based on what tab is selected. The tab structure exists at the page level, not in a control. Any advice? I was looking at templated controls (see MSDN article here), but I'm not convinced that it's the right solution. If I HAVE to create three separate controls with similar layouts and common elements, then that's what I have to do. But REALLY, I'd prefer a more elegant, inheritance-based solution. Any advice would be greatly appreciated. Thanks.

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology Reports To: Chief Information Officer Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks. Essential Duties & Responsibilities: This person is involved in all aspects of the software development process such as: Participation in software product definitions, including requirements analysis and specification Development and refinement of simulations or prototypes to confirm requirements Feasibility and cost-benefit analysis, including the choice of architecture and framework Application and database design Implementation (e.g. installation, configuration, customization, integration, data migration) Authoring of documentation needed by users and partners Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles Maintenance Qualifications: Bachelor's degree in computer science or software engineering Several years of professional programming experience Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP) Hypertext Markup Language (HTML) JavaScript Cascading Style Sheets (CSS) Proficiency in the following principles, practices, and techniques: Accessibility Interoperability Usability Security (especially prevention of SQL injection and cross-site scripting (XSS) attacks) Object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism, etc.) Relational database design (e.g. normalization, orthogonality) Search engine optimization (SEO) Asynchronous JavaScript and XML (AJAX) Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET ADO.NET (including ADO.NET Entity Framework) ASP.NET (including ASP.NET MVC Framework) Windows Presentation Foundation (WPF) Language Integrated Query (LINQ) Extensible Application Markup Language (XAML) jQuery Transact-SQL (T-SQL) Microsoft Visual Studio Microsoft Internet Information Services (IIS) Microsoft SQL Server Adobe Photoshop

    Read the article

  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema. I want to maintain referential integrity. To simplify, my case is like this: MainTable has the following fields: 1) MainID integer PK 2) Description varchar(50) 3) ForeignKey integer FK to MainID of SecondaryTable SecondaryTable has the following fields: 4) MainID integer PK (referenced by (3)) 5) AnotherDescription varchar(50) The goal I'm trying to accomplish is "export all records from MainTable using a WHERE condition", for example all records where MainID < 100. To do it manually I should first export all data from SecondaryTable contained in this select: select * from SecondaryTable ST outer join PrimaryTable PT on ST.MainID=PT.MainID then export the needed records from MainTable: select * from MainTable where MainID < 100. This is manual, OK. Of course my case is much more complex: I have 200+ tables, so doing it manually is painful/impossible, and I have many cascading FKs. Is there a way to force the copy of MainTable only, "enforcing referential integrity", so that my query is something like: select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources" In this case field (5) will also be copied. ====================================================== Is there a syntax or a tool to do this? Table by table I'd like to insert conditions (like MainID < 100 only for MainTable, but I also have other tables).

    Read the article

  • LinqToSQL: Not possible to update PrimaryKey?

    - by Zuhaib
    I have a simple table (let's call it Table1) that has an NVARCHAR field as the PK. Table1 has no association with any other tables. When I update Table1's PK column using LinqToSQL it fails. If I update another column it succeeds. I could delete this row and insert a new one in Table1, but I don't want to. There is a transaction table which has Table1's PK column as a column. When the PK of Table1 is changed I want no effect in the transaction table. But when the row from Table1 is deleted, I want the transaction rows to be deleted. The cascading is done via a trigger. As there is no association between these two tables, if I update the PK column of Table1 using normal SQL, it works and there is no effect on the transaction table, as expected. When I delete the row, the trigger deletes the rows from the transaction table. For this reason I can't delete and then add a new row in Table1. So what can be done to successfully update the PrimaryKey of Table1?

    Read the article

  • Can I get the item type from a BindingSource?

    - by Preston
    I would like to get the Type of item that a BindingSource is hooked up to or configured for. The BindingSource.DataSource property can be set to an object, list, or type. If it is a Type, it obviously does not have a bound item yet, but I would still like to get the Type. For a List, I need the item Type, not the list type. There is a method, BindingSource.GetItemProperties, that accomplishes most of what I need. It gets the PropertyDescriptors for the Type, list item, or object specified by the DataSource and DataMember. The reason I ask is that I have some components that I would like to re-use, but they are currently set up to work off of an item Type. The Type is mainly used to get PropertyInfos and then build a UI, but PropertyInfo and PropertyDescriptor are not exactly the same, so it is not immediately apparent that they can be reworked to use a PropertyDescriptor collection instead. Then there is the cascading refactoring of everything already using these components, and that adds up to a lot of work that I would rather not do. I've looked through the API docs for a good way to do this, but so far I have not had any luck. Am I missing something, or is this just something I cannot or should not be doing?

    Read the article

  • Doctrine 2 entity cannot be saved with its many-to-many related entities

    - by user1333185
    I'm writing a project in ZF and have many-to-many related account and video entities, and I provide the account with the videos collection. When I try to save the account I receive the following error: A new entity was found through the relationship 'Entities\Account#playlistVideos' that was not configured to cascade persist operations for entity: Entities\Video@000000004d5f02ef00000000196757dc. Explicitly persist the new entity or configure cascading persist operations on the relationship. If you cannot find out which entity causes the problem implement 'Entities\Video#__toString()' to get a clue. I found a suggested solution with cascade={"persist"}, which should allow videos to be saved together with the account by only specifying $this->em->persist($account), but then I get the error: Fatal error: method_exists(): The script tried to execute a method or access a property of an incomplete object. Please ensure that the class definition "Proxies\EntitiesAccountProxy" of the object you are trying to operate on was loaded before unserialize() gets called or provide a __autoload() function to load the class definition in... The same happens when I try to manually persist the videos and the account. Thank you for the help in advance.

    Read the article

  • How to populate a dropdownlist with json data in jquery?

    - by Pandiya Chendur
    I am developing a country/state cascading dropdown list... I return a JSON result based on countryId but I don't know how to populate/fill a new dropdown list box with it... Here is what I am using: function getstate(countryId) { $.ajax({ type: "POST", url: "Reg_Form.aspx/Getstates", data: "{'countryId':" + (countryId) + "}", contentType: "application/json; charset=utf-8", global: false, async: false, dataType: "json", success: function(jsonObj) { alert(jsonObj.d); } }); return false; } And the alert gave this: {"Table" : [{"stateid" : "2","statename" : "Tamilnadu"}, {"stateid" : "3","statename" : "Karnataka"}, {"stateid" : "4","statename" : "Andaman and Nicobar"}, {"stateid" : "5","statename" : "Andhra Pradesh"}, {"stateid" : "6","statename" : "Arunachal Pradesh"}]} And my aspx page has this: <td> <asp:DropDownList ID="DLCountry" runat="server" CssClass="dropDownListSkin" onchange="return getstate(this.value);"> </asp:DropDownList> </td> <td> <asp:DropDownList ID="DLState" runat="server" CssClass="dropDownListSkin"> </asp:DropDownList> </td> Any suggestions on how to fill the DLState dropdown...

    Read the article

  • Handling multiple media queries in Sass with Twitter Bootstrap

    - by Keith
    I have a Sass mixin for my media queries based on Twitter Bootstrap's responsive media queries: @mixin respond-to($media) { @if $media == handhelds { /* Landscape phones and down */ @media (max-width: 480px) { @content; } } @else if $media == small { /* Landscape phone to portrait tablet */ @media (max-width: 767px) {@content; } } @else if $media == medium { /* Portrait tablet to landscape and desktop */ @media (min-width: 768px) and (max-width: 979px) { @content; } } @else if $media == large { /* Large desktop */ @media (min-width: 1200px) { @content; } } @else { @media only screen and (max-width: #{$media}px) { @content; } } } And I call them throughout my SCSS file like so: .link { color:blue; @include respond-to(medium) { color: red; } } However, sometimes I want to style multiple queries with the same styles. Right now I'm doing them like this: .link { color:blue; /* this is fine for handheld and small sizes*/ /*now I want to change the styles that are cascading to medium and large*/ @include respond-to(medium) { color: red; } @include respond-to(large) { color: red; } } but I'm repeating code so I'm wondering if there is a more concise way to write it so I can target multiple queries. Something like this so I don't need to repeat my code (I know this doesn't work): @include respond-to(medium, large) { color: red; } Any suggestions on the best way to handle this?

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?
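
    The code in question is C#/LINQ to SQL, but the partition-and-hydrate pattern itself is language-agnostic. As a hedged sketch only (shown in Java, with getItem standing in for the real GetItem call and a pool of 5 workers to mirror the 5-thread experiment), the serial hydration loop can be fanned out like this; in the L2S setting each worker would also need its own DataContext, since a context is not thread-safe:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import java.util.function.Function;

        class ParallelHydration {
            // Fan the per-item "get fully hydrated entity" calls out over a fixed pool of workers.
            static <T> List<T> hydrateAll(List<Long> ids, Function<Long, T> getItem)
                    throws InterruptedException, ExecutionException {
                ExecutorService pool = Executors.newFixedThreadPool(5);
                try {
                    List<Future<T>> futures = new ArrayList<>();
                    for (final Long id : ids) {
                        final Callable<T> task = () -> getItem.apply(id);  // one GetItem-style call per task
                        futures.add(pool.submit(task));
                    }
                    List<T> hydrated = new ArrayList<>();
                    for (Future<T> future : futures) {
                        hydrated.add(future.get());  // blocks; results keep the original order
                    }
                    return hydrated;
                } finally {
                    pool.shutdown();
                }
            }
        }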

    Read the article

  • Strange offset space between <button> as parent container and <div> as child.

    - by Maxja
    I need to decorate a standard HTML button. I took <button> as the base element instead of <input>, because I decided that the element must be a parent container. There is a child <div> element in it. This <div> element will be the core element for decoration, and should occupy the entire space of the parent button element. <button> <div>*decoration goes here*</div> </button> And with Cascading Style Sheets it might look like this: css button { margin: 0; border: 0; padding: 0; width: *150*px; height: *50*px; position: relative; } div { margin: 0; border: 0; padding: 0; width: 100%; height: 100%; background: *black*; position: absolute; top: 0; left: 0; } html <button type="button"> <div>*decoration goes here*</div> </button> So, Opera(10) does well, and the WebKit-based Chrome(6) and Safari(4) also do well, but Internet Explorer(8) is bad - the DOM inspector shows some strange offset space at the top and left - and Firefox(3) is also bad - the DOM inspector shows that the <div> gets negative values in the CSS properties right: and bottom:. Even if these properties are set to zero(0), the DOM inspector still shows the same negative values. I have almost broken my brain. Please help me solve this problem!

    Read the article

  • jQuery .ajax method executing but not firing error (or success)

    - by John Swaringen
    I'm having a very strange situation. I have a .NET Web Service (.asmx) that I'm trying to call via jQuery AJAX to populate some <select></select>s. The service is running System.Web.Services like it's supposed to and should be returning a JSON list. At least that's the way I understand that it will in .NET 3.5/4.0. The main problem is that the .ajax() function (below) seems to execute but the internal callbacks aren't executing. I know it's executing because I can do something before and after the function and both execute, so JavaScript isn't blowing up. $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: 'http://localhost:59191/AFEDropDownService/Service.asmx/GetAVP', data: "{}", dataType: "jsonp", crossDomain: true, success: function(list) { console.log(list); console.log("success!!"); }, error: function(msg) { console.log("error: " + msg.text); } }); If you need more of the code let me know. Or let me know if you know a better way, as I'm trying to build cascading <select></select>s.

    Read the article

  • How to emulate a BEFORE DELETE trigger in SQL Server 2005

    - by Mark
    Let's say I have three tables, [ONE], [ONE_TWO], and [TWO]. [ONE_TWO] is a many-to-many join table with only [ONE_ID] and [TWO_ID] columns. There are foreign keys set up to link [ONE] to [ONE_TWO] and [TWO] to [ONE_TWO]. The FKs use the ON DELETE CASCADE option so that if either a [ONE] or [TWO] record is deleted, the associated [ONE_TWO] records will be automatically deleted as well. I want to have a trigger on the [TWO] table such that when a [TWO] record is deleted, it executes a stored procedure that takes a [ONE_ID] as a parameter, passing the [ONE_ID] values that were linked to the [TWO_ID] before the delete occurred: DECLARE @Statement NVARCHAR(max) SET @Statement = '' SELECT @Statement = @Statement + N'EXEC [MyProc] ''' + CAST([one_two].[one_id] AS VARCHAR(36)) + '''; ' FROM deleted JOIN [one_two] ON deleted.[two_id] = [one_two].[two_id] EXEC (@Statement) Clearly, I need a BEFORE DELETE trigger, but there is no such thing in SQL Server 2005. I can't use an INSTEAD OF trigger because of the cascading FK. I get the impression that if I use a FOR DELETE trigger, when I join [deleted] to [ONE_TWO] to find the list of [ONE_ID] values, the FK cascade will have already deleted the associated [ONE_TWO] records so I will never find any [ONE_ID] values. Is this true? If so, how can I achieve my objective? I'm thinking that I'd need to change the FK joining [TWO] to [ONE_TWO] to not use cascades and to do the delete from [ONE_TWO] manually in the trigger just before I manually delete the [TWO] records. But I'd rather not go through all that if there is a simpler way.

    Read the article
