Search Results

Search found 8379 results on 336 pages for 'article'.


  • Forcing a transaction to rollback on validation errors in Seam

    - by Chris Williams
    Quick version: We're looking for a way to force a transaction to roll back when specific situations occur during the execution of a method on a backing bean, but we'd like the rollback to happen without showing the user a generic 500 error page. Instead, we'd like the user to see the form she just submitted and a FacesMessage that indicates what the problem was.

    Long version: We've got a few backing beans that use components to perform a few related operations in the database (using JPA/Hibernate). During the process, an error can occur after some of the database operations have happened. This could be for a few different reasons, but for this question, let's assume there's been a validation error that is detected after some DB writes have happened and that wasn't detectable before the writes occurred. When this happens, we'd like to make sure all of the DB changes up to this point are rolled back.

    Seam can deal with this: if you throw a RuntimeException out of the current FacesRequest, Seam will roll back the current transaction. The problem is that the user is then shown a generic error page. In our case, we'd actually like the user to be shown the page she was on, with a descriptive message about what went wrong, and to have the opportunity to correct the bad input that caused the problem.

    The solution we've come up with is to throw an exception annotated with @ApplicationException( rollback = true ) from the component that discovers the validation problem. Our backing bean can then catch this exception, assume the component that threw it has published the appropriate FacesMessage, and simply return null to take the user back to the input page with the error displayed. The ApplicationException annotation tells Seam to roll back the transaction, and we're not showing the user a generic error page.

    This worked well in the first place we used it, which happened to only be doing inserts. In the second place we tried to use it, we have to delete something during the process. In this second case, everything works if there's no validation error. If a validation error does happen, the rollback exception is thrown and the transaction is marked for rollback. Even if no database modifications have happened that need rolling back, when the user fixes the bad data and re-submits the page, we get:

        java.lang.IllegalArgumentException: Removing a detached instance

    The detached instance is lazily loaded from another object (there's a many-to-one relationship). That parent object is loaded when the backing bean is instantiated. Because the transaction was rolled back after the validation error, the object is now detached. Our next step was to change this page from conversation scope to page scope. When we did this, Seam couldn't even render the page after the validation error, because our page has to hit the DB to render, and the transaction had been marked for rollback.

    So my question is: how are other people handling errors cleanly while properly managing transactions at the same time? Better yet, I'd love to be able to use everything we have now if someone can spot something I'm doing wrong that would be relatively easy to fix. I've read the Seam Framework article on unified error-page and exception handling, but it is geared towards the more generic errors your application might encounter.
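    A minimal sketch of the pattern described above. The class, service and method names are hypothetical, and the annotation's package should be verified against your Seam version:

        // Seam 2.x-style rollback-marking exception (assumed package name)
        import org.jboss.seam.annotations.exceptions.ApplicationException;

        @ApplicationException(rollback = true)
        public class ValidationFailedException extends RuntimeException {
            public ValidationFailedException(String message) {
                super(message);
            }
        }

        // In the backing bean: catch it and keep the user on the form.
        public String save() {
            try {
                orderService.process(order); // hypothetical component call that may throw
                return "success";
            } catch (ValidationFailedException e) {
                // the component already published a FacesMessage; returning null
                // re-renders the current view with that message displayed
                return null;
            }
        }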


  • Resizing Images In a Table Using JavaScript

    - by Abluescarab
    Hey, all. I apologize beforehand if none of this makes sense, or if I sound incompetent... I've been making webpages for a while, but don't know any JavaScript. I am trying to make a webpage for a friend, and he asked if I could write some code to resize the images in the page based on the user's screen resolution. I did some research on this question, and it's kind of similar to this, this, and this. Unfortunately, those articles didn't answer my question entirely, because none of the methods described worked.

    Right now, I have a three-column webpage with 10 images in a table in the left sidebar, and when I use percentages for their sizes, they don't resize right between monitors. As such, I'm trying to write a JavaScript function that changes their sizes after detecting the screen resolution. The code gets stripped from my post if I include it all, so I'll just say that each image links to a website and uses a separate image to change color when you hover over it. Would I have to address both images to change their sizes correctly? I use a JavaScript function to switch between them. Anyway, I tried both methods in this article and neither worked for me. If it helps, I'm using Google Chrome, but I'm trying to make this page as cross-browser as possible. Here's the code I have so far in my JavaScript function:

        function resizeImages() {
            var w = window.width;
            var h = window.height;
            var yuk = document.getElementById('yuk').style;
            var wb = document.getElementById('wb').style;
            var tf = document.getElementById('tf').style;
            var lh = document.getElementById('lh').style;
            var ko = document.getElementById('ko').style;
            var gz = document.getElementById('gz').style;
            var fb = document.getElementById('fb').style;
            var eg = document.getElementById('eg').style;
            var dl = document.getElementById('dl').style;
            var da = document.getElementById('da').style;
            if (w = "800" && h = "600") {
            } else if (w = "1024" && h = "768") {
            } else if (w = "1152" && h = "864") {
            } else if (w = "1280" && h = "720") {
            } else if (w = "1280" && h = "768") {
            } else if (w = "1280" && h = "800") {
            } else if (w = "1280" && h = "960") {
            } else if (w = "1280" && h = "1024") {
            }
        }

    Yeah, I don't have much in it, because I don't know if I'm doing it right yet. Is this a way I can detect the "width" and "height" properties of a window? The "yuk", "wb", etcetera are the images I'm trying to change the size of. To sum it up: I want to resize images based on screen resolution using JavaScript, but my research attempts have been... futile. I'm sorry if that was long-winded, but thanks ahead of time!
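    A hedged sketch of the two fixes the snippet above needs: window has no width/height properties (window.innerWidth, with a documentElement fallback for old IE, is the usual way to read the viewport), and = assigns rather than compares. Scaling by a ratio also avoids enumerating every resolution; the 1280 reference width and 150px base image width below are assumptions, not values from the original page:

        function resizeImages() {
            var w = window.innerWidth || document.documentElement.clientWidth;
            var scale = w / 1280; // 1280px design width: an assumption to tune
            var ids = ['yuk', 'wb', 'tf', 'lh', 'ko', 'gz', 'fb', 'eg', 'dl', 'da'];
            for (var i = 0; i < ids.length; i++) {
                var img = document.getElementById(ids[i]);
                // assumed 150px base width; use == / >= if you do compare sizes
                if (img) img.style.width = Math.round(150 * scale) + 'px';
            }
        }

    The hover images would be resized the same way, either by including their ids in the list or by styling both images through a shared CSS class.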


  • Facelet selectOneMenu with POJOs and converter problem

    - by c0d3x
    Hi, I want to have a Facelets page with a form and a dropdown menu. With the dropdown menu the user should select a POJO of the type Lieferant:

        public class Lieferant extends AbstractPersistentWarenwirtschaftsObject {
            private String firma;

            public Lieferant(WarenwirtschaftDatabaseLayer database, String firma) {
                this(database, null, firma);
            }

            public Lieferant(WarenwirtschaftDatabaseLayer database, Long primaryKey, String firma) {
                super(database, primaryKey);
                this.firma = firma;
            }

            public String getFirma() {
                return firma;
            }

            @Override
            public String toString() {
                return getFirma();
            }

            @Override
            public int hashCode() {
                final int prime = 31;
                int result = 1;
                result = prime * result + ((firma == null) ? 0 : firma.hashCode());
                return result;
            }

            @Override
            public boolean equals(Object obj) {
                if (this == obj) return true;
                if (obj == null) return false;
                if (getClass() != obj.getClass()) return false;
                Lieferant other = (Lieferant) obj;
                if (firma == null) {
                    if (other.firma != null) return false;
                } else if (!firma.equals(other.firma)) return false;
                return true;
            }
        }

    The selection is done with an <h:selectOneMenu> tag, which should display a list of POJOs (not beans) of the type Lieferant. Here is the Facelets code:

        <h:selectOneMenu id="lieferant" value="#{lieferantenBestellungBackingBean.lieferant}">
            <f:selectItems var="lieferant" value="#{lieferantenBackingBean.lieferanten}"
                           itemLabel="#{lieferant.firma}" itemValue="#{lieferant.primaryKey}" />
            <f:converter converterId="LieferantConverter" />
        </h:selectOneMenu>

    Here is the referenced managed backing bean:

        @ManagedBean
        @RequestScoped
        public class LieferantenBackingBean extends AbstractWarenwirtschaftsBackingBean {
            private List<Lieferant> lieferanten;

            public List<Lieferant> getLieferanten() {
                if (lieferanten == null) {
                    lieferanten = getApplication().getLieferanten();
                }
                return lieferanten;
            }
        }

    As far as I know, I need a custom converter to convert between POJO and String representations of the Lieferant objects. Here is what the converter looks like:

        @FacesConverter(value="LieferantConverter")
        public class LieferantConverter implements Converter {

            @Override
            public Object getAsObject(FacesContext context, UIComponent component, String value) {
                long primaryKey = Long.parseLong(value);
                WarenwirtschaftApplicationLayer application = WarenwirtschaftApplication.getInstance();
                Lieferant lieferant = application.getLieferant(primaryKey);
                return lieferant;
            }

            @Override
            public String getAsString(FacesContext context, UIComponent component, Object value) {
                return value.toString();
            }
        }

    The page loads without any error. When I fill out the form and submit it, this error message is displayed on the page:

        Bestellung:lieferantenBestellungForm:lieferant: Validierungsfehler: Wert ist keine gültige Auswahl

    Translated: validation error: value is not a valid selection. Unfortunately, it does not say which value it is talking about. The converter seems to work correctly. I found this similar question on Stack Overflow and this article about selectOneMenu and converters, but I was not able to find the problem in my code. Why is List<SelectItem> used in the example from the second link? It gives the same error for me. Any help would be appreciated. Thanks in advance.
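    One likely culprit, offered as an assumption since the full page isn't shown: itemValue submits the primary key as a string, while the converter's getAsString returns value.toString(), i.e. the firma, so the submitted value never matches any rendered item value and JSF rejects the selection. A sketch of a consistent pairing, letting the converter own both directions:

        <h:selectOneMenu id="lieferant" value="#{lieferantenBestellungBackingBean.lieferant}">
            <!-- pass the POJO itself; the converter produces and parses the key -->
            <f:selectItems var="lieferant" value="#{lieferantenBackingBean.lieferanten}"
                           itemLabel="#{lieferant.firma}" itemValue="#{lieferant}" />
            <f:converter converterId="LieferantConverter" />
        </h:selectOneMenu>

        @Override
        public String getAsString(FacesContext context, UIComponent component, Object value) {
            // assumes AbstractPersistentWarenwirtschaftsObject exposes getPrimaryKey()
            return String.valueOf(((Lieferant) value).getPrimaryKey());
        }

    For the match to succeed, equals() on Lieferant must hold between the converted object and the list entries, which the class above already provides.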


  • A ResetBindings on a BindingSource of a Grid also resets ComboBox

    - by Tetsuo
    Hi there, I have a DataGridView with a BindingSource of products. These products have an enum (Producer). Below the DataGridView there are text fields (to edit the product), and a method RefreshProduct that calls ResetBindings at the end to refresh the DataGridView. There is a ComboBox (cboProducer), too. When _orderBs.ResetBindings(false) runs, it also resets cboProducer, which sits outside the DataGridView. Could you please help me avoid this? Here is some code; maybe it makes things easier to understand.

        public partial class SelectProducts : UserControl
        {
            private AutoCompleteStringCollection _productCollection;
            private ProductBL _productBL;
            private OrderBL _orderBL;
            private SortableBindingList<ProductBE> _listProducts;
            private ProductBE _selectedProduct;
            private OrderBE _order;
            BindingSource _orderBs = new BindingSource();

            public SelectProducts()
            {
                InitializeComponent();
                if (_productBL == null) _productBL = new ProductBL();
                if (_orderBL == null) _orderBL = new OrderBL();
                if (_productCollection == null) _productCollection = new AutoCompleteStringCollection();
                if (_order == null) _order = new OrderBE();
                if (_listProducts == null)
                {
                    _listProducts = _order.ProductList;
                    _orderBs.DataSource = _order;
                    grdOrder.DataSource = _orderBs;
                    grdOrder.DataMember = "ProductList";
                }
            }

            private void cmdGetProduct_Click(object sender, EventArgs e)
            {
                ProductBE product = _productBL.Load(txtProductNumber.Text);
                _listProducts.Add(product);
                _orderBs.ResetBindings(false);
            }

            private void grdOrder_SelectionChanged(object sender, EventArgs e)
            {
                if (grdOrder.SelectedRows.Count > 0)
                {
                    _selectedProduct = (ProductBE)((DataGridView)(sender)).CurrentRow.DataBoundItem;
                    if (_selectedProduct != null)
                    {
                        txtArticleNumber.Text = _selectedProduct.Article;
                        txtPrice.Text = _selectedProduct.Price.ToString("C");
                        txtProducerNew.Text = _selectedProduct.ProducerText;
                        cboProducer.DataSource = Enum.GetValues(typeof(Producer));
                        cboProducer.SelectedItem = _selectedProduct.Producer;
                    }
                }
            }

            private void txtProducerNew_Leave(object sender, EventArgs e)
            {
                string property = CommonMethods.GetPropertyName(() => new ProductBE().ProducerText);
                RefreshProduct(((TextBoxBase)sender).Text, property);
            }

            private void RefreshProduct(object value, string property)
            {
                if (_selectedProduct != null)
                {
                    double valueOfDouble;
                    if (double.TryParse(value.ToString(), out valueOfDouble))
                    {
                        value = valueOfDouble;
                    }
                    Type type = _selectedProduct.GetType();
                    PropertyInfo info = type.GetProperty(property);
                    if (info.PropertyType.BaseType == typeof(Enum))
                    {
                        value = Enum.Parse(info.PropertyType, value.ToString());
                    }
                    try
                    {
                        Convert.ChangeType(value, info.PropertyType, new CultureInfo("de-DE"));
                        info.SetValue(_selectedProduct, value, null);
                    }
                    catch (Exception ex)
                    {
                        throw new WrongFormatException("\"" + value.ToString() + "\" is not a valid value.", ex);
                    }
                    var produktFromList = _listProducts.Single(p => p.Position == _selectedProduct.Position);
                    info.SetValue(produktFromList, value, null);
                    _orderBs.ResetBindings(false);
                }
            }

            private void cboProducer_SelectedIndexChanged(object sender, EventArgs e)
            {
                var selectedIndex = ((ComboBox)(sender)).SelectedIndex;
                switch ((Producer)selectedIndex)
                {
                    case Producer.ABC: txtProducerNew.Text = Constants.ABC; break;
                    case Producer.DEF: txtProducerNew.Text = Constants.DEF; break;
                    case Producer.GHI: txtProducerNew.Text = Constants.GHI; break;
                    case Producer.Another: txtProducerNew.Text = String.Empty; break;
                    default: break;
                }
                string property = CommonMethods.GetPropertyName(() => new ProductBE().Producer);
                RefreshProduct(selectedIndex, property);
            }
        }
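    One common workaround, offered as a sketch rather than the definitive fix: the grid refresh triggered by ResetBindings re-fires grdOrder_SelectionChanged, which reassigns cboProducer.DataSource and thereby resets the ComboBox. A guard flag suppresses that re-entrant work:

        private bool _refreshing; // true while a ResetBindings-driven refresh runs

        private void grdOrder_SelectionChanged(object sender, EventArgs e)
        {
            if (_refreshing || grdOrder.SelectedRows.Count == 0) return;
            // ... existing body unchanged ...
        }

        private void RefreshProduct(object value, string property)
        {
            // ... existing body, with the final call wrapped: ...
            _refreshing = true;
            try { _orderBs.ResetBindings(false); }
            finally { _refreshing = false; }
        }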


  • Two-way databinding of a custom templated control. Eval works, but not Bind.

    - by Jason
    I hate long code snippets and I'm sorry about this one, but it turns out that this ASP.NET stuff can't get much shorter, and it's so specific that I haven't been able to generalize it without a full code listing. I just want simple two-way, declarative databinding to a single instance of an object. Not a list of objects of a type with a bunch of NotImplementedExceptions for Add, Delete, and Select, but just a single view-state-persisted object. This is certainly something that can be done, but I've struggled with an implementation for years. This newest, closest implementation was inspired by this article from 4-Guys-From-Rolla: http://msdn.microsoft.com/en-us/library/aa478964.aspx. Unfortunately, after implementing it, I'm getting the following error and I don't know what I'm missing:

        System.InvalidOperationException: Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control.

    If I don't use Bind(), and only use Eval() functionality, it works. In that way, the error is especially confusing. Here's the simplified codeset that still produces the error:

        using System.ComponentModel;

        namespace System.Web.UI.WebControls.Special
        {
            public class SampleFormData
            {
                public string SampleString = "Sample String Data";
                public int SampleInt = -1;
            }

            [ToolboxItem(false)]
            public class SampleSpecificFormDataContainer : WebControl, INamingContainer
            {
                SampleSpecificEntryForm entryForm;

                internal SampleSpecificEntryForm EntryForm { get { return entryForm; } }

                [Bindable(true), Category("Data")]
                public string SampleString
                {
                    get { return entryForm.FormData.SampleString; }
                    set { entryForm.FormData.SampleString = value; }
                }

                [Bindable(true), Category("Data")]
                public int SampleInt
                {
                    get { return entryForm.FormData.SampleInt; }
                    set { entryForm.FormData.SampleInt = value; }
                }

                internal SampleSpecificFormDataContainer(SampleSpecificEntryForm entryForm)
                {
                    this.entryForm = entryForm;
                }
            }

            public class SampleSpecificEntryForm : WebControl, INamingContainer
            {
                #region Template
                private IBindableTemplate formTemplate = null;

                [Browsable(false), DefaultValue(null),
                 TemplateContainer(typeof(SampleSpecificFormDataContainer), ComponentModel.BindingDirection.TwoWay),
                 PersistenceMode(PersistenceMode.InnerProperty)]
                public virtual IBindableTemplate FormTemplate
                {
                    get { return formTemplate; }
                    set { formTemplate = value; }
                }
                #endregion

                #region Viewstate
                SampleFormData FormDataVS
                {
                    get { return (ViewState["FormData"] as SampleFormData) ?? new SampleFormData(); }
                    set { ViewState["FormData"] = value; SaveViewState(); }
                }
                #endregion

                public override ControlCollection Controls
                {
                    get { EnsureChildControls(); return base.Controls; }
                }

                private SampleSpecificFormDataContainer formDataContainer = null;

                [Browsable(false), DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
                public SampleSpecificFormDataContainer FormDataContainer
                {
                    get { EnsureChildControls(); return formDataContainer; }
                }

                [Bindable(true), Browsable(false)]
                public SampleFormData FormData
                {
                    get { return FormDataVS; }
                    set { FormDataVS = value; }
                }

                protected override void CreateChildControls()
                {
                    if (!this.ChildControlsCreated)
                    {
                        Controls.Clear();
                        formDataContainer = new SampleSpecificFormDataContainer(this);
                        Controls.Add(formDataContainer);
                        FormTemplate.InstantiateIn(formDataContainer);
                        this.ChildControlsCreated = true;
                    }
                }

                public override void DataBind()
                {
                    CreateChildControls();
                    base.DataBind();
                }
            }
        }

    With an ASP.NET page like the following:

        <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
            CodeBehind="Default2.aspx.cs" Inherits="EntryFormTest._Default2" EnableEventValidation="false" %>
        <%@ Register Assembly="EntryForm" Namespace="System.Web.UI.WebControls.Special" TagPrefix="cc1" %>
        <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
        </asp:Content>
        <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
            <h2>Welcome to ASP.NET!</h2>
            <cc1:SampleSpecificEntryForm ID="EntryForm1" runat="server">
                <FormTemplate>
                    <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("SampleString") %>'></asp:TextBox><br />
                    <h3>(<%# Container.SampleString %>)</h3><br />
                    <asp:Button ID="Button1" runat="server" Text="Button" />
                </FormTemplate>
            </cc1:SampleSpecificEntryForm>
        </asp:Content>

    Default2.aspx.cs:

        using System;

        namespace EntryFormTest
        {
            public partial class _Default2 : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    EntryForm1.DataBind();
                }
            }
        }

    Thanks for any help!
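    For what it's worth, here is a hedged sketch of the missing half of the two-way contract: Bind() only surrenders posted values when the owning control asks the template for them through IBindableTemplate.ExtractValues. Something along these lines (written against the documented interface, untested here) would pull the values back out on postback:

        protected void ExtractFormValues()
        {
            System.Collections.Specialized.IOrderedDictionary values =
                FormTemplate.ExtractValues(FormDataContainer);
            SampleFormData data = FormData;
            foreach (System.Collections.DictionaryEntry entry in values)
            {
                // values arrive keyed by the names used in Bind("..."), e.g. "SampleString"
                System.Reflection.FieldInfo field = typeof(SampleFormData).GetField((string)entry.Key);
                if (field != null)
                    field.SetValue(data, Convert.ChangeType(entry.Value, field.FieldType));
            }
            FormData = data; // push the updated object back into view state
        }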


  • Picking good first estimates for Goldschmidt division

    - by Mads Elvheim
    I'm calculating fixed-point reciprocals in Q22.10 with Goldschmidt division for use in my software rasterizer on ARM. This is done by just setting the numerator to 1, i.e. the numerator becomes the scalar on the first iteration. To be honest, I'm kind of following the Wikipedia algorithm blindly here. The article says that if the denominator is scaled into the half-open range (0.5, 1.0], a good first estimate can be based on the denominator alone: let F be the estimated scalar and D be the denominator; then F = 2 - D. But when doing this, I lose a lot of precision. Say I want to find the reciprocal of 512.00002f. In order to scale the number down, I lose 10 bits of precision in the fraction part, which are shifted out.

    So, my questions are: Is there a way to pick a better estimate which does not require normalization? Also, is it possible to pre-calculate the first estimates so the series converges faster? Right now, it converges after the 4th iteration on average. On ARM this is about ~50 cycles worst case, and that's not taking emulation of clz/bsr into account, nor memory lookups. Here is my test case. Note: the software implementation of clz used here is from my earlier post; you can replace it with an intrinsic if you want.

        #include <stdio.h>
        #include <stdint.h>

        const unsigned int BASE = 22ULL;

        static unsigned int divfp(unsigned int val, int* iter)
        {
            /* Numerator, denominator, estimated scalar and previous denominator */
            unsigned long long N, D, F, DPREV;
            int bitpos;

            *iter = 1;
            D = val;
            /* Get the shift amount. + is right-shift, - is left-shift. */
            bitpos = 31 - clz(val) - BASE;
            /* Normalize into the half-range (0.5, 1.0] */
            if (0 < bitpos) D >>= bitpos;
            else            D <<= (-bitpos);

            /* (FNi / FDi) == (FN(i+1) / FD(i+1)) */
            /* F = 2 - D */
            F = (2ULL << BASE) - D;
            /* N = F for the first iteration, because the numerator is simply 1.
               So don't waste a 64-bit UMULL on a multiply by 1. */
            N = F;
            D = ((unsigned long long)D * F) >> BASE;

            while (1) {
                DPREV = D;
                F = (2 << (BASE)) - D;
                D = ((unsigned long long)D * F) >> BASE;
                /* Bail when we get the same value for two denominators in a row.
                   This means that the error is too small to make any further progress. */
                if (D == DPREV) break;
                N = ((unsigned long long)N * F) >> BASE;
                *iter = *iter + 1;
            }

            if (0 < bitpos) N >>= bitpos;
            else            N <<= (-bitpos);
            return N;
        }

        int main(int argc, char* argv[])
        {
            double fv, fa;
            int iter;
            unsigned int D, result;

            sscanf(argv[1], "%lf", &fv);
            D = fv * (double)(1 << BASE);
            result = divfp(D, &iter);
            fa = (double)result / (double)(1UL << BASE);
            printf("Value: %8.8lf 1/value: %8.8lf FP value: 0x%.8X\n", fv, fa, result);
            printf("iteration: %d\n", iter);
            return 0;
        }
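    One standard answer to the second question, stated as a sketch: the classical minimax linear estimate for reciprocals on the normalized interval replaces F = 2 - D with F0 = 48/17 - (32/17) * D, which roughly halves the worst-case initial error and typically saves an iteration or two. The fixed-point translation below is mine, so treat its details as assumptions to verify:

        /* Minimax linear first estimate for D normalized into (0.5, 1.0],
           in the same fixed-point base as the code above. */
        static unsigned long long first_estimate(unsigned long long D)
        {
            const unsigned long long C1 = (48ULL << BASE) / 17ULL; /* 48/17 */
            const unsigned long long C2 = (32ULL << BASE) / 17ULL; /* 32/17 */
            return C1 - ((C2 * D) >> BASE);
        }

    It still requires normalization, though; the normalization (and the 10 lost fraction bits) comes from representing intermediate values in the same Q format, so widening the intermediates rather than changing the estimate is the usual cure for that part.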


  • Why does this Quicksort work?

    - by IVlad
    I find this Quicksort partitioning approach confusing and wrong, yet it seems to work. I am referring to this pseudocode. Note: they also have a C implementation at the end of the article, but it's very different from their pseudocode, so I don't care about that. I have also written it in C like this, trying to stay true to the pseudocode as much as possible, even if that means doing some weird C stuff:

        #include <stdio.h>

        int partition(int a[], int p, int r)
        {
            int x = a[p];
            int i = p - 1;
            int j = r + 1;
            while (1) {
                do j = j - 1; while (!(a[j] <= x));
                do i = i + 1; while (!(a[i] >= x));
                if (i < j) {
                    int t = a[i];
                    a[i] = a[j];
                    a[j] = t;
                } else {
                    for (i = 1; i <= a[0]; ++i) printf("%d ", a[i]);
                    printf("- %d\n", j);
                    return j;
                }
            }
        }

        int main()
        {
            int a[100] = //{8, 6,10,13,15,8,3,2,12};
                         {7, 7, 6, 2, 3, 8, 4, 1};
            partition(a, 1, a[0]);
            return 0;
        }

    If you run this, you'll get the following output:

        1 6 2 3 4 8 7 - 5

    However, this is wrong, isn't it? Clearly a[5] does not have all the values before it lower than it, since a[2] = 6 > a[5] = 4. Not to mention that 7 is supposed to be the pivot (the initial a[p]), and yet its position is both incorrect and lost. The following partition algorithm is taken from Wikipedia:

        int partition2(int a[], int p, int r)
        {
            int x = a[r];
            int store = p;
            for (int i = p; i < r; ++i) {
                if (a[i] <= x) {
                    int t = a[i];
                    a[i] = a[store];
                    a[store] = t;
                    ++store;
                }
            }
            int t = a[r];
            a[r] = a[store];
            a[store] = t;
            for (int i = 1; i <= a[0]; ++i) printf("%d ", a[i]);
            printf("- %d\n", store);
            return store;
        }

    And it produces this output:

        1 6 2 3 8 4 7 - 1

    Which is a correct result in my opinion: the pivot (a[r] = a[7]) has reached its final position. However, if I use the initial partitioning function in the following algorithm:

        void Quicksort(int a[], int p, int r)
        {
            if (p < r) {
                int q = partition(a, p, r); // initial partitioning function
                Quicksort(a, p, q);
                Quicksort(a, q + 1, r); // I'm pretty sure q + r was a typo, it doesn't work with q + r.
            }
        }

    ... it seems to be a correct sorting algorithm. I tested it out on a lot of random inputs, including all 0-1 arrays of length 20. I have also tried using this partition function for a selection algorithm, in which it failed to produce correct results. It seems to work, and it's even very fast as part of the Quicksort algorithm, however. So my questions are: Can anyone post an example on which the algorithm DOESN'T work? If not, why does it work, since the partitioning part seems to be wrong? Is this another partitioning approach that I don't know about?


  • ASP.NET MVC 3 embraces dynamic types - CSDN.NET - CSDN Software Development Channel

    - by user559071
    About a decade ago, Microsoft bet everything on WebForms and static types. Development moved from scattered scripts toward complete encapsulation, to the point where almost every page can be viewed as a small program of its own. In the years since, the industry has moved in the opposite direction, preferring separation to encapsulation and late binding to early binding. This leads to two very interesting questions.

    The first is a problem of terminology. Consider the original Smalltalk MVC: model, view and controller were not only tightly coupled together, but usually came in pairs. Most Microsoft frameworks, including classic VB, WinForms, WebForms, WPF and Silverlight, use a code-behind file to store the controller logic. But "MVC" usually refers to frameworks in which the view and controller are loosely coupled. This is especially true for web frameworks, where the HTML form-submission mechanism allows any view to submit to any controller. Since this article is mainly about web technology, we need to use the modern definition.

    The second question is: "If you're Microsoft, how do you change course without putting too much pressure on developers?" So far, the answer has been a new release each year, until developers catch up. The first ASP.NET MVC product was released last March; ASP.NET MVC 2.0 was released in March this year; 3.0 is currently at the RC 2 stage and is expected to be released next March. On December 10, Microsoft released ASP.NET MVC 3.0 Release Candidate 2.

    RC 2 is built on top of Microsoft's commitment to jQuery: the default project template includes jQuery 1.4.4, jQuery Validation 1.7 and jQuery UI. Although people used to treat the idea of Microsoft shifting its focus away from server-side controls as a joke, the introduction of jQuery UI shows that this is the real thing.

    For developers worried about scalability, there are many excellent alternatives to session state. With the SessionState attribute, you can tell a controller that session state is read-only, read-write, or to be ignored completely. This matters little on a single-server site, but it can help greatly when one server needs to access another server's session state.

    MVC 3 contains the Razor view engine. By default, the engine HTML-encodes its output, so raw text can easily be rendered on the screen while encoded text cannot break the page rendering, removing the risk of HTML-injection attacks.

    What may be most shocking for many C# developers is that MVC 3 embraces dynamic types for the controller and view. The ViewBag property exposes a dynamic object to which developers can add properties at run time. In general, it is used to send non-model data from the controller to the view. Scott Guthrie's samples include status text (such as the current time) and the entries used to assemble a list box.

    Original link: http://www.infoq.com/cn/news/2010/12/ASPNET-MVC-3-RC-2
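    A hedged illustration of the ViewBag feature described above (the names are mine, but the API is standard ASP.NET MVC 3):

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                // dynamic properties: no view-model class required
                ViewBag.Message = "Rendered at " + DateTime.Now;
                ViewBag.Sizes = new[] { "Small", "Medium", "Large" }; // e.g. list box entries
                return View();
            }
        }

    In the Razor view, @ViewBag.Message renders the text HTML-encoded by default; raw markup has to be requested explicitly through Html.Raw().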


  • reloading a page while an ajax request is in progress gives an empty response and status zero

    - by Jayapal Chandran
    Hi, the browser is Firefox 3.0.10. I am requesting a page using ajax. The response is in progress, perhaps at a readyState less than 4. In the meantime, I try to reload the page. What happens is that the request ends, giving an empty response. I used alert to find what string has been given as the response text; I assume that by this time readyState 4 has been reached. Why is it an empty string? When I alert the xmlhttpobject.status, it displays 0. When I alert the xmlhttpobject.statusText, an exception occurs stating that it is NOT AVAILABLE. When I read the document at http://www.devx.com/webdev/Article/33024/0/page/2, it said that for readyState 3 and 4, both status and statusText are available; but when I tested, only status was available, not statusText.

    Here is a sample. Consider that I have requested a page and my callback function is as follows:

        function cb(rt)
        {
            if (rt.readyState == 4)
            {
                alert(rt.status);
                alert(rt.statusText); // which throws an exception
            }
        }

    And my server-side script is as follows:

        sleep(30); // flushing a little dropdown code

    Besides this, I noticed the following. Assume again that I am requesting the above script using ajax. There will now be an idle time until the 30 seconds are over. Before the 30 seconds are up, I press refresh. I get xmlhttpobject.status as 0, but the browser still does not reload the page until the 30 seconds are over. Why? So what is happening when I refresh a page before an ajax request is complete: the status value is set to zero and the readyState is set to 4, but the page still waits for the response from the server to end. What is happening?

    The reason I face something like this is as follows. Whenever I do an ajax request, if the process succeeded (like inserting or deleting something), I pop up a div stating that it updated successfully, and I reload the page. But if there is any error, then I do not reload the page; instead I just alert that it was unable to process the request. If the user reloads the page before any such request is complete, I get an empty response, which by my logic means a server error. So I was debugging the ajax response to detect that the connection was interrupted because the user pressed reload. In that case, I don't want to display "unable to process this request" when the user reloads the page before the request has completed.

    Oh... a long story. It is a long description so that experts can understand my doubt. Any type of answer would clear my mind; or, rather, I'd welcome all types of answers.

    EDIT, 19 Dec: If I do not get any correct answer, I will delete this question and rewrite it with examples; otherwise I will accept an answer after experimenting.
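    A sketch of the workaround implied by the observations above: when the user reloads or navigates away mid-request, Firefox reports readyState 4 with status 0 and an empty responseText. Treating status 0 as "interrupted, not a server error" avoids the misleading error message:

        function cb(rt) {
            if (rt.readyState == 4) {
                if (rt.status == 0) {
                    return; // request aborted by reload/navigation: stay quiet
                }
                if (rt.status == 200) {
                    // handle rt.responseText as usual
                } else {
                    alert('Unable to process this request.');
                }
            }
        }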


  • OutOfMemoryError during the pdf merge

    - by Vijay
    The code below merges PDF files and returns the combined PDF data. When it runs and I try to combine 100 files of approximately 500 KB each, I get an OutOfMemoryError on the line document.close(). This code runs in a web environment; is the memory available to the WebSphere server the problem? I read in an article about using the freeReader method, but I cannot see how to use it in my scenario.

        protected ByteArrayOutputStream joinPDFs(List<InputStream> pdfStreams, boolean paginate)
        {
            Document document = new Document();
            ByteArrayOutputStream mergedPdfStream = new ByteArrayOutputStream();
            try {
                //List<InputStream> pdfs = pdfStreams;
                List<PdfReader> readers = new ArrayList<PdfReader>();
                int totalPages = 0;
                Iterator<InputStream> iteratorPDFs = pdfStreams.iterator();

                // Create readers for the pdfs.
                while (iteratorPDFs.hasNext()) {
                    InputStream pdf = iteratorPDFs.next();
                    if (pdf == null) continue;
                    PdfReader pdfReader = new PdfReader(pdf);
                    readers.add(pdfReader);
                    totalPages += pdfReader.getNumberOfPages();
                }
                // clear this
                pdfStreams = null;
                //WeakReference ref = new WeakReference(pdfs);
                //ref.clear();

                // Create a writer for the output stream
                PdfWriter writer = PdfWriter.getInstance(document, mergedPdfStream);
                writer.setFullCompression();
                document.open();
                BaseFont bf = BaseFont.createFont(BaseFont.HELVETICA, BaseFont.CP1252, BaseFont.NOT_EMBEDDED);
                PdfContentByte cb = writer.getDirectContent(); // Holds the PDF data
                PdfImportedPage page;
                int currentPageNumber = 0;
                int pageOfCurrentReaderPDF = 0;
                Iterator<PdfReader> iteratorPDFReader = readers.iterator();

                // Loop through the PDF files and add to the output.
                while (iteratorPDFReader.hasNext()) {
                    PdfReader pdfReader = iteratorPDFReader.next();
                    // Create a new page in the target for each source page.
                    while (pageOfCurrentReaderPDF < pdfReader.getNumberOfPages()) {
                        pageOfCurrentReaderPDF++;
                        document.setPageSize(pdfReader.getPageSizeWithRotation(pageOfCurrentReaderPDF));
                        document.newPage();
                        currentPageNumber++;
                        page = writer.getImportedPage(pdfReader, pageOfCurrentReaderPDF);
                        cb.addTemplate(page, 0, 0);
                        // Code for pagination.
                        if (paginate) {
                            cb.beginText();
                            cb.setFontAndSize(bf, 9);
                            cb.showTextAligned(PdfContentByte.ALIGN_CENTER,
                                "" + currentPageNumber + " of " + totalPages, 520, 5, 0);
                            cb.endText();
                        }
                    }
                    pageOfCurrentReaderPDF = 0;
                    System.out.println("now the size is: " + pdfReader.getFileLength());
                }
                mergedPdfStream.flush();
                document.close();
                mergedPdfStream.close();
                return mergedPdfStream;
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (document.isOpen())
                    document.close();
                try {
                    if (mergedPdfStream != null)
                        mergedPdfStream.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }
            return mergedPdfStream;
        }

    Thanks, V
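    Regarding freeReader, a hedged sketch of how it could slot into the loop above (iText 2.x API, untested here): the idea is to flush each reader's pages into the output as soon as they have been copied, instead of keeping all 100 readers buffered until document.close():

        while (iteratorPDFReader.hasNext()) {
            PdfReader pdfReader = iteratorPDFReader.next();
            // ... copy its pages with writer.getImportedPage(...) as before ...
            writer.freeReader(pdfReader); // write this reader's content out now
            pdfReader.close();            // and release its buffers
        }

    Separately, growing everything in a ByteArrayOutputStream keeps the whole merged document on the heap; streaming to a temporary file or directly to the servlet response would cap the footprint regardless of the input count.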


  • How can I add a previous button to this Jquery Content Slider?

    - by user1269988
    I did this nice tutorial for a jQuery content slider: http://brenelz.com/blog/build-a-content-slider-with-jquery/. Here is my test page: http://www.gregquinn.com/oneworld/brenez_slider_test.html

    But the left button is hidden on the first slide, and I do not want it to be. I don't know much about jQuery, but I tried to set the left button from opacity 0 to 100 or 1, and it didn't work: the button showed up once but did not work. Does anyone know how to do this? Here is the code:

        (function($) {
            $.fn.ContentSlider = function(options) {
                var defaults = {
                    leftBtn    : 'images/panel_previous_btn.gif',
                    rightBtn   : 'images/panel_next_btn.gif',
                    width      : '900px',
                    height     : '400px',
                    speed      : 400,
                    easing     : 'easeOutQuad',
                    textResize : false,
                    IE_h2      : '26px',
                    IE_p       : '11px'
                }
                var defaultWidth = defaults.width;
                var o = $.extend(defaults, options);
                var w = parseInt(o.width);
                var n = this.children('.cs_wrapper').children('.cs_slider').children('.cs_article').length;
                var x = -1*w*n+w; // Minimum left value
                var p = parseInt(o.width)/parseInt(defaultWidth);
                var thisInstance = this.attr('id');
                var inuse = false; // Prevents colliding animations

                function moveSlider(d, b) {
                    var l = parseInt(b.siblings('.cs_wrapper').children('.cs_slider').css('left'));
                    if(isNaN(l)) {
                        var l = 0;
                    }
                    var m = (d=='left') ? l-w : l+w;
                    if(m<=0&&m>=x) {
                        b.siblings('.cs_wrapper')
                         .children('.cs_slider')
                         .animate({ 'left':m+'px' }, o.speed, o.easing, function() { inuse=false; });
                        if(b.attr('class')=='cs_leftBtn') {
                            var thisBtn = $('#'+thisInstance+' .cs_leftBtn');
                            var otherBtn = $('#'+thisInstance+' .cs_rightBtn');
                        } else {
                            var thisBtn = $('#'+thisInstance+' .cs_rightBtn');
                            var otherBtn = $('#'+thisInstance+' .cs_leftBtn');
                        }
                        if(m==0||m==x) {
                            thisBtn.animate({ 'opacity':'0' }, o.speed, o.easing, function() { thisBtn.hide(); });
                        }
                        if(otherBtn.css('opacity')=='0') {
                            otherBtn.show().animate({ 'opacity':'1' }, { duration:o.speed, easing:o.easing });
                        }
                    }
                }

                function vCenterBtns(b) {
                    // Safari and IE don't seem to like the CSS used to vertically center
                    // the buttons, so we'll force it with this function
                    var mid = parseInt(o.height)/2;
                    b.find('.cs_leftBtn img').css({ 'top':mid+'px', 'padding':0 }).end()
                     .find('.cs_rightBtn img').css({ 'top':mid+'px', 'padding':0 });
                }

                return this.each(function() {
                    $(this)
                        // Set the width and height of the div to the defined size
                        .css({ width:o.width, height:o.height })
                        // Add the buttons to move left and right
                        .prepend('<a href="#" class="cs_leftBtn"><img src="'+o.leftBtn+'" /></a>')
                        .append('<a href="#" class="cs_rightBtn"><img src="'+o.rightBtn+'" /></a>')
                        // Dig down to the article div elements
                        .find('.cs_article')
                        // Set the width and height of the div to the defined size
                        .css({ width:o.width, height:o.height })
                        .end()
                        // Animate the entrance of the buttons
                        .find('.cs_leftBtn')
                        .css('opacity','0')
                        .hide()
                        .end()
                        .find('.cs_rightBtn')
                        .hide()
                        .animate({ 'width':'show' });

                    // Resize the font to match the bounding box
                    if(o.textResize===true) {
                        var h2FontSize = $(this).find('h2').css('font-size');
                        var pFontSize = $(this).find('p').css('font-size');
                        $.each(jQuery.browser, function(i) {
                            if($.browser.msie) {
                                h2FontSize = o.IE_h2;
                                pFontSize = o.IE_p;
                            }
                        });
                        $(this).find('h2').css({ 'font-size' : parseFloat(h2FontSize)*p+'px', 'margin-left' : '66%' });
                        $(this).find('p').css({ 'font-size' : parseFloat(pFontSize)*p+'px', 'margin-left' : '66%' });
                        $(this).find('.readmore').css({ 'font-size' : parseFloat(pFontSize)*p+'px', 'margin-left' : '66%' });
                    }

                    // Store a copy of the button in a variable to pass to moveSlider()
                    var leftBtn = $(this).children('.cs_leftBtn');
                    leftBtn.bind('click', function() {
                        if(inuse===false) {
                            inuse = true;
                            moveSlider('right', leftBtn);
                        }
                        return false; // Keep the link from firing
                    });

                    // Store a copy of the button in a variable to pass to moveSlider()
                    var rightBtn = $(this).children('.cs_rightBtn');
                    rightBtn.bind('click', function() {
                        if(inuse===false) {
                            inuse=true;
                            moveSlider('left', rightBtn);
                        }
                        return false; // Keep the link from firing
                    });
                });
            }
        })(jQuery)
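    A hedged guess at where to look, based only on the code above: the plugin hides the left button in two places, once at initialization and again in moveSlider() whenever the slider reaches the left edge (m == 0). To keep it visible on the first slide, both places need changing, roughly:

        // 1. Initialize the left button like the right one, instead of
        //    .css('opacity','0').hide():
        .find('.cs_leftBtn')
        .hide()
        .animate({ 'width':'show' })
        .end()

        // 2. In moveSlider(), only hide a button at the right-hand end:
        if(m==x) { // previously: if(m==0||m==x)
            thisBtn.animate({ 'opacity':'0' }, o.speed, o.easing, function() { thisBtn.hide(); });
        }

    The always-visible left button is then clickable even when there is nowhere to scroll, but the m<=0 && m>=x guard in moveSlider() already makes such clicks harmless.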


  • Resolving a Forward Declaration Issue Involving a State Machine in C++

    - by hypersonicninja
    I've recently returned to C++ development after a hiatus, and have a question regarding implementation of the State design pattern. I'm using the vanilla pattern, exactly as per the GoF book. My problem is that the state machine itself is based on some hardware used as part of an embedded system, so the design is fixed and can't be changed. This results in a circular dependency between two of the states (in particular), and I'm trying to resolve it. Here's the simplified code (note that I tried to resolve this by using headers as usual but still had problems; I've omitted them in this snippet):

        #include <iostream>
        #include <memory>

        using namespace std;

        class Context
        {
        public:
            friend class State;
            Context() { }
        private:
            State* m_state;
        };

        class State
        {
        public:
            State() { }
            virtual void Trigger1() = 0;
            virtual void Trigger2() = 0;
        };

        class LLT : public State
        {
        public:
            LLT() { }
            void Trigger1() { new DH(); }
            void Trigger2() { new DL(); }
        };

        class ALL : public State
        {
        public:
            ALL() { }
            void Trigger1() { new LLT(); }
            void Trigger2() { new DH(); }
        };

        // DL needs to 'know' about DH.
        class DL : public State
        {
        public:
            DL() { }
            void Trigger1() { new ALL(); }
            void Trigger2() { new DH(); }
        };

        class HLT : public State
        {
        public:
            HLT() { }
            void Trigger1() { new DH(); }
            void Trigger2() { new DL(); }
        };

        class AHL : public State
        {
        public:
            AHL() { }
            void Trigger1() { new DH(); }
            void Trigger2() { new HLT(); }
        };

        // DH needs to 'know' about DL.
        class DH : public State
        {
        public:
            DH () { }
            void Trigger1() { new AHL(); }
            void Trigger2() { new DL(); }
        };

        int main()
        {
            auto_ptr<LLT> llt (new LLT);
            auto_ptr<ALL> all (new ALL);
            auto_ptr<DL> dl (new DL);
            auto_ptr<HLT> hlt (new HLT);
            auto_ptr<AHL> ahl (new AHL);
            auto_ptr<DH> dh (new DH);
            return 0;
        }

    The problem is basically that in the State pattern, state transitions are made by invoking the ChangeState method in the Context class, which invokes the constructor of the next state. Because of the circular dependency, I can't invoke the constructor, because it's not possible to pre-declare both of the constructors of the 'problem' states. I had a look at this article, and the template method seemed to be the ideal solution, but it doesn't compile, and my knowledge of templates is rather limited... The other idea I had was to introduce a helper class to the subclassed states, via multiple inheritance, to see if it's possible to specify the base class's constructor and have a reference to the state subclass's constructor. But I think that was rather ambitious... Finally, would a direct implementation of the Factory Method design pattern be the best way to resolve the entire problem?
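    For reference, the usual cure for this kind of cycle needs neither templates nor factories: forward-declare the classes, keep only declarations inside the class bodies, and define the member functions after every class is complete. A minimal sketch, reduced to the two 'problem' states (the original void signatures work the same way):

        class State
        {
        public:
            virtual ~State() { }
            virtual void Trigger1() = 0;
            virtual void Trigger2() = 0;
        };

        class DH; // forward declaration breaks the DL <-> DH cycle

        class DL : public State
        {
        public:
            void Trigger1();
            void Trigger2(); // will construct a DH, but only needs 'class DH;' here
        };

        class DH : public State
        {
        public:
            void Trigger1();
            void Trigger2();
        };

        // Out-of-line definitions: both classes are complete types by now,
        // so 'new DH' and 'new DL' are legal.
        void DL::Trigger1() { }
        void DL::Trigger2() { new DH(); }
        void DH::Trigger1() { }
        void DH::Trigger2() { new DL(); }

    The same split (declarations in headers, transition bodies in a .cpp that includes all the state headers) resolves the header-based version too.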


  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background: I'm maintaining a plugin for an application, using Visual C++ 2003. The plugin is composed of several DLLs: there's the main DLL, the one that the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this:

        plugin.dll -> utilA.dll, utilB.dll
        utilA.dll  -> utilB.dll
        utilB.dll  -> utilA.dll, utilC.dll

    You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now).

    The problem: There's a new requirement: running multiple instances of the plugin within the application. The application runs each instance of the plugin in its own thread, i.e. each thread calls into the plugin's code. That code, however, is anything but thread-safe: lots of global variables, etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process.

    Option 1: The distinct-names approach. Create 3 copies of each DLL file, so that each file has a distinct name, e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time, meaning the DLLs need to be compiled multiple times, each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and I don't like compiling the same files over and over.

    Option 2: The side-by-side assemblies approach. Create 3 copies of the DLL files, each group in its own directory, and define each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL will have an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs that reside in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest-embedding support. I've tried this approach with partial success: dependencies are found at load time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article: a DLL's activation context is only used at the DLL's load time, and afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED.

    Questions: Got any other options? Any quick & dirty solution will do. :-) Will ISOLATION_AWARE_ENABLED even work with VS2003? Comments will be greatly appreciated. Thanks!


  • Java SWT - placing image buttons on the image background

    - by foma
    I am trying to put buttons with images (GIF) on a background that has already been set as an image (shell.setBackgroundImage(image)), and I can't figure out how to remove the transparent border around the image buttons. I would be grateful if somebody could give me a tip about this issue. Here is my code:

        import org.eclipse.swt.widgets.*;
        import org.eclipse.swt.layout.*;
        import org.eclipse.swt.*;
        import org.eclipse.swt.graphics.*;

        public class Main_page {
            public static void main(String[] args) {
                Display display = new Display();
                Shell shell = new Shell(display);
                Image image = new Image(display, "bg.gif");
                shell.setBackgroundImage(image);
                shell.setBackgroundMode(SWT.INHERIT_DEFAULT);
                shell.setFullScreen(true);
                Button button = new Button(shell, SWT.PUSH);
                button.setImage(new Image(display, "button.gif"));
                RowLayout Layout = new RowLayout();
                shell.setLayout(Layout);
                shell.open();
                while (!shell.isDisposed()) {
                    if (!display.readAndDispatch())
                        display.sleep();
                }
                display.dispose();
            }
        }

    Sorceror, thanks for your answer; I will definitely look into this article. Maybe I will find my way. So far I have improved my code a little bit. Firstly, I managed to get rid of the gray background noise. Secondly, I finally succeeded in creating the button as I had seen it in the first place. Yet another obstacle has arisen: when I removed the image (button) transparent border, it turned out that the button changed its mode (from push button to check box). The problem is that I came so close to the thing I was looking for, and now I am a little puzzled. If you have some time, please glance at my code. If you launch it, you will see what the problem is (hope you didn't have problems downloading the images):

        import org.eclipse.swt.widgets.*;
        import org.eclipse.swt.layout.*;
        import org.eclipse.swt.*;
        import org.eclipse.swt.graphics.*;

        public class Main_page {
            public static void main(String[] args) {
                Display display = new Display();
                Shell shell = new Shell(display);
                Image image = new Image(display, "bg.gif"); // Launch on a screen 1280x1024
                shell.setBackgroundImage(image);
                shell.setBackgroundMode(SWT.TRANSPARENT);
                shell.setFullScreen(true);
                GridLayout gridLayout = new GridLayout();
                gridLayout.marginTop = 200;
                gridLayout.marginLeft = 20;
                shell.setLayout(gridLayout);
                // If you replace SWT.PUSH with SWT.COLOR_TITLE_INACTIVE_BACKGROUND
                // you will see what I am looking for, despite the nasty check box
                Button button = new Button(shell, SWT.PUSH);
                button.setImage(new Image(display, "moneyfast.gif"));
                shell.open();
                while (!shell.isDisposed()) {
                    if (!display.readAndDispatch())
                        display.sleep();
                }
                display.dispose();
            }
        }

  • How can I determine/use $(this) in js callback script

    - by Rabbott
    I am using Rails and jQuery, making an ajax call initiated by clicking a link. I set up my application.js file to look like the one proposed here, and it works great. The problem I'm having is: how can I use $(this) in, say, my update.js.erb file to represent the link I clicked? I don't want to have to assign an ID to every element and then recompile that id in the callback script.

    EDIT: To give a simple example of something similar to what I'm trying to do (and much easier to explain): if a user clicks on a link that deletes an element from a list, the controller would handle the callback, and the callback (which is in question here) would delete the element I clicked on. So the callback, delete.js.erb, would just say:

        $(this).fadeOut();

    This is why I want to use $(this): so that I don't have to assign an ID to every element (which would be the end of the world, just much more verbose markup).

    application.js:

        jQuery.ajaxSetup({
            'beforeSend': function(xhr) {
                xhr.setRequestHeader("Accept", "text/javascript,application/javascript,text/html")
            }
        })

        function _ajax_request(url, data, callback, type, method) {
            if (jQuery.isFunction(data)) {
                callback = data;
                data = {};
            }
            return jQuery.ajax({
                type: method,
                url: url,
                data: data,
                success: callback,
                dataType: type
            });
        }

        jQuery.extend({
            put: function(url, data, callback, type) {
                return _ajax_request(url, data, callback, type, 'PUT');
            },
            delete_: function(url, data, callback, type) {
                return _ajax_request(url, data, callback, type, 'DELETE');
            }
        });

        jQuery.fn.submitWithAjax = function() {
            this.unbind('submit', false);
            this.submit(function() {
                $.post(this.action, $(this).serialize(), null, "script");
                return false;
            })
            return this;
        };

        // Send data via get if JS enabled
        jQuery.fn.getWithAjax = function() {
            this.unbind('click', false);
            this.click(function() {
                $.get($(this).attr("href"), $(this).serialize(), null, "script");
                return false;
            })
            return this;
        };

        // Send data via post if JS enabled
        jQuery.fn.postWithAjax = function() {
            this.unbind('click', false);
            this.click(function() {
                $.post($(this).attr("href"), $(this).serialize(), null, "script");
                return false;
            })
            return this;
        };

        jQuery.fn.putWithAjax = function() {
            this.unbind('click', false);
            this.click(function() {
                $.put($(this).attr("href"), $(this).serialize(), null, "script");
                return false;
            })
            return this;
        };

        jQuery.fn.deleteWithAjax = function() {
            this.removeAttr('onclick');
            this.unbind('click', false);
            this.click(function() {
                $.delete_($(this).attr("href"), $(this).serialize(), null, "script");
                return false;
            })
            return this;
        };

        // This will "ajaxify" the links
        function ajaxLinks(){
            $('.ajaxForm').submitWithAjax();
            $('a.get').getWithAjax();
            $('a.post').postWithAjax();
            $('a.put').putWithAjax();
            $('a.delete').deleteWithAjax();
        }

    show.html.erb:

        <%= link_to 'Link Title', article_path(a, :sentiment => Article::Sentiment['Neutral']), :class => 'put' %>

    The combination of the two will call update.js.erb in Rails; the code in that file is used as the callback of the ajax call ($.put in this case). update.js.erb:

        // user feedback
        $("#notice").html('<%= flash[:notice] %>');
        // update the background color
        $(this OR e.target).attr("color", "red");
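    One pragmatic sketch, hedged: the response script is evaluated globally, so `this` inside a *.js.erb will not be the link. Stashing the clicked element in a variable before firing the request lets the response script reach it without per-element IDs:

        var lastClicked = null; // most recent ajaxified link clicked

        jQuery.fn.deleteWithAjax = function() {
            this.removeAttr('onclick');
            this.unbind('click', false);
            this.click(function() {
                lastClicked = this; // remember the link for the response script
                $.delete_($(this).attr("href"), $(this).serialize(), null, "script");
                return false;
            })
            return this;
        };

        // delete.js.erb can then do:
        //   $("#notice").html('<%= flash[:notice] %>');
        //   $(lastClicked).fadeOut();

    The same one-line change applies to the get/post/put variants. A per-instance data attribute would be safer than a single variable if requests can overlap.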


  • Can't get Jacobi algorithm to work in Objective-C

    - by Chris Long
    Hi, for some reason I can't get this program to work. I've had other CS majors look at it and they can't figure it out either. This program performs the Jacobi algorithm (you can see step-by-step instructions and a MATLAB implementation here). BTW, it's different from the Wikipedia article of the same name. Since NSArray is one-dimensional, I added a method that makes it act like a two-dimensional C array. After running the Jacobi algorithm many times, the diagonal entries in the NSArray (i[0][0], i[1][1], etc.) are supposed to get bigger and the others approach 0. For some reason, though, they all increase exponentially. For instance, i[2][4] should equal 0.0000009, not 9999999, while i[2][2] should be big. Thanks in advance, Chris

    NSArray+Matrix.m:

        @implementation NSArray (Matrix)

        @dynamic offValue, transposed;

        - (double)offValue {
            double sum = 0.0;
            for ( MatrixItem *item in self )
                if ( item.nonDiagonal )
                    sum += pow( item.value, 2.0 );
            return sum;
        }

        - (NSMutableArray *)transposed {
            NSMutableArray *transpose = [[[NSMutableArray alloc] init] autorelease];
            int i, j;
            for ( i = 0; i < 5; i++ ) {
                for ( j = 0; j < 5; j++ ) {
                    [transpose addObject:[self objectAtRow:j andColumn:i]];
                }
            }
            return transpose;
        }

        - (id)objectAtRow:(NSUInteger)row andColumn:(NSUInteger)column {
            NSUInteger index = 5 * row + column;
            return [self objectAtIndex:index];
        }

        - (NSMutableArray *)multiplyWithMatrix:(NSArray *)array {
            NSMutableArray *result = [[NSMutableArray alloc] init];
            int i = 0, j = 0, k = 0;
            double value;
            for ( i = 0; i < 5; i++ ) {
                value = 0.0;
                for ( j = 0; j < 5; j++ ) {
                    for ( k = 0; k < 5; k++ ) {
                        MatrixItem *firstItem = [self objectAtRow:i andColumn:k];
                        MatrixItem *secondItem = [array objectAtRow:k andColumn:j];
                        value += firstItem.value * secondItem.value;
                    }
                    MatrixItem *item = [[MatrixItem alloc] initWithValue:value];
                    item.row = i;
                    item.column = j;
                    [result addObject:item];
                }
            }
            return result;
        }

        @end

    Jacobi_AlgorithmAppDelegate.m:

        // ...
        - (void)jacobiAlgorithmWithEntry:(MatrixItem *)entry {
            MatrixItem *b11 = [matrix objectAtRow:entry.row andColumn:entry.row];
            MatrixItem *b22 = [matrix objectAtRow:entry.column andColumn:entry.column];
            double muPlus = ( b22.value + b11.value ) / 2.0;
            muPlus += sqrt( pow((b22.value - b11.value), 2.0) + 4.0 * pow(entry.value, 2.0) );
            Vector *u1 = [[[Vector alloc] initWithX:(-1.0 * entry.value) andY:(b11.value - muPlus)] autorelease];
            [u1 normalize];
            Vector *u2 = [[[Vector alloc] initWithX:-u1.y andY:u1.x] autorelease];
            NSMutableArray *g = [[[NSMutableArray alloc] init] autorelease];
            for ( int i = 0; i <= 24; i++ ) {
                MatrixItem *item = [[[MatrixItem alloc] init] autorelease];
                if ( i == 6*entry.row )
                    item.value = u1.x;
                else if ( i == 6*entry.column )
                    item.value = u2.y;
                else if ( i == ( 5*entry.row + entry.column ) || i == ( 5*entry.column + entry.row ) )
                    item.value = u1.y;
                else if ( i % 6 == 0 )
                    item.value = 1.0;
                else
                    item.value = 0.0;
                [g addObject:item];
            }
            NSMutableArray *firstResult = [[g.transposed multiplyWithMatrix:matrix] autorelease];
            matrix = [firstResult multiplyWithMatrix:g];
        }
        // ...


  • How can I return an object into PHP userspace from my extension?

    - by John Factorial
    I have a C++ object, Graph, which contains a property named cat of type Category. I'm exposing the Graph object to PHP in an extension I'm writing in C++. As long as the Graph's methods return primitives like boolean or long, I can use the Zend RETURN_*() macros (e.g. RETURN_TRUE(); or RETURN_LONG(123);). But how can I make Graph->getCategory() return a Category object for the PHP code to manipulate? I'm following the tutorial over at http://devzone.zend.com/article/4486, and here's the Graph code I have so far:

        #include "php_getgraph.h"

        zend_object_handlers graph_object_handlers;

        struct graph_object {
            zend_object std;
            Graph *graph;
        };

        zend_class_entry *graph_ce;

        #define PHP_CLASSNAME "WFGraph"

        ZEND_BEGIN_ARG_INFO_EX(php_graph_one_arg, 0, 0, 1)
        ZEND_END_ARG_INFO()

        ZEND_BEGIN_ARG_INFO_EX(php_graph_two_args, 0, 0, 2)
        ZEND_END_ARG_INFO()

        void graph_free_storage(void *object TSRMLS_DC)
        {
            graph_object *obj = (graph_object*)object;
            delete obj->graph;
            zend_hash_destroy(obj->std.properties);
            FREE_HASHTABLE(obj->std.properties);
            efree(obj);
        }

        zend_object_value graph_create_handler(zend_class_entry *type TSRMLS_DC)
        {
            zval *tmp;
            zend_object_value retval;
            graph_object *obj = (graph_object*)emalloc(sizeof(graph_object));
            memset(obj, 0, sizeof(graph_object));
            obj->std.ce = type;
            ALLOC_HASHTABLE(obj->std.properties);
            zend_hash_init(obj->std.properties, 0, NULL, ZVAL_PTR_DTOR, 0);
            zend_hash_copy(obj->std.properties, &type->default_properties,
                (copy_ctor_func_t)zval_add_ref, (void*)&tmp, sizeof(zval*));
            retval.handle = zend_objects_store_put(obj, NULL, graph_free_storage, NULL TSRMLS_CC);
            retval.handlers = &graph_object_handlers;
            return retval;
        }

        PHP_METHOD(Graph, __construct)
        {
            char *perspectives;
            int perspectives_len;
            Graph *graph = NULL;
            zval *object = getThis();
            if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "s", &perspectives, &perspectives_len) == FAILURE) {
                RETURN_NULL();
            }
            graph = new Graph(perspectives);
            graph_object *obj = (graph_object*)zend_object_store_get_object(object TSRMLS_CC);
            obj->graph = graph;
        }

        PHP_METHOD(Graph, hasCategory)
        {
            long perspectiveId;
            Graph *graph;
            graph_object *obj = (graph_object*)zend_object_store_get_object(getThis() TSRMLS_CC);
            graph = obj->graph;
            if (graph == NULL) {
                RETURN_NULL();
            }
            if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "l", &perspectiveId) == FAILURE) {
                RETURN_NULL();
            }
            RETURN_BOOL(graph->hasCategory(perspectiveId));
        }

        PHP_METHOD(Graph, getCategory)
        {
            // what to do here?
            RETURN_TRUE;
        }

        function_entry php_getgraph_functions[] = {
            PHP_ME(Graph, __construct, NULL, ZEND_ACC_PUBLIC|ZEND_ACC_CTOR)
            PHP_ME(Graph, hasCategory, php_graph_one_arg, ZEND_ACC_PUBLIC)
            PHP_ME(Graph, getCategory, php_graph_one_arg, ZEND_ACC_PUBLIC)
            { NULL, NULL, NULL }
        };

        PHP_MINIT_FUNCTION(getgraph)
        {
            zend_class_entry ce;
            INIT_CLASS_ENTRY(ce, PHP_CLASSNAME, php_getgraph_functions);
            graph_ce = zend_register_internal_class(&ce TSRMLS_CC);
            graph_ce->create_object = graph_create_handler;
            memcpy(&graph_object_handlers, zend_get_std_object_handlers(), sizeof(zend_object_handlers));
            graph_object_handlers.clone_obj = NULL;
            return SUCCESS;
        }

        zend_module_entry getgraph_module_entry = {
        #if ZEND_MODULE_API_NO >= 20010901
            STANDARD_MODULE_HEADER,
        #endif
            PHP_GETGRAPH_EXTNAME,
            NULL,                 /* Functions */
            PHP_MINIT(getgraph),
            NULL,                 /* MSHUTDOWN */
            NULL,                 /* RINIT */
            NULL,                 /* RSHUTDOWN */
            NULL,                 /* MINFO */
        #if ZEND_MODULE_API_NO >= 20010901
            PHP_GETGRAPH_EXTVER,
        #endif
            STANDARD_MODULE_PROPERTIES
        };

        #ifdef COMPILE_DL_GETGRAPH
        extern "C" {
            ZEND_GET_MODULE(getgraph)
        }
        #endif
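    A hedged sketch of one way to fill in getCategory (PHP 5.2/5.3-era Zend API; it assumes you register a second class for Category, with its own category_ce, category_object struct and create handler mirroring the Graph ones above):

        PHP_METHOD(Graph, getCategory)
        {
            long perspectiveId;
            if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "l", &perspectiveId) == FAILURE) {
                RETURN_NULL();
            }

            graph_object *obj = (graph_object*)zend_object_store_get_object(getThis() TSRMLS_CC);

            /* Instantiate the wrapper class directly into return_value... */
            object_init_ex(return_value, category_ce);

            /* ...then attach the C++ object to the freshly created wrapper. */
            category_object *ret = (category_object*)zend_object_store_get_object(return_value TSRMLS_CC);
            ret->category = new Category(obj->graph->getCategory(perspectiveId)); /* hypothetical copy ctor */
        }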


  • I never really understood: what is Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what is an ABI. I'm sorry for such a lengthy question. I just want to clearly understand things. Please don't point me to wiki article, If could understand it, I wouldn't be here posting such a lengthy post. This is my mindset about different interfaces: TV remote is an interface between user and TV. It is an existing entity but useless (doesn't provide any functionality) by itself. All the functionality for each of those buttons on the remote is implemented in the Television set. Interface: It is a "existing entity" layer between the functionality and consumer of that functionality. An, interface by itself is doesn't do anything. It just invokes the functionality lying behind. Now depending on who the user is there are different type of interfaces. Command Line Interface(CLI) commands are the existing entities, consumer is the user and functionality lies behind. functionality: my software functionality which solves some purpose to which we are describing this interface. existing entities: commands consumer: user Graphical User Interface(GUI) window,buttons etc.. are the existing entities, again consumer is the user and functionality lies behind. functionality: my software functionality which solves some purpose to which we are describing this interface. existing entities: window,buttons etc.. consumer: user Application Programming Interface(API) functions or to be more correct, interfaces (in interfaced based programming) are the existing entities, consumer here is another program not a user. and again functionality lies behind this layer. functionality: my software functionality which solves some purpose to which we are describing this interface. existing entities: functions, Interfaces(array of functions). consumer: another program/application. Application Binary Interface (ABI) Here is my problem starts. functionality: ??? existing entities: ??? consumer: ??? I've wrote few softwares in different languages and provided different kind of interfaces (CLI, GUI, API) but I'm not sure, if I ever, provided any ABI. http://en.wikipedia.org/wiki/Application_binary_interface says: ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; Other ABIs standardize details such as the C++ name mangling,[2] . exception propagation,[3] and calling convention between compilers on the same platform, but do not require cross-platform compatibility. Who needs these details? Please don't say, OS. I know assembly programming. I know how linking & loading works. I know what exactly happens inside. Where did C++ name mangling come in between? I thought we are talking at the binary level. Where did languages come in between? anyway, I've downloaded the [PDF] System V Application Binary Interface Edition 4.1 (1997-03-18) to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain 2 chapters (4th & 5th) which describe the ELF file format.Infact, these are the only 2 significant chapters that specification. Rest of all the chapters "Processor Specific". Anyway, I thought that it is completely different topic. Please don't say that ELF file format specs are the ABI. It doesn't qualify to be Interface according to the definition. I know, since we are talking at such low level it must be very specific. 
But I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find MS Windows' ABI? So, these are the major queries that are bugging me.
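One way to make the ABI's consumer concrete: it is the compiler, linker, and loader on each side of a binary boundary. The sketch below is my own illustration, not part of the question; it assumes 64-bit Linux with .NET available, and it shows the kinds of details an ABI fixes (calling convention, struct layout/alignment, symbol names) surfacing in interop code.

    // A hedged illustration, not part of the original question: C# interop is
    // one place where ABI details become visible to an application programmer.
    // Assumes 64-bit Linux with .NET and libc.
    using System;
    using System.Runtime.InteropServices;

    class AbiDemo
    {
        // The ABI dictates how 'seconds' is passed (register vs. stack) and how
        // the return value comes back; Cdecl is a calling-convention choice.
        [DllImport("libc", EntryPoint = "sleep",
            CallingConvention = CallingConvention.Cdecl)]
        static extern uint Sleep(uint seconds);

        // The ABI also fixes field order, sizes, and alignment; Sequential
        // layout asks the runtime to match the native struct byte-for-byte.
        // The field sizes here assume 64-bit Linux, itself an ABI detail.
        [StructLayout(LayoutKind.Sequential)]
        struct Timeval
        {
            public long tv_sec;   // seconds (time_t)
            public long tv_usec;  // microseconds (suseconds_t)
        }

        static void Main()
        {
            Sleep(1); // this call crosses the managed/native ABI boundary
            Console.WriteLine("Called libc's sleep via the platform ABI.");
        }
    }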


  • Haskell Monad bind currying

    - by Chime
    I am currently in need of a bit of brain training and I found this article on Haskell and Monads. I'm having trouble with exercise 7 on the randomised function bind. To make the problem even simpler to experiment with, I replaced the StdGen type with an unspecified type. So instead of... bind :: (a -> StdGen -> (b,StdGen)) -> (StdGen -> (a,StdGen)) -> (StdGen -> (b,StdGen)) I used... bind :: (a -> c -> (b,c)) -> (c -> (a,c)) -> (c -> (b,c)) and for the actual function implementation (just straight from the exercise) bind f x seed = let (x',seed') = x seed in f x' seed' and also 2 randomised functions to trial with: rndf1 :: (Num a, Num b) => a -> b -> (a,b) rndf1 a s = (a+1,s+1) rndf2 :: (Num a, Num b) => a -> b -> (a,b) rndf2 a s = (a+8,s+2) So with this in a Haskell compiler (ghci), I get... :t bind rndf2 bind rndf2 :: (Num a, Num c) => (c -> (a, c)) -> c -> (a, c) This matches bind curried with rndf2 as the first parameter. But the thing I don't understand is how... :t bind rndf2 . rndf1 suddenly gives bind rndf2 . rndf1 :: (Num a, Num c) => a -> c -> (a, c) This is the correct type of the composition that we are trying to produce, because bind rndf2 . rndf1 is a function that: takes the same parameter type(s) as rndf1, AND takes the return from rndf1 and pipes it as an input of rndf2, to return the same type as rndf2. rndf1 can take 2 parameters a -> c, and rndf2 returns (a, c), so it matches that a composition of these functions should have the type: bind rndf2 . rndf1 :: (Num a, Num c) => a -> c -> (a, c) This does not match the naive type that I initially came up with for bind: bind f :: (a -> b -> (c, d)) -> (c, d) -> (e, f) Here bind mythically takes a function that takes two parameters and produces a function that takes a tuple, in order that the output from rndf1 can be fed into rndf2. So what I'd like to understand is: why the bind function needs to be coded as it is, and why the bind function does not have the naive type.
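The type puzzle above is ordinary currying at work: bind rndf2 . rndf1 means \a -> bind rndf2 (rndf1 a), so rndf1 is applied to its first argument only, and the partially applied result (a seed-consuming function) is what bind receives. As a cross-check in C#, the language most of this page's samples use, here is my own analogue (an assumption-laden sketch, not from the question), modelling a seeded computation as Func<S, (A, S)>:

    // A C# analogue of the question's bind; my own sketch, not from the post.
    // A "seeded" computation is Func<S, (A, S)>: it consumes a seed and
    // returns a value plus the next seed.
    using System;

    static class SeededDemo
    {
        // bind f x = \seed -> let (x', seed') = x seed in f x' seed'
        static Func<S, (B, S)> Bind<A, B, S>(
            Func<A, Func<S, (B, S)>> f,
            Func<S, (A, S)> x)
        {
            return seed =>
            {
                var (value, seed2) = x(seed);
                return f(value)(seed2);
            };
        }

        static void Main()
        {
            // rndf1 a s = (a+1, s+1) and rndf2 a s = (a+8, s+2), written
            // curried so partial application mirrors Haskell.
            Func<int, Func<int, (int, int)>> rndf1 = a => s => (a + 1, s + 1);
            Func<int, Func<int, (int, int)>> rndf2 = a => s => (a + 8, s + 2);

            // bind rndf2 . rndf1 == \a -> bind rndf2 (rndf1 a):
            // rndf1 eats the ordinary argument, Bind threads the seed.
            Func<int, Func<int, (int, int)>> composed = a => Bind(rndf2, rndf1(a));

            Console.WriteLine(composed(1)(100)); // prints (10, 103)
        }
    }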


  • Generating strongly biased random numbers for tests

    - by nobody
    I want to run tests with randomized inputs and need to generate 'sensible' random numbers, that is, numbers that match well enough to pass the tested function's preconditions, but hopefully wreak havoc deeper inside its code. math.random() (I'm using Lua) produces uniformly distributed random numbers. Scaling these up will give far more big numbers than small numbers, and there will be very few integers. I would like to skew the random numbers (or generate new ones using the old function as a randomness source) in a way that strongly favors 'simple' numbers, but will still cover the whole range, i.e. extending up to positive/negative infinity (or ±1e309 for double). This means: numbers up to, say, ten should be most common; integers should be more common than fractions; numbers ending in 0.5 should be the most common fractions, followed by 0.25 and 0.75; then 0.125, and so on. A different description: fix a base probability x such that the probabilities will sum to one, and define the probability of a number n as x^k, where k is the generation in which n is constructed as a surreal number [1]. That assigns x to 0, x^2 to -1 and +1, x^3 to -2, -1/2, +1/2 and +2, and so on. This gives a nice description of something close to what I want (it skews a bit too much), but is near-unusable for computing random numbers. The resulting distribution is nowhere continuous (it's fractal!), I'm not sure how to determine the base probability x (I think for infinite precision it would be zero), and computing numbers based on this by iteration is awfully slow (spending near-infinite time to construct large numbers). Does anyone know of a simple approximation that, given a uniformly distributed randomness source, produces random numbers very roughly distributed as described above? I would like to run thousands of randomized tests; quantity/speed is more important than quality. Still, better numbers mean fewer inputs get rejected. Lua has a JIT, so performance can't be reasonably predicted. Jumps based on randomness will break every prediction, and many calls to math.random() will be slow, too. This means a closed formula will be better than an iterative or recursive one. [1] Wikipedia has an article on surreal numbers, with a nice picture. A surreal number is a pair of two surreal numbers, i.e. x := {n|m}, and its value is the number in the middle of the pair, i.e. (for finite numbers) {n|m} = (n+m)/2 (as a rational). If one side of the pair is empty, that's interpreted as an increment (or decrement, if the right is empty) by one. If both sides are empty, that's zero. Initially, there are no numbers, so the only number one can build is 0 := { | }. In generation two one can build the numbers {0| } =: 1 and { |0} =: -1; in three we get {1| } =: 2, {|1} =: -2, {0|1} =: 1/2 and {-1|0} =: -1/2 (plus some more complex representations of known numbers, e.g. {-1|1} = 0). Note that e.g. 1/3 is never generated by finite numbers because it is an infinite fraction; the same goes for floats, where 1/3 is never represented exactly.
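One rough closed-form approach, my own sketch rather than anything from the question (and in C# to match the rest of this page, not the asker's Lua): draw a heavy-tailed magnitude so small values dominate while the full double range, including infinity, stays reachable, then snap to a dyadic grid whose fineness is geometrically distributed, so integers beat halves, halves beat quarters, and so on. The constants are arbitrary tuning choices.

    // Sketch of a "simple numbers first" generator; every draw is a closed
    // formula, per the question's preference. Not from the original post.
    using System;

    class BiasedRandom
    {
        static readonly Random Rng = new Random();

        static double Next()
        {
            // Doubly exponential magnitude: mostly small, occasionally huge
            // (2^2^10 overflows double, so infinities are reachable on purpose;
            // the -2 makes zero reachable at the low end).
            double magnitude = Math.Pow(2, Math.Pow(2, Rng.NextDouble() * 10)) - 2;

            // Geometric grid fineness: integers half the time, halves a quarter
            // of the time, quarters an eighth, and so on (capped at 52 bits).
            int gridBits = (int)(-Math.Log(1 - Rng.NextDouble(), 2));
            double step = Math.Pow(2, -Math.Min(gridBits, 52));
            double snapped = Math.Round(magnitude / step) * step;

            // Random sign.
            return Rng.Next(2) == 0 ? snapped : -snapped;
        }

        static void Main()
        {
            for (int i = 0; i < 10; i++) Console.WriteLine(Next());
        }
    }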


  • How to Upgrade Your Netbook to Windows 7 Home Premium

    - by Matthew Guay
    Would you like more features and flash in Windows on your netbook? Here's how you can upgrade your netbook to Windows 7 Home Premium the easy way. Most new netbooks today ship with Windows 7 Starter, which is the cheapest edition of Windows 7. It is fine for many computing tasks, and will run all your favorite programs great, but it lacks many customization, multimedia, and business features found in higher editions. Here we'll show you how you can quickly upgrade your netbook to a more full-featured edition of Windows 7 using Windows Anytime Upgrade. Also, if you want to upgrade your laptop or desktop to another edition of Windows 7, say Professional, you can follow these same steps to upgrade it, too. Please note: This is only for computers already running Windows 7. If your netbook is running XP or Vista, you will have to run a traditional upgrade to install Windows 7. Upgrade Advisor First, let's make sure your netbook can support the extra features, such as Aero Glass, in Windows 7 Home Premium. Most modern netbooks that ship with Windows 7 Starter can run the advanced features in Windows 7 Home Premium, but let's check just in case. Download the Windows 7 Upgrade Advisor (link below), and install it as normal. Once it's installed, run it and click Start Check. Make sure you're connected to the internet before you run the check; otherwise you may see this error message. If you see it, click Ok, then connect to the internet and start the check again. It will now scan all of your programs and hardware to make sure they're compatible with Windows 7. Since you're already running Windows 7 Starter, it will also tell you if your computer will support the features in other editions of Windows 7. After a few moments, the Upgrade Advisor will show you what it found. Here we see that our netbook, a Samsung N150, can be upgraded to Windows 7 Home Premium, Professional, or Ultimate. We also see that we had one issue, but this was because a driver we had installed was not recognized. Click "See all system requirements" to see what your netbook can do with the new edition. This shows you which of the requirements, including support for Windows Aero, your netbook meets. Here our netbook supports Aero, so we're ready to upgrade. For more, check out our article on how to make sure your computer can run Windows 7 with Upgrade Advisor. Upgrade with Anytime Upgrade Now we're ready to upgrade our netbook to Windows 7 Home Premium. Enter "Anytime Upgrade" in the Start menu search, and select Windows Anytime Upgrade. Windows Anytime Upgrade lets you upgrade using a product key you already have or one you purchase during the upgrade process. And it installs without any downloads or Windows disks, so it works great even for netbooks without DVD drives. Anytime Upgrades are cheaper than a standard upgrade, and for a limited time, select retailers in the US are offering Anytime Upgrades to Windows 7 Home Premium for only $49.99 if purchased with a new netbook. If you already have a netbook running Windows 7 Starter, you can either purchase an Anytime Upgrade package at a retail store or purchase a key online during the upgrade process for $79.95. Or, if you have a standard Windows 7 product key (full or upgrade), you can use it in Anytime Upgrade. This is especially nice if you can purchase Windows 7 cheaper through your school, university, or office.
Purchase an upgrade online To purchase an upgrade online, click "Go online to choose the edition of Windows 7 that's best for you". Here you can see a comparison of the features of each edition of Windows 7. Note that you can upgrade to either Home Premium, Professional, or Ultimate. We chose Home Premium because it has most of the features that home users want, including Media Center and Aero Glass effects. Also note that the price of each upgrade is cheaper than the respective upgrade from Windows XP or Vista. Click Buy under the edition you want. Enter your billing information, then your payment information. Once you confirm your purchase, you will be taken directly to the Upgrade screen. Make sure to save your receipt, as you will need the product key if you ever need to reinstall Windows on your computer. Upgrade with an existing product key If you purchased an Anytime Upgrade kit from a retailer, or already have a Full or Upgrade key for another edition of Windows 7, choose "Enter an upgrade key". Enter your product key, and click Next. If you purchased an Anytime Upgrade kit, the product key will be located on the inside of the case on a yellow sticker. The key will be verified as a valid key, and Anytime Upgrade will automatically choose the correct edition of Windows 7 based on your product key. Click Next when this is finished. Continuing the upgrade process Whether you entered a key or purchased a key online, the process is the same from here on. Click "I accept" to accept the license agreement. Now you're ready to install your upgrade. Make sure to save all open files and close any programs, and then click Upgrade. The upgrade only takes about 10 minutes in our experience, but your mileage may vary. Any available Microsoft updates, including ones for Office, Security Essentials, and other products, will be installed before the upgrade takes place. After a couple minutes, your computer will automatically reboot and finish the installation. It will then reboot once more, and your computer will be ready to use! Welcome to your new edition of Windows 7! Here's a before and after shot of our desktop. When you do an Anytime Upgrade, all of your programs, files, and settings will be just as they were before you upgraded. The only change we noticed was that our pinned taskbar icons were slightly rearranged to the default order of Internet Explorer, Explorer, and Media Player. Here's a shot of our desktop before the upgrade. Notice that all of our pinned programs and desktop icons are still there, as well as our taskbar customization (we are using small icons on the taskbar instead of the default large icons). Before, with the Windows 7 Starter background and the Aero Basic theme: And after, with Aero Glass and the more colorful default Windows 7 background. All of the features of Windows 7 Home Premium are now ready to use. The Aero theme was activated by default, but you can now customize your netbook theme, background, and more with the Personalization pane. To open it, right-click on your desktop and select Personalize. You can also now use Windows Media Center, and can play back DVD movies using an external drive. One of our favorite tools, the Snipping Tool, is also now available for easy screenshots and clips. Activating your new edition of Windows 7 You will still need to activate your new edition of Windows 7. To do this right away, open the Start menu, right-click on Computer, and select Properties.
Scroll to the bottom, and click "Activate Windows Now". Make sure you're connected to the internet, and then select "Activate Windows online now". Activation may take a few minutes, depending on your internet connection speed. When it is done, the Activation wizard will let you know that Windows is activated and genuine. Your upgrade is all finished! Conclusion Windows Anytime Upgrade makes it easy, and somewhat cheaper, to upgrade to another edition of Windows 7. It's useful for desktop and laptop owners who want to upgrade to Professional or Ultimate, but many more netbook owners will want to upgrade from Starter to Home Premium or another edition. Links Download the Windows 7 Upgrade Advisor Windows Team Blog: Anytime Upgrade Special with new PC purchase
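(A small aside of mine, not from the article: if you ever want to confirm the new edition programmatically rather than through the System window, the registry values below exist on Windows 7 and report the installed edition. Treat the snippet as a sketch.)

    // Hypothetical verification sketch (not part of the original article):
    // read the installed edition from the registry after the upgrade completes.
    using System;
    using Microsoft.Win32;

    class EditionCheck
    {
        static void Main()
        {
            // This key holds edition info on Windows 7 (e.g. "HomePremium").
            using (var key = Registry.LocalMachine.OpenSubKey(
                @"SOFTWARE\Microsoft\Windows NT\CurrentVersion"))
            {
                if (key != null)
                {
                    Console.WriteLine("EditionID:   {0}", key.GetValue("EditionID"));
                    Console.WriteLine("ProductName: {0}", key.GetValue("ProductName"));
                }
            }
        }
    }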


  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article in itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, Imran's comment is written keeping in mind only the process of how the Shrink Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transaction Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same .LDF extension. The questions now are, "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?" Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with a .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "Data about Data". An example of Meta Data is the system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by mentioning that you want to create this object under the Primary File Group.
Any additional data file that you add to the database will have only transactional data but no Meta Data, so that's why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files that are under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid taking a backup of read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-time scenario where we use files could be this one: let's say you have created a database called MYDB on the D-Drive, which has 50 GB of space. You also have 1 Database File (.MDF) and 1 Log File on the D-Drive; suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files, unless specified otherwise. Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing an empty space from database files and releasing the empty space either to the Operating System or to SQL Server. Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has free space of about 20 GB, which means 30 GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, then, MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on Google once. Doing so has many advantages: 1. If someone else has already had this question before, the chances that it is already answered are more than 50%. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager Console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both will work the same way. One advantage of using this command from Query Analyzer is that your console won't be frozen; you can perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: The .NDF file is a secondary data file. You never heard of it because when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store data that is saved by the users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
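As a side note on point (1): the UsedPages/EstimatedPages values come back as an ordinary result set, so they can also be read programmatically. Below is a small sketch of my own (not from Imran's comment; the connection string is a placeholder) using ADO.NET:

    // Runs the DBCC command from the question and prints the two columns being
    // asked about. Assumes a reachable SQL Server with a Northwind database.
    using System;
    using System.Data.SqlClient;

    class ShrinkDemo
    {
        static void Main()
        {
            // Placeholder connection string; point it at your own server.
            var connStr = "Server=.;Database=Northwind;Integrated Security=true";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("DBCC SHRINKDATABASE (Northwind, 10)", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("UsedPages={0}, EstimatedPages={1}",
                            reader["UsedPages"], reader["EstimatedPages"]);
                    }
                }
            }
        }
    }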


  • ASP.NET MVC Paging/Sorting/Filtering a list using ModelMetadata

    - by rajbk
    This post looks at how to control paging, sorting and filtering when displaying a list of data by specifying attributes in your Model, using the ASP.NET MVC framework and the excellent MVCContrib library. It also shows how to hide/show columns and control the formatting of data using attributes. This uses the Northwind database. A sample project is attached at the end of this post. Let's start by looking at a class called ProductViewModel. The properties in the class are decorated with attributes. The OrderBy attribute tells the system that the Model can be sorted using that property. The SearchFilter attribute tells the system that filtering is allowed on that property. The filtering type is set by the FilterType enum, which currently supports Equals and Contains. The ScaffoldColumn attribute specifies if a column is hidden or not. The DisplayFormat specifies how the data is formatted. public class ProductViewModel { [OrderBy(IsDefault = true)] [ScaffoldColumn(false)] public int? ProductID { get; set; } [SearchFilter(FilterType.Contains)] [OrderBy] [DisplayName("Product Name")] public string ProductName { get; set; } [OrderBy] [DisplayName("Unit Price")] [DisplayFormat(DataFormatString = "{0:c}")] public System.Nullable<decimal> UnitPrice { get; set; } [DisplayName("Category Name")] public string CategoryName { get; set; } [SearchFilter] [ScaffoldColumn(false)] public int? CategoryID { get; set; } [SearchFilter] [ScaffoldColumn(false)] public int? SupplierID { get; set; } [OrderBy] public bool Discontinued { get; set; } } Before we explore the code further, let's look at the UI. The UI has a section for filtering the data. The column headers with links are sortable. Paging is also supported with the help of a pager row. The pager is rendered using the MVCContrib Pager component. The data is displayed using a customized version of the MVCContrib Grid component. The customization was done in order for the Grid to be aware of the attributes mentioned above. Now, let's look at what happens when we perform actions on this page. The diagram below shows the process: The form on the page has its method set to "GET", therefore we see all the parameters in the query string. The query string is shown in blue above. This query gets routed to an action called Index with parameters of type ProductViewModel and PageSortOptions. The parameters in the query string get mapped to the input parameters using model binding. The ProductViewModel object created has the information needed to filter data, while the PageSortOptions object is used for paging and sorting the data. The last block in the figure above shows how the filtered and paged list is created. We receive a product list from our product repository (which is of type IQueryable) and first filter it by calling the AsFiltered extension method, passing in the productFilters object, and then call the AsPagination extension method, passing in the pageSort object. The AsFiltered extension method looks at the type of the filter instance passed in. It skips properties in the instance that do not have the SearchFilter attribute. For properties that have the SearchFilter attribute, it adds filter expression trees to filter against the IQueryable data. The AsPagination extension method looks at the type of the IQueryable and ensures that the column being sorted on has the OrderBy attribute. If it does not find one, it looks for the default sort field [OrderBy(IsDefault = true)].
It is required that at least one attribute in your Model has [OrderBy(IsDefault = true)]. This is because a person could be performing paging without specifying an order-by column. As you may recall, the LINQ Skip method now requires that you call an OrderBy method before it. Therefore we need a default order-by column to perform paging. The extension method adds an order expression tree to the IQueryable and calls the MVCContrib AsPagination extension method to page the data. Implementation Notes Auto Postback The search filter region automatically performs a GET request anytime the dropdown selection is changed. This is implemented using the following jQuery snippet $(document).ready(function () { $("#productSearch").change(function () { this.submit(); }); }); Strongly Typed View The code used in the Action method is shown below: public ActionResult Index(ProductViewModel productFilters, PageSortOptions pageSortOptions) { var productPagedList = productRepository.GetProductsProjected().AsFiltered(productFilters).AsPagination(pageSortOptions); var productViewFilterContainer = new ProductViewFilterContainer(); productViewFilterContainer.Fill(productFilters.CategoryID, productFilters.SupplierID, productFilters.ProductName); var gridSortOptions = new GridSortOptions { Column = pageSortOptions.Column, Direction = pageSortOptions.Direction }; var productListContainer = new ProductListContainerModel { ProductPagedList = productPagedList, ProductViewFilterContainer = productViewFilterContainer, GridSortOptions = gridSortOptions }; return View(productListContainer); } As you see above, the object that is returned to the view is of type ProductListContainerModel. This contains all the information needed for the view to render the Search filter section (including dropdowns), the Html.Pager (MVCContrib) and the Html.Grid (from MVCContrib). It also stores the state of the search filters so that they can recreate themselves when the page reloads (Viewstate, I miss you! :0) The class diagram for the container class is shown below. Custom MVCContrib Grid The MVCContrib Grid's default behavior was overridden so that it would auto-generate the columns and format the columns based on the metadata, and also make it aware of our custom attributes (see MetaDataGridModel in the sample code). The Grid ensures that ShowForDisplay on the column is set to true (this can also be set by the ScaffoldColumn attribute; ref: http://bradwilson.typepad.com/blog/2009/10/aspnet-mvc-2-templates-part-2-modelmetadata.html). Column headers are set using the DisplayName attribute. Column sorting is set using the OrderBy attribute. The data is formatted using the DisplayFormat attribute. Generic Extension methods for Sorting and Filtering The extension method AsFiltered takes in an IQueryable<T> and uses expression trees to query against the IQueryable data. The query is constructed using the Model metadata and the properties of the T filter (productFilters in our case). Properties in the Model that do not have the SearchFilter attribute are skipped when creating the filter expression tree. It returns an IQueryable<T>. The extension method AsPagination takes in an IQueryable<T> and first ensures that the column being sorted on has the OrderBy attribute. If not, we look for the default OrderBy column ([OrderBy(IsDefault = true)]). We then build an expression tree to sort on this column. We finally hand off the call to the MVCContrib AsPagination, which returns an IPagination<T>.
This type as you can see in the class diagram above is passed to the view and used by the MVCContrib Grid and Pager components. Custom Provider To get the system to recognize our custom attributes, we create our MetadataProvider as mentioned in this article (http://bradwilson.typepad.com/blog/2010/01/why-you-dont-need-modelmetadataattributes.html) protected override ModelMetadata CreateMetadata(IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName) { ModelMetadata metadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);   SearchFilterAttribute searchFilterAttribute = attributes.OfType<SearchFilterAttribute>().FirstOrDefault(); if (searchFilterAttribute != null) { metadata.AdditionalValues.Add(Globals.SearchFilterAttributeKey, searchFilterAttribute); }   OrderByAttribute orderByAttribute = attributes.OfType<OrderByAttribute>().FirstOrDefault(); if (orderByAttribute != null) { metadata.AdditionalValues.Add(Globals.OrderByAttributeKey, orderByAttribute); }   return metadata; } We register our MetadataProvider in Global.asax.cs. protected void Application_Start() { AreaRegistration.RegisterAllAreas();   RegisterRoutes(RouteTable.Routes);   ModelMetadataProviders.Current = new MvcFlan.QueryModelMetaDataProvider(); } Bugs, Comments and Suggestions are welcome! You can download the sample code below. This code is purely experimental. Use at your own risk. Download Sample Code (VS 2010 RTM) MVCNorthwindSales.zip
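To make the filter-building step concrete, here is a heavily trimmed sketch of my own, not the article's actual code (that lives in the attached sample project): it shows how one SearchFilter-style equality check can be turned into a Where clause with expression trees.

    // Builds item => item.Property == value for a single named property and
    // applies it to the query. A simplified stand-in for AsFiltered's plumbing.
    using System;
    using System.Linq;
    using System.Linq.Expressions;

    static class FilterExtensions
    {
        // Usage (hypothetical): products.WhereEqual("CategoryID", (int?)2)
        public static IQueryable<T> WhereEqual<T>(
            this IQueryable<T> source, string propertyName, object value)
        {
            var item = Expression.Parameter(typeof(T), "item");
            var property = Expression.Property(item, propertyName);
            var constant = Expression.Constant(value, property.Type);
            var body = Expression.Equal(property, constant);
            var predicate = Expression.Lambda<Func<T, bool>>(body, item);
            return source.Where(predicate);
        }
    }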


  • Run Windows in Ubuntu with VMware Player

    - by Matthew Guay
    Are you an enthusiast who loves your Ubuntu Linux experience but still needs to use Windows programs? Here's how you can get the full Windows experience on Ubuntu with the free VMware Player. Linux has become increasingly consumer friendly, but still, the vast majority of commercial software is only available for Windows and Macs. Dual-booting between Windows and Linux has been a popular option for years, but this is a frustrating solution since you have to reboot into the other operating system each time you want to run a specific application. With virtualization, you'll never have to make this tradeoff. VMware Player makes it quick and easy to install any edition of Windows in a virtual machine. With VMware's great integration tools, you can copy and paste between your Linux and Windows programs and even run native Windows applications side-by-side with Linux ones. Getting Started Download the latest version of VMware Player for Linux, and select either the 32-bit or 64-bit version, depending on your system. VMware Player is a free download, but requires registration. Sign in with your VMware account, or create a new one if you don't already have one. VMware Player is fairly easy to install on Linux, but you will need to start out the installation from the terminal. First, enter the following to make sure the installer is marked as executable, substituting version/build_number for the version number on the end of the file you downloaded. chmod +x ./VMware-Player-version/build_number.bundle Then, enter the following to start the install, again substituting your version number: gksudo bash ./VMware-Player-version/build_number.bundle You may have to enter your administrator password to start the installation, and then the VMware Player graphical installer will open. Choose whether you want to check for product updates and submit usage data to VMware, and then proceed with the install as normal. VMware Player installed in only a few minutes in our tests, and was immediately ready to run, no reboot required. You can now launch it from your Ubuntu menu: click Applications \ System Tools \ VMware Player. You'll need to accept the license agreement the first time you run it. Welcome to VMware Player! Now you can create new virtual machines and run pre-built ones on your Ubuntu desktop. Install Windows in VMware Player on Ubuntu Now that you've got VMware set up, it's time to put it to work. Click Create a New Virtual Machine as above to start making a Windows virtual machine. In the dialog that opens, select the installer disk or ISO image file that you want to install Windows from. In this example, we select a Windows 7 ISO. VMware will automatically detect the operating system on the disk or image. Click Next to continue. Enter your Windows product key, select the edition of Windows to install, and enter your name and password. You can leave the product key field blank and enter it later. VMware will ask if you want to continue without a product key, so just click Yes to continue. Now enter a name for your virtual machine and select where you want to save it. Note: this will take up at least 15 GB of space on your hard drive during the install, so make sure to save it on a drive with sufficient storage space. You can choose how large you want your virtual hard drive to be; the default is 40 GB, but you can choose a different size if you wish.
The entire amount will not be used up on your hard drive initially, but the virtual drive will increase in size up to your maximum as you add files. Additionally, you can choose if you want the virtual disk stored as a single file or as multiple files. You will see the best performance by keeping the virtual disk as one file, but the virtual machine will be more portable if it is broken into smaller files, so choose the option that will work best for your needs. Finally, review your settings, and if everything looks good, click Finish to create the virtual machine. VMware will take over now, and install Windows without any further input using its Easy Install. This is one of VMware's best features, and is the main reason we find it the easiest desktop virtualization solution to use. Installing VMware Tools VMware Player doesn't include the VMware Tools by default; instead, it automatically downloads them for the operating system you're installing. Once you've downloaded them, it will use those tools anytime you install that OS. If this is your first Windows virtual machine to install, you may be prompted to download and install them while Windows is installing. Click Download and Install so your Easy Install will finish successfully. VMware will then download and install the tools. You may need to enter your administrative password to complete the install. Other than this, you can leave your Windows install unattended; VMware will get everything installed and running on its own. Our test setup took about 30 minutes, and when it was done we were greeted with the Windows desktop ready to use, complete with drivers and the VMware tools. The only thing missing was the Aero glass feature. VMware Player is supposed to support the Aero glass effects in virtual machines, and although this works every time when we use VMware Player on Windows, we could not get it to work in Linux. Other than that, Windows is fully ready to use. You can copy and paste text, images, or files between Ubuntu and Windows, or simply drag-and-drop files between the two. Unity Mode Using Windows in a window is awkward, and makes your Windows programs feel out of place and hard to use. This is where Unity mode comes in. Click Virtual Machine in VMware's menu, and select Enter Unity. Your Windows desktop will now disappear, and you'll see a new Windows menu underneath your Ubuntu menu. This works the same as your Windows Start Menu, and you can open your Windows applications and files directly from it. By default, programs from Windows will have a colored border and a VMware badge in the corner. You can turn this off from the VMware settings pane. Click Virtual Machine in VMware's menu and select Virtual Machine Settings. Select Unity under the Options tab, and uncheck the Show borders and Show badges boxes if you don't want them. Unity makes your Windows programs feel at home in Ubuntu. Here we have Word 2010 and IE8 open beside the Ubuntu Help application. Notice that the Windows applications show up in the taskbar on the bottom just like the Linux programs. If you're using the Compiz graphics effects in Ubuntu, your Windows programs will use them too, including the popular wobbly windows effect. You can switch back to running Windows inside VMware Player's window by clicking the Exit Unity button in the VMware window. Now, whenever you want to run Windows applications in Linux, you can quickly launch them from VMware Player. Conclusion VMware Player is a great way to run Windows on your Linux computer.
It makes it extremely easy to get Windows installed and running, and lets you run your Windows programs seamlessly alongside your Linux ones. VMware products work great in our experience, and VMware Player on Linux was no exception. If you're a Windows user and you'd like to run Ubuntu on Windows, check out our article on how to Run Ubuntu in Windows with VMware Player. Links Download VMware Player 3 (Registration required) Download Windows 7 Enterprise 90-day trial


  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
    One of the things that might be surprising about the LINQ Distinct standard query operator is that it doesn't automatically work properly on custom classes. There are reasons for this, which I'll explain shortly. The example I'll use in this post focuses on pulling a unique list of names to load into a drop-down list. I'll explain the sample application, show you a typical first shot at Distinct, explain why it won't work as you expect, and then demonstrate a solution to make Distinct work with any custom class. The technologies I'm using are LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010. The function of the example program is to show a list of people that I follow. In Twitter API vernacular, these people are called "Friends", though I've never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you'll see my objects named accordingly. Distinct comes into play because I want to have a drop-down list with the names of the friends appearing in the list. Some friends are quite verbose, which means I can't just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters. The sample doesn't do anything with the drop-down list, and I leave it up to your imagination what its practical purpose could be; perhaps a filter for the list if I only want to see a certain person's tweets, or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend. When the program runs, you'll need to authenticate with Twitter, because I'm using OAuth (DotNetOpenAuth) for authentication, and then you'll see the drop-down list of names above the grid with the most recent tweets from friends. Here's what the application looks like when it runs: As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we'll get there soon. This is an ASP.NET MVC 2 application, written with VS 2010.
Here's the View that produces this screen: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %> <%@ Import Namespace="DistinctSelectList.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Home Page </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <fieldset> <legend>Twitter Friends</legend> <div> <%= Html.DropDownListFor( twendVM => twendVM.FriendNames, Model.FriendNames, "<All Friends>") %> </div> <div> <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets) .Name("TwitterFriendsGrid") .Columns(cols => { cols.Template(col => { %> <img src="<%= col.ImageUrl %>" alt="<%= col.ScreenName %>" /> <% }); cols.Bound(col => col.ScreenName); cols.Bound(col => col.Tweet); }) .Render(); %> </div> </fieldset> </asp:Content> As shown above, the Grid is from Telerik's Extensions for ASP.NET MVC. The first column is a template that renders the user's avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections from a TwitterFriendsViewModel class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { /// <summary> For finding friend info on screen </summary> public class TwitterFriendsViewModel { /// <summary> Display names of friends in drop-down list </summary> public List<SelectListItem> FriendNames { get; set; } /// <summary> Display tweets in grid </summary> public List<TweetViewModel> Tweets { get; set; } } } I created the TwitterFriendsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below: namespace DistinctSelectList.Models { /// <summary> Info on friend tweets </summary> public class TweetViewModel { /// <summary> User's avatar </summary> public string ImageUrl { get; set; } /// <summary> User's Twitter name </summary> public string ScreenName { get; set; } /// <summary> Text containing user's tweet </summary> public string Tweet { get; set; } } } The initial Twitter query returns much more information than we need for our purposes, and this is a special class for displaying info in the View. Now you know about the View and how it's constructed. Let's look at the controller next. The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I'll skip the description of the authentication because it's a normal part of using OAuth with LINQ to Twitter. Instead, we'll drill down and focus on the Distinct operator.
However, I'll show you the entire controller, below, so that you can see how it all fits together: using System.Linq; using System.Web.Mvc; using DistinctSelectList.Models; using LinqToTwitter; namespace DistinctSelectList.Controllers { [HandleError] public class HomeController : Controller { private MvcOAuthAuthorization auth; private TwitterContext twitterCtx; /// <summary> Display a list of friends' current tweets </summary> public ActionResult Index() { auth = new MvcOAuthAuthorization(InMemoryTokenManager.Instance, InMemoryTokenManager.AccessToken); string accessToken = auth.CompleteAuthorize(); if (accessToken != null) { InMemoryTokenManager.AccessToken = accessToken; } if (auth.CachedCredentialsAvailable) { auth.SignOn(); } else { return auth.BeginAuthorize(); } twitterCtx = new TwitterContext(auth); var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); var twendsVM = new TwitterFriendsViewModel { Tweets = friendTweets, FriendNames = friendNames }; return View(twendsVM); } public ActionResult About() { return View(); } } } The important parts of the listing above are the LINQ to Twitter queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let's dissect these two statements for clarification and focus on what is happening with Distinct. The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience: var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); The LINQ to Twitter query above simplifies what we need to work with in the View and reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList. This brings us to the focus of this blog post, writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you'll notice that the drop-down list contains many duplicates. This will send you back to the code scratching your head, but there's a reason why this happens. To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>. In the no-parameter case above, Distinct will call EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list.
You don't have problems with the built-in types, such as string, int, DateTime, etc., because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don't implement IEquatable<T>. So, what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference-type objects. You don't have this problem with value types because the default implementation of Object.Equals is bitwise equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application. So, the reason why Distinct didn't produce the results we wanted is that we used a type that doesn't define its own equality, and Distinct used the default reference equality. This resulted in all objects being included in the results because they are all separate instances in memory with unique references. As you might have guessed, the solution to the problem is to use the second overload of Distinct that accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEqualityComparer<T>, but SelectListItem belongs to the .NET Framework Class Library. Therefore, the solution is to create a custom type to implement IEqualityComparer<T>, as in the SelectListItemComparer class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { public class SelectListItemComparer : EqualityComparer<SelectListItem> { public override bool Equals(SelectListItem x, SelectListItem y) { return x.Value.Equals(y.Value); } public override int GetHashCode(SelectListItem obj) { return obj.Value.GetHashCode(); } } } The SelectListItemComparer class above doesn't implement IEqualityComparer<SelectListItem>, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer is an abstract class that implements IEqualityComparer<T> with abstract Equals and GetHashCode methods. For the purposes of this application, the SelectListItem.Value property is sufficient to determine if two items are equal. Since SelectListItem.Value is of type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class. You might have other criteria in your own object and would need to define what it means for your object to be equal. Now that we have an IEqualityComparer<SelectListItem>, let's fix the problem. The code below modifies the query where we want distinct values: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct(new SelectListItemComparer()) .ToList(); Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list will behave as you expect, showing only a unique set of names. In addition to Distinct, other LINQ Standard Query Operators have overloads that accept an IEqualityComparer<T>. You can use the same techniques as shown here, with SelectListItemComparer, with those other operators as well.
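For completeness, here is a small sketch of my own (not from the post) showing the other route mentioned above: if the element type is yours, implementing IEquatable<T> and overriding GetHashCode lets the parameterless Distinct() behave, because EqualityComparer<T>.Default picks the implementation up. FriendName is a hypothetical type invented for this illustration.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical element type; because it implements IEquatable<T>,
    // EqualityComparer<T>.Default no longer falls back to reference equality.
    public class FriendName : IEquatable<FriendName>
    {
        public string ScreenName { get; set; }

        public bool Equals(FriendName other)
        {
            return other != null && ScreenName == other.ScreenName;
        }

        public override bool Equals(object obj)
        {
            return Equals(obj as FriendName);
        }

        public override int GetHashCode()
        {
            return ScreenName == null ? 0 : ScreenName.GetHashCode();
        }
    }

    class Program
    {
        static void Main()
        {
            var names = new List<FriendName>
            {
                new FriendName { ScreenName = "JoeMayo" },
                new FriendName { ScreenName = "JoeMayo" },
                new FriendName { ScreenName = "LinqToTwitter" }
            };

            // Prints 2: duplicates collapse even without passing a comparer.
            Console.WriteLine(names.Distinct().Count());
        }
    }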
Now you know how to resolve problems with getting Distinct to work properly and also have a way to fix problems with other operators that require equality comparisons. @JoeMayo

