Search Results

Search found 4313 results on 173 pages for 'mod rewrite'.


  • Does my fat-client application belong in the MVC pattern?

    - by boatingcow
    The web-based application I’m currently working on is growing arms and legs! It’s basically an administration system which helps users to keep track of bookings, user accounts, invoicing etc. It can also be accessed via a couple of different websites using a fairly crude API. The fat-client design loosely follows the MVC pattern (or perhaps MVP) with a php/MySQL backend, Front Controller, several dissimilar Page Controllers, a liberal smattering of object-oriented and procedural Models, a confusing bunch of Views and templates, some JavaScripts, CSS files and Flash objects. The programmer in me is a big fan of the principle of “Separation of Concerns” and on that note, I’m currently trying to figure out the best way to separate and combine the various concerns as the project grows and more people contribute to it. The problem we’re facing is that although JavaScript (or Flash with ActionScript) is normally written with the template, hence part of the View and decoupled from the Controller and Model, we find that it actually encompasses the entire MVC pattern... Swap an image with an onmouseover event - that’s Behaviour. Render a datagrid - we’re manipulating the View. Send the result of reordering a list via AJAX - now we’re in Control. Check a form field to see if an email address is in a valid format - we’re consulting the Model. Is it wise to let the database people write up the validation Model with jQuery? Can the php programmers write the necessary Control structures in JavaScript? Can the web designers really write a functional AJAX form for their View? Should there be a JavaScript overlord for every project? If the MVC pattern could be applied to the people instead of the code, we would end up with this: Model - the database boffins - “SELECT * FROM mind WHERE interested IS NULL” Control - pesky programmers - “class Something extends NothingAbstractClass{…}” View - traditionally the domain of the graphic/web designer - “” …and a new layer: Behaviour - interaction and feedback designer - “CSS3 is the new black…” So, we’re refactoring and I’d like to stick to best practice design, but I’m not sure how to proceed. I don’t want to reinvent the wheel, so would anyone have any hints or tips as to what pattern I should be looking at or any code samples from someone who’s already done the dirty work? As the programmer guy, how can I rewrite the app for backend and front end whilst keeping the two separate? And before you ask, yes I’ve looked at Zend, CodeIgnitor, Symfony, etc., and no, they don’t seem to cross the boundary between server logic and client logic!

    Read the article

  • Java JMS Messaging

    - by London
    Hello, I have a working example of sending a message to the server and the server receiving it via qpid messaging. Here is a simple hello world that sends to the server: http://pastebin.com/M7mSECJn And here is the server which receives requests and sends a response (the current client doesn't receive the response): http://pastebin.com/2mEeuzrV Here is my property file: http://pastebin.com/TLEFdpXG They all work perfectly; I can see the messages in the qpid queue via the Qpid JMX management console. These examples are downloaded from https://svn.apache.org/repos/asf/qpid/trunk/qpid/java/client/example (someone else may need them too).

    I've done JBoss messaging using Spring before, but I can't manage to do the same with qpid. With JBoss, inside the applicationContext I had the beans jndiTemplate, connectionFactory, queueDestination and jmsContainer, like this:

        <!-- Queue configuration -->
        <bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
            <property name="environment">
                <props>
                    <prop key="java.naming.factory.initial">org.jnp.interfaces.NamingContextFactory</prop>
                    <prop key="java.naming.provider.url">jnp://localhost:1099</prop>
                    <prop key="java.naming.factory.url.pkgs">org.jboss.naming:org.jnp.interfaces</prop>
                    <prop key="java.naming.security.principal">admin</prop>
                    <prop key="java.naming.security.credentials">admin</prop>
                </props>
            </property>
        </bean>

        <bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiTemplate" ref="jndiTemplate" />
            <property name="jndiName" value="ConnectionFactory" />
        </bean>

        <bean id="queueDestination" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiTemplate" ref="jndiTemplate" />
            <property name="jndiName">
                <value>queue/testQueue</value>
            </property>
        </bean>

        <bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
            <property name="connectionFactory" ref="connectionFactory" />
            <property name="destination" ref="queueDestination" />
            <property name="messageListener" ref="listener" />
        </bean>

    ...and of course the sender and the listener. Now I'd like to rewrite this qpid example using the Spring context approach. Can anyone help me?

    Read the article

  • Tearing my hair out - ASP.Net AJAX AutoComplete not working

    - by Dave
    Hope someone can help with this. I've been up and down the web and through this site looking for an answer, but still can't get the Autocomplete AJAX control to work. I've gone from trying to include it in an existing site to stripping it right back to a very basic form, and it's still not functioning. I'm having a little more luck using Page Methods rather than a local webservice, so here is my code:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="droptest.aspx.cs" Inherits="droptest" %>
        <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
                <asp:ScriptManager ID="ScriptManager1" EnablePageMethods="true" runat="server">
                </asp:ScriptManager>
                <cc1:AutoCompleteExtender ID="AutoCompleteExtender1" runat="server"
                    MinimumPrefixLength="1" ServiceMethod="getResults" TargetControlID="TextBox1">
                </cc1:AutoCompleteExtender>
            </form>
        </body>
        </html>

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.Script.Services;
        using System.Web.Services;

        public partial class droptest : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }

            [WebMethod]
            public string[] getResults(string prefixText, int count)
            {
                string[] test = new string[5] { "One", "Two", "Three", "Four", "Five" };
                return test;
            }
        }

    Tried to keep things as simple as possible, but all I get is either the autocomplete dropdown showing the source of the page (starting with the <!DOCTYPE...) letter by letter, or in IE7 it just says "UNDEFINED" all the way down the list. I'm using Visual Web Developer 2008 at the moment, and this is running on localhost. I think I've exhausted all the "Try this..." options I can find, everything from adding in [ScriptMethod] to changing things in Web.Config. Is there anything obviously wrong with this code? The only other thing that may be having an effect is that in Global.asax I do a Context.RewritePath to rewrite URLs - does this have any effect on AJAX? Thanks for any help you can give.
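
    One thing worth checking (not part of the original post): when AutoCompleteExtender calls a page method rather than a separate web service, the toolkit samples declare that method as public static and usually add [ScriptMethod] alongside [WebMethod]. The getResults above is an instance method, which commonly produces exactly these raw-page-source/"UNDEFINED" symptoms. A minimal code-behind sketch of that variant:

        using System.Web.Script.Services;
        using System.Web.Services;

        public partial class droptest : System.Web.UI.Page
        {
            protected void Page_Load(object sender, System.EventArgs e) { }

            // Page methods used by AutoCompleteExtender must be static, and the
            // (string prefixText, int count) signature is what the extender expects.
            [WebMethod]
            [ScriptMethod]
            public static string[] getResults(string prefixText, int count)
            {
                // Same dummy data as in the question; a real implementation would
                // filter on prefixText and return at most 'count' items.
                return new string[] { "One", "Two", "Three", "Four", "Five" };
            }
        }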

    Read the article

  • Problem with jQuery selector and MasterPage

    - by Daemon
    Hi, I have a problem with a master page containing an asp:TextBox that I'm trying to access using jQuery. I have read lots of threads regarding this and tried all the different approaches I have seen, but regardless, the end result ends up as Undefined. This is the relevant part of the MasterPage code:

        <p><asp:Label ID="Label1" AssociatedControlID="osxinputfrom" runat="server">Navn</asp:Label><asp:TextBox CssClass="osxinputform" ID="osxinputfrom" runat="server"></asp:TextBox></p>

    When I click the button, the following code from a jQuery .js file is run:

        show: function (d) {
          $('#osx-modal-content .osxsubmitbutton').click(function (e) {
            e.preventDefault();
            if (OSX.validate()) {
              $('#osx-modal-data').fadeOut(200);
              d.container.animate({ height: 80 }, 500, function () {
                $('#osx-modal-data').html("<h2>Sender...</h2>").fadeIn(250, function () {
                  $.ajax({
                    type: "POST",
                    url: "Default.aspx/GetDate",
                    data: "{'from':'" + $("#osxinputfrom").val() + "','mailaddress':'" + $("#osxinputmail").val() + "','header':'Test3','message':'Test4'}",
                    contentType: "application/json; charset=utf-8",
                    dataType: "json",
                    success: function (msg) {
                      $('#osx-modal-data').fadeOut(200, function () {
                        $('#osx-modal-data').html('<h2>Meldingen er sendt!</h2>');
                        $('#osx-modal-data').fadeIn(200);
                      });
                    },
                    error: function (msg) {
                      $('#osx-modal-data').fadeOut(200, function () {
                        $('#osx-modal-data').html('<h2>Feil oppstod ved sending av melding!</h2>');
                        $('#osx-modal-data').fadeIn(200);
                      });
                    }
                  });
                });
              });
            }
            else {
              $('#osxinputstatus').fadeOut(250, function () {
                $('#osxinputstatus').html('<p id="osxinputstatus">' + OSX.message + '</a>');
                $('#osxinputstatus').fadeIn(250);
              });
            }
          });
        },

    So the problem here is that $("#osxinputfrom").val() evaluates to Undefined. I understand that the master page will add a prefix to the ID, so I tried using the ID from the rendered page, which ends up as ct100_osxinputfrom, and I also tried some other hints that I found while searching, like $("#<%=osxinputfrom.ClientID%"), but it still ends up as Undefined in the method that is called from the jQuery ajax call. The third and fourth parameters to the ajax function, which are hardcoded as Test3 and Test4, come through fine in the C# backend method. So my question is simply: how can I rewrite the jQuery selector to fetch the correct value from the textbox? (Before I used master pages it worked fine, by the way.) Best regards, Daemon
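
    A possible workaround (a sketch, not from the original post): since an external .js file can't evaluate <%= %> blocks, the master page's code-behind can publish the generated ClientID into a JavaScript variable that the script then uses to build its selector. The osxIds name below is made up for illustration; osxinputmail would be published the same way:

        // In the master page's code-behind (assumed to exist); emits something like
        // window.osxIds = { from: 'ctl00_osxinputfrom' }; before the page scripts run.
        protected void Page_Load(object sender, EventArgs e)
        {
            string script = "window.osxIds = { from: '" + osxinputfrom.ClientID + "' };";
            Page.ClientScript.RegisterStartupScript(this.GetType(), "osxIds", script, true);
        }

    The .js file would then select with $('#' + window.osxIds.from) instead of the hard-coded #osxinputfrom.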

    Read the article

  • J: Self-reference in bubble sort tacit implementation

    - by Yasir Arsanukaev
    Hello people! Since I'm beginner in J I've decided to solve a simple task using this language, in particular implementing the bubblesort algorithm. I know it's not idiomatically to solve such kind of problem in functional languages, because it's naturally solved using array element transposition in imperative languages like C, rather than constructing modified list in declarative languages. However this is the code I've written: (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: # Let's apply it to an array: (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: # 5 3 8 7 2 2 3 5 7 8 The thing that confuses me is $: referring to the statement within the outermost parentheses. Help says that: $: denotes the longest verb that contains it. The other book (~ 300 KiB) says: 3+4 7 5*20 100 Symbols like + and * for plus and times in the above phrases are called verbs and represent functions. You may have more than one verb in a J phrase, in which case it is constructed like a sentence in simple English by reading from left to right, that is 4+6%2 means 4 added to whatever follows, namely 6 divided by 2. Let's rewrite my code snippet omitting outermost ()s: ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: # 5 3 8 7 2 2 3 5 7 8 Reuslts are the same. I couldn't explain myself why this works, why only ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is treated as the longest verb for $: but not the whole expression ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: # and not just (<./@(2&{.)), $:@((>./@(2&{.)),2&}.), because if ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is a verb, it should also form another verb after conjunction with #, i. e. one might treat the whole sentence (first snippet) as a verb. Probably there's some limit for the verb length limited by one conjunction. Look at the following code (from here): factorial =: (* factorial@<:) ^: (1&<) factorial 4 24 factorial within expression refers to the whole function, i. e. (* factorial@<:) ^: (1&<). Following this example I've used a function name instead of $:: bubblesort =: (((<./@(2&{.)), bubblesort@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: # bubblesort 5 3 8 7 2 2 3 5 7 8 I expected bubblesort to refer to the whole function, but it doesn't seem true for me since the result is correct. Also I'd like to see other implementations if you have ones, even slightly refactored. Thanks.

    Read the article

  • CSS selectors : should I make my CSS easier to read or optimise the speed

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check whether there were any improvements I could make so that the site would load faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand what PageSpeed tells me correctly, it's much more efficient for the browser to match directly against a class name. That makes sense to me, but it also means that I need to put more CSS classes in my HTML, and it makes my .css file harder to read. I usually tend to write my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    which makes it easy to read: I know this will affect the main content, that it affects something in a paragraph tag (so I won't start putting all sorts of layout code in it) that describes a product, and that it's something that needs emphasis. However, it seems I should rewrite it as

        .priceTag { ... }

    which removes all context information about the style. And if I want differently formatted price tags (for example, one in a list in the sidebar and one in a paragraph), I need to use something like

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    which really annoys me, since I seem to be duplicating the semantics of the HTML in my classes. It also means I can't put the common style in an unqualified .priceTag { ... }, so I need to replicate the style in both CSS rules, making changes harder. (Although for that I could use multiple class selectors, but IE6 doesn't support them.) I believe making code harder to read for the sake of speed has never really been considered good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I am wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and remove my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

    Read the article

  • Serializing Class Derived from Generic Collection yet Deserializing the Generic Collection

    - by Stacey
    I have a Repository Class with the following method...

        public T Single<T>(Predicate<T> expression)
        {
            using (var list = (Models.Collectable<T>)System.Xml.Serializer.Deserialize(typeof(Models.Collectable<T>), FileName))
            {
                return list.Find(expression);
            }
        }

    Where Collectable is defined..

        [Serializable]
        public class Collectable<T> : List<T>, IDisposable
        {
            public Collectable() { }
            public void Dispose() { }
        }

    And an Item that uses it is defined..

        [Serializable]
        [System.Xml.Serialization.XmlRoot("Titles")]
        public partial class Titles : Collectable<Title> { }

    The problem is when I call the method, it expects "Collectable" to be the XmlRoot, but the XmlRoot is "Titles" (all of object Title). I have several classes that are collected in .xml files like this, but it seems pointless to rewrite the basic methods for loading each up when the generic accessors do it - but how can I enforce the proper root name for each file without hard coding methods for each one? The [System.Xml.Serialization.XmlRoot] seems to be ignored. When called like this...

        var titles = Repository.List<Models.Title>();

    I get the exception <Titles xmlns=''> was not expected. The XML is formatted such as...

        <?xml version="1.0" encoding="utf-16"?>
        <Titles xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Title>
            <Id>442daf7d-193c-4da8-be0b-417cec9dc1c5</Id>
          </Title>
        </Titles>

    Here is the deserialization code.

        public static T Deserialize<T>(String xmlString)
        {
            System.Xml.Serialization.XmlSerializer XmlFormatSerializer = new System.Xml.Serialization.XmlSerializer(typeof(T));
            StreamReader XmlStringReader = new StreamReader(xmlString);
            //XmlTextReader XmlFormatReader = new XmlTextReader(XmlStringReader);
            try
            {
                return (T)XmlFormatSerializer.Deserialize(XmlStringReader);
            }
            catch (Exception e)
            {
                throw e;
            }
            finally
            {
                XmlStringReader.Close();
            }
        }
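
    One approach worth sketching here (my own, not from the post): XmlSerializer has a constructor overload that takes an explicit XmlRootAttribute, so the generic repository can keep deserializing Collectable<T> while telling the serializer that the document root is "Titles" (or whatever name the concrete type declares). The names below are illustrative, and note that serializers created with this overload are not cached by the framework, so it is usually worth caching them yourself:

        using System.IO;
        using System.Xml.Serialization;

        public static class XmlLoader
        {
            // Deserializes a type from a file whose root element name differs from the
            // type name, e.g. a <Titles> document read as Collectable<Title>.
            public static TResult Deserialize<TResult>(string fileName, string rootName)
            {
                var serializer = new XmlSerializer(typeof(TResult), new XmlRootAttribute(rootName));
                using (var reader = new StreamReader(fileName))
                {
                    return (TResult)serializer.Deserialize(reader);
                }
            }
        }

        // hypothetical usage from the repository:
        // var titles = XmlLoader.Deserialize<Models.Collectable<Models.Title>>(FileName, "Titles");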

    Read the article

  • Which mobile operating system should I code for?

    - by samgoody
    It seems as though mobile computing has fully arrived. I would like to rewrite two of our programs for mobile devices, but am a bit lost as to which platform to target. Complicating this decision: I would need to learn the relevant languages and IDEs - my coding to date has been almost all web based (PHP, JS, Actionscript, etc. Some ASPX). Most users seem to be religious about their mobile decision, so oral conversations leave me more confused then enlightened. I do not yet own a smartphone - will have to buy one once I know which platform to be aiming for. Both of my programs are more for business users, (one is only useful for C.P.A.s). I am a single developer, and cannot develop for more than one platform at a time. Getting it right is important. Based on what I've found on the web, I would've expected RIM to be a shoo-in, and the general order to be as follows: RIM Blackberry - More of them than any other brand. Despite naysayers, they've had double the sales (or perhaps 5X the sales) of any other smartphone, and have continued to grow. And, they have business users. Android - According to Schmidt, they have outsold everyone else except RIM (though I can't find where I read that now), and they are just getting started. According to Comscore, they are already at 8% of the market and expected to hit Shcmidt's claims within six months. Nokia - The largest worldwide. If they would just make up between Maemo or Symbian, I would be far less confused. iPhone - Much more competition by other apps, fewer sales to be had, and a overlord that can delay or cancel my app at any time. Is Cocoa hard to learn? Windows Mobile - Word is that version 7 will not be backwards compatible and losing market share. Palm WebOS - Perhaps this should go first, as it is the only one that offers tools to make my life easy as a web application developer. No competition in marketplace. But not very many users either. However, a search on StackOverflow shows a hugely disproportionate number of iPhone questions versus Blackberry. Likewise, there are clearly more apps on iPhone, so it must be getting developer love. What is the one platform I should develop for? Please back up your answer with the logic.

    Read the article

  • How to write this Linq SQL as a Dynamic Query (using strings)?

    - by Dr. Zim
    Skip to the "specific question" as needed. Some background: The scenario: I have a set of products with a "drill down" filter (Query Object) populated with DDLs. Each progressive DDL selection will further limit the product list as well as what options are left for the DDLs. For example, selecting a hammer out of tools limits the Product Sizes to only show hammer sizes. Current setup: I created a query object, sent it to a repository, and fed each option to a SQL "table valued function" where null values represent "get all products". I consider this a good effort, but far from DDD acceptable. I want to avoid any "programming" in SQL, hopefully doing everything with a repository. Comments on this topic would be appreciated. Specific question: How would I rewrite this query as a Dynamic Query? A link to something like 101 Linq Examples would be fantastic, but with a Dynamic Query scope. I really want to pass to this method the field in quotes "" for which I want a list of options and how many products have that option. from p in db.Products group p by p.ProductSize into g select new Category { PropertyType = g.Key, Count = g.Count() } Each DDL option will have "The selection (21)" where the (21) is the quantity of products that have that attribute. Upon selecting an option, all other remaining DDLs will update with the remaining options and counts. Edit: Additional notes: .OrderBy("it.City") // "it" refers to the entire record .GroupBy("City", "new(City)") // This produces a unique list of City .Select("it.Count()") //This gives a list of counts... getting closer .Select("key") // Selects a list of unique City .Select("new (key, count() as string)") // +1 to me LOL. key is a row of group .GroupBy("new (City, Manufacturer)", "City") // New = list of fields to group by .GroupBy("City", "new (Manufacturer, Size)") // Second parameter is a projection Product .Where("ProductType == @0", "Maps") .GroupBy("new(City)", "new ( null as string)")// Projection not available later? .Select("new (key.City, it.count() as string)")// GroupBy new makes key an object Product .Where("ProductType == @0", "Maps") .GroupBy("new(City)", "new ( null as string)")// Projection not available later? .Select("new (key.City, it as object)")// the it object is the result of GroupBy var a = Product .Where("ProductType == @0", "Maps") .GroupBy("@0", "it", "City") // This fails to group Product at all .Select("new ( Key, it as Product )"); // "it" is property cast though What I have learned so far is LinqPad is fantastic, but still looking for an answer. Eventually, completely random research like this will prevail I guess. LOL. Edit:

    Read the article

  • C# Chain-of-responsibility with delegates

    - by nettguy
    For my understanding purpose i have implemented Chain-Of-Responsibility pattern. //Abstract Base Type public abstract class CustomerServiceDesk { protected CustomerServiceDesk _nextHandler; public abstract void ServeCustomers(Customer _customer); public void SetupHadler(CustomerServiceDesk _nextHandler) { this._nextHandler = _nextHandler; } } public class FrontLineServiceDesk:CustomerServiceDesk { public override void ServeCustomers(Customer _customer) { if (_customer.ComplaintType == ComplaintType.General) { Console.WriteLine(_customer.Name + " Complaints are registered ; will be served soon by FrontLine Help Desk.."); } else { Console.WriteLine(_customer.Name + " is redirected to Critical Help Desk"); _nextHandler.ServeCustomers(_customer); } } } public class CriticalIssueServiceDesk:CustomerServiceDesk { public override void ServeCustomers(Customer _customer) { if (_customer.ComplaintType == ComplaintType.Critical) { Console.WriteLine(_customer.Name + "Complaints are registered ; will be served soon by Critical Help Desk"); } else if (_customer.ComplaintType == ComplaintType.Legal) { Console.WriteLine(_customer.Name + "is redirected to Legal Help Desk"); _nextHandler.ServeCustomers(_customer); } } } public class LegalissueServiceDesk :CustomerServiceDesk { public override void ServeCustomers(Customer _customer) { if (_customer.ComplaintType == ComplaintType.Legal) { Console.WriteLine(_customer.Name + "Complaints are registered ; will be served soon by legal help desk"); } } } public class Customer { public string Name { get; set; } public ComplaintType ComplaintType { get; set; } } public enum ComplaintType { General, Critical, Legal } void Main() { CustomerServiceDesk _frontLineDesk = new FrontLineServiceDesk(); CustomerServiceDesk _criticalSupportDesk = new CriticalIssueServiceDesk(); CustomerServiceDesk _legalSupportDesk = new LegalissueServiceDesk(); _frontLineDesk.SetupHadler(_criticalSupportDesk); _criticalSupportDesk.SetupHadler(_legalSupportDesk); Customer _customer1 = new Customer(); _customer1.Name = "Microsoft"; _customer1.ComplaintType = ComplaintType.General; Customer _customer2 = new Customer(); _customer2.Name = "SunSystems"; _customer2.ComplaintType = ComplaintType.Critical; Customer _customer3 = new Customer(); _customer3.Name = "HP"; _customer3.ComplaintType = ComplaintType.Legal; _frontLineDesk.ServeCustomers(_customer1); _frontLineDesk.ServeCustomers(_customer2); _frontLineDesk.ServeCustomers(_customer3); } Question Without breaking the chain-of-responsibility ,how can i apply delegates and events to rewrite the code?
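
    One way to express the same chain with delegates (a sketch using my own names and the Customer/ComplaintType types from the code above, not a drop-in rewrite): each desk becomes an Action<Customer> that either handles the complaint or invokes the next delegate it captured, so the chain is still assembled once at setup time and no handler needs to know the whole chain:

        using System;

        public static class ServiceDeskChain
        {
            // Builds the chain and returns its entry point (the front-line desk).
            public static Action<Customer> Build()
            {
                Action<Customer> legal = c =>
                {
                    if (c.ComplaintType == ComplaintType.Legal)
                        Console.WriteLine(c.Name + " complaints are registered; will be served soon by the legal help desk");
                };

                Action<Customer> critical = c =>
                {
                    if (c.ComplaintType == ComplaintType.Critical)
                        Console.WriteLine(c.Name + " complaints are registered; will be served soon by the critical help desk");
                    else
                        legal(c);                    // pass along the chain
                };

                Action<Customer> frontLine = c =>
                {
                    if (c.ComplaintType == ComplaintType.General)
                        Console.WriteLine(c.Name + " complaints are registered; will be served soon by the front-line help desk");
                    else
                        critical(c);
                };

                return frontLine;
            }
        }

        // usage: var serve = ServiceDeskChain.Build(); serve(_customer1); serve(_customer2); serve(_customer3);

    An event-based variant would have each desk expose an "unhandled" event that the next desk subscribes to; the closure version above keeps the sketch short.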

    Read the article

  • IIS doing unexpected redirect

    - by user2967489
    I have website abc.com and abc.co.in.I have two webservers also. The following issue happens only in abc.co.in with same application deployed on same. We have written a custom IHttpModule and do a rewrite to abc.co.in?some=data. Expected behavior: When user enters some.abc.co.in the expected behavior is browser still display some.abc.co.in but internally call abc.co.in?some=data Actual behavior: The page is rendered properly but in browser the URL changes to some.abc.co.in?some=data I checked what is happening 1.First the server receives the request and does a 301 redirect. 2.The redirect location is some.abc.co.in?some=data I am stuck in this for a day and critical to fix to make our site up and running. How to debug this issue further ?.Any one can think of possible cause? ETW Trace shows <ApplicationData> <TraceData> <DataItem> <OldUrl>/</OldUrl> <NewUrl>/fp?&id=hazzel&params=</NewUrl> </DataItem> </TraceData> </ApplicationData> <ApplicationData> <TraceData> <DataItem> <ModuleName>DefaultDocumentModule</ModuleName> <Notification>128</Notification> <HttpStatus>301</HttpStatus> <HttpReason>Moved Permanently</HttpReason> </DataItem> </TraceData> </ApplicationData> <ApplicationData> <TraceData> <DataItem> <Headers>Content-Type: text/html; charset=UTF-8 Location: http://some.abc.co.in/fp/?id=data Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET </Headers> </DataItem> </TraceData> </ApplicationData>
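
    Reading the trace, one plausible explanation (an assumption on my part, not confirmed in the post) is that the module rewrites to /fp without a trailing slash, and the DefaultDocumentModule then issues its usual courtesy 301 to /fp/, which is what exposes the rewritten query string to the browser. A sketch of a BeginRequest rewrite that targets the folder with its trailing slash; the class name and host-detection logic here are illustrative:

        using System;
        using System.Web;

        public class SubdomainRewriteModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext ctx = app.Context;
                    string host = ctx.Request.Url.Host;              // e.g. "some.abc.co.in"
                    if (host.EndsWith(".abc.co.in", StringComparison.OrdinalIgnoreCase)
                        && host.Split('.').Length == 4)              // crude "has a subdomain" check
                    {
                        string sub = host.Substring(0, host.IndexOf('.'));
                        // Rewrite internally (no redirect); the trailing slash keeps the
                        // DefaultDocumentModule from issuing a courtesy 301.
                        ctx.RewritePath("/fp/", null, "id=" + sub);
                    }
                };
            }

            public void Dispose() { }
        }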

    Read the article

  • php adding images to another image, exact positioning

    - by user271619
    I have a cool snippet of code that works well, except for one thing. The code takes an icon I want to add to an existing picture, and I can position it where I want to, which is exactly what I need. However, I'm stuck on one thing concerning the placement. The code's "starting position" (on the main image, navIcons.png) is from the bottom right. I have 2 variables: $move_left = 10; and $move_up = 8;. That means I can position the icon.png 10px left and 8px up from the bottom right corner. I really want to start the positioning from the top left of the image, so that I'm really moving the icon 10px right and 8px down from the top left of the main image. Can someone look at my code and see if I'm just missing something that inverts that starting position?

        function attachIcon($imgname) {
            $mark = imagecreatefrompng($imgname);
            imagesavealpha($mark, true);
            list($icon_width, $icon_height) = getimagesize($imgname);

            $img = imagecreatefrompng('images/sprites/navIcons.png');
            imagesavealpha($img, true);

            $move_left = 10;
            $move_up = 9;

            list($mainpic_width, $mainpic_height) = getimagesize('images/sprites/navIcons.png');

            imagecopy($img, $mark,
                      $mainpic_width - $icon_width - $move_left,
                      $mainpic_height - $icon_height - $move_up,
                      0, 0, $icon_width, $icon_height);

            imagepng($img); // display the image + positioned icon in the browser
            //imagepng($img, 'newnavIcon.png'); // rewrite the image with icon attached.
        }

        header('Content-Type: image/png');
        attachIcon('icon.png');
        ?>

    For those who are wondering why I'd even bother doing this: in a nutshell, I like to add 16x16 icons to one single image (a sprite), while using CSS to display each individual icon. This involves me downloading the sprite image, opening Photoshop, adding the new icon (and positioning it), and re-uploading it to the server. Not a massive ordeal, but just having fun with PHP.

    Read the article

  • XPathNavigator in Silverlight

    - by vladimir
    I have a code library that makes heavy use of XPathNavigator to parse some specific xml document. The xml document is cross-referenced, meaning that an element can reference another which has not yet been encountered during parsing: <ElementA ...> <DependentElementX id="1234"> </ElementA> <ElementX id="1234" .../> The document doesn't really look like this, but the point is that 1) there is an xml schema that enforces the overall document structure, 2) elements inside the document can reference each other using some IDs, and 3) there is quite a few such cross references between different elements in the document. The document is parsed in two phases. In the first pass I walk through the document XPathDocument doc = ...; XPathNavigator nav = doc.CreateNavigator(); nav.MoveToRoot(); nav.MoveToFirstChild()... and occasionally 'bookmark' the current position (element) in the document using XPathNavigator.Clone() method. This gives me a lightweight instance of an XPathNavigator which I can store somewhere and use later to jump back to a particular place (element) in my document. Once I have enough information collected in the first pass (for example, I have made sure there is indeed an ElementX with an id='1234'), I jump back to saved bookmarks (using those saved XPathNavigators) and complete the parsing. Well, now I'm about to use this library in Silverlight 3.0 and to my horror the XPathNavigator is not in the System.Xml assembly. Questions: 1) Am I missing something obvious (i.e. XPathNavigator does exist in some shape or form, for example in a toolkit or a freeware library)? 2) If I do have to make modifications in the code, what would be the best way to go? Ideally, I would like to make minimal changes, not to rewrite 80% of the code just to be able to use something like XLinq. To resume, in case I have to give up XPathNavigator, all I need is a way to bookmark places in my document and to get back to them so that I can continue to iterate from where I left off. Thanks in advance for any help/ideas.
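
    If a LINQ to XML port does turn out to be necessary, one pattern worth noting (a sketch, with element names taken from the fragment above but otherwise invented): XElement references can stand in for the cloned navigators, since holding on to an element is effectively a bookmark you can return to in the second pass:

        using System.Collections.Generic;
        using System.Xml.Linq;

        public class TwoPassParser
        {
            public void Parse(XDocument doc)
            {
                var ids = new HashSet<string>();          // ids seen in pass one
                var bookmarks = new List<XElement>();     // elements to revisit

                // Pass 1: collect ids and "bookmark" elements with unresolved references.
                foreach (XElement el in doc.Root.Elements())
                {
                    if (el.Name.LocalName == "ElementX")
                        ids.Add((string)el.Attribute("id"));
                    else
                        bookmarks.Add(el);                // e.g. ElementA with DependentElementX inside
                }

                // Pass 2: jump back to the bookmarked elements and finish parsing,
                // now that every cross-referenced id is known.
                foreach (XElement el in bookmarks)
                {
                    // resolve el's DependentElementX references against 'ids' here
                }
            }
        }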

    Read the article

  • MemoryStream, XmlTextWriter and Warning 4 CA2202 : Microsoft.Usage

    - by rasx
    The Run Code Analysis command in Visual Studio 2010 Ultimate returns a warning when it sees a certain pattern with MemoryStream and XmlTextWriter. This is the warning:

        Warning 7 CA2202 : Microsoft.Usage : Object 'ms' can be disposed more than once in method 'KinteWritePages.GetXPathDocument(DbConnection)'. To avoid generating a System.ObjectDisposedException you should not call Dispose more than one time on an object.: Lines: 421 C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 421 Songhay.DataAccess.KinteWritePages

    This is the form:

        static XPathDocument GetXPathDocument(DbConnection connection)
        {
            XPathDocument xpDoc = null;
            var ms = new MemoryStream();
            try
            {
                using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
                {
                    using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                    {
                        writer.WriteStartDocument();
                        writer.WriteStartElement("data");
                        do
                        {
                            while (reader.Read())
                            {
                                writer.WriteStartElement("item");
                                for (int i = 0; i < reader.FieldCount; i++)
                                {
                                    writer.WriteRaw(String.Format("<{0}>{1}</{0}>", reader.GetName(i), reader[i].ToString()));
                                }
                                writer.WriteFullEndElement();
                            }
                        } while (reader.NextResult());
                        writer.WriteFullEndElement();
                        writer.WriteEndDocument();
                        writer.Flush();
                        ms.Position = 0;
                        xpDoc = new XPathDocument(ms);
                    }
                }
            }
            finally
            {
                ms.Dispose();
            }
            return xpDoc;
        }

    The same kind of warning is produced for this form:

        XPathDocument xpDoc = null;
        using (var ms = new MemoryStream())
        {
            using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
            {
                using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                {
                    //...
                }
            }
        }
        return xpDoc;

    By the way, the following form produces another warning:

        XPathDocument xpDoc = null;
        var ms = new MemoryStream();
        using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
        {
            using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
            {
                //...
            }
        }
        return xpDoc;

    The above produces the warning:

        Warning 7 CA2000 : Microsoft.Reliability : In method 'KinteWritePages.GetXPathDocument(DbConnection)', object 'ms' is not disposed along all exception paths. Call System.IDisposable.Dispose on object 'ms' before all references to it are out of scope. C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 383 Songhay.DataAccess.KinteWritePages

    In addition to the following, what are my options?

    1. Suppress warning CA2202.
    2. Suppress warning CA2000 and hope that Microsoft is disposing of the MemoryStream (because Reflector is not showing me the source code).
    3. Rewrite my legacy code to recognize the wonderful XDocument and LINQ to XML.
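
    For what it's worth, the pattern usually suggested for this shape of CA2202 (sketched here against the first form above, not a full rewrite of the method) is to transfer ownership of the MemoryStream to the writer and null out the local, so that exactly one of the writer's Dispose or the finally block releases it:

        XPathDocument xpDoc = null;
        MemoryStream ms = null;
        try
        {
            ms = new MemoryStream();
            using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
            {
                MemoryStream buffer = ms;
                ms = null;                       // the writer now owns the stream
                // ... WriteStartDocument / data-reader loop exactly as in the first form ...
                writer.Flush();
                buffer.Position = 0;
                xpDoc = new XPathDocument(buffer);
            }
        }
        finally
        {
            if (ms != null)
                ms.Dispose();                    // only runs if the writer never took ownership
        }
        return xpDoc;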

    Read the article

  • XML Return from an Oracle Stored Procedure

    - by Tequila Jinx
    Unfortunately most of my DB experience has been with MSSQL which tends to hold your hand a lot more than Oracle. What I'm trying to do is fairly trivial in tSQL, however, pl/sql is giving me a headache. I have the following procedure: CREATE OR REPLACE PROCEDURE USPX_GetUserbyID (USERID USERS.USERID%TYPE, USERRECORD OUT XMLTYPE) AS BEGIN SELECT XMLELEMENT("user" , XMLATTRIBUTES(u.USERID AS "userid", u.companyid as "companyid", u.usertype as "usertype", u.status as "status", u.personid as "personid") , XMLFOREST( p.FIRSTNAME AS "firstname" , p.LASTNAME AS "lastname" , p.EMAIL AS "email" , p.PHONE AS "phone" , p.PHONEEXTENSION AS "extension") , XMLELEMENT("roles", (SELECT XMLAGG(XMLELEMENT("role", r.ROLETYPE)) FROM USER_ROLES r WHERE r.USERID = USERID AND r.ISACTIVE = 1 ) ) , XMLELEMENT("watches", (SELECT XMLAGG( XMLELEMENT("watch", XMLATTRIBUTES(w.WATCHID AS "id", w.TICKETID AS "ticket") ) ) FROM USER_WATCHES w WHERE w.USERID = USERID AND w.ISACTIVE = 1 ) ) ) AS "RESULT" INTO USERRECORD FROM USERS u LEFT JOIN PEOPLE p ON p.PERSONID = u.PERSONID WHERE u.USERID = USERID; END USPX_GetUserbyID; When executed, it should return an XML document with the following structure: <user userid="" companyid="" usertype="" status="" personid=""> <firstname /> <lastname /> <email /> <phone /> <extension /> <roles> <role /> </roles> <watches> <watch id="" ticket="" /> </watches> </user> When I execute the query itself, replacing the USERID parameter with a string and removing the "into" clause, the query runs fine and returns the expected structure. However, when the procedure attempts to execute the query, passing the results of the XMLELEMENT function into the USERRECORD output parameter, I get the following exception: Error report: ORA-01422: exact fetch returns more than requested number of rows ORA-06512: at "USPX_GETUSERBYID", line 4 ORA-06512: at line 3 01422. 00000 - "exact fetch returns more than requested number of rows" *Cause: The number specified in exact fetch is less than the rows returned. *Action: Rewrite the query or change number of rows requested I'm baffled trying to nail this down, and unfortunately my google-fu hasn't helped. I've found plenty of Oracle SQL|XML examples, but none that deal with XML returns from a procedure. Note: I know that an alternate method of retrieving XML using DBMS methods exists, however, it's my understanding that that functionality is deprecated in favor of SQL|XML.

    Read the article

  • How can I inject Javascript (including Prototype.js) in other sites without cluttering the global namespace?

    - by Daniel Magliola
    I'm currently on a project that is a big site that uses the Prototype library, and there is already a humongous amount of Javascript code. We're now working on a piece of code that will get "injected" into other people's sites (picture people adding a <script> tag in their sites) which will then run our code and add a bunch of DOM elements and functionality to their site. This will have new pieces of code, and will also reuse a lot of the code that we use on our main site. The problem I have is that it's of course not cool to just add a <script> that will include Prototype in people's pages. If we do that in a page that's already using ANY framework, we're guaranteed to screw everything up. jQuery gives us the option to "rename" the $ object, so it could handle this situation decently, except obviously for the fact that we're not using jQuery, so we'd have to migrate everything. Right now i'm contemplating a number of ugly choices, and I'm not sure what's best... Rewrite everything to use jQuery, with a renamed $ object everywhere. Creating a "new" Prototype library with only the subset we'd be using in "injected" code, and renaming $ to something else. Then again I'd have to adapt the parts of my code that would be shared somehow. Not using a library at all in injected code, to keep it as clean as possible, and rewriting the shared code to use no library at all. This would obviously degenerate into us creating our own frankenstein of a library, which is probably the worst case scenario ever. I'm wondering what you guys think I could do, and also whether there's some magic option that would solve all my problems... For example, do you think I could use something like Caja / Cajita to sandbox my own code and isolate it from the rest of the site, and have Prototype inside of there? Or am I completely missing the point with that? I also read once about a technique for bookmarklets, were you add your code like this: (function() { /* your code */ })(); And then your code is all inside your anonymous function and you haven't touched the global namespace at all. Do you think I could make one file containing: (function() { /* Full Code of the Prototype file here */ /* All my code that will run in the "other" site */ InitializeStuff_CreateDOMElements_AttachEventHandlers(); })(); Would that work? Would it accomplish the objective of not cluttering the global namespace, and not killing the functionality on a site that uses jQuery, for example? Or is Prototype too complex somehow to isolate it like that? (NOTE: I think I know that that would create closures everywhere and that's slower, but I don't care too much about performance, my code is not doing anything that complex)

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it will gather a list of batches and recursively call itself, in order to iterate over batches. In (pseudo-)code, it looks something like this: CREATE PROCEDURE spProcedure AS BEGIN IF @code = 0 BEGIN ... WHILE @@Fetch_Status=0 BEGIN EXEC spProcedure @code FETCH NEXT ... INTO @code END END ELSE BEGIN -- Disable indexes ... INSERT INTO table SELECT (...) -- Enable indexes ... Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, one of the indexes it uses is misdefined or disabled. In that case, I want to be able kill the procedure, truncate and recreate the resulting table, and try again. However, when I try and kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds. I did find a forum message hinting that another spid should be killed before the other one can start a rollback. But that didn't work for me either, plus I do not understand, why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should be having the same spid, right?) In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not waiting hours on my server sitting dead while pretending to be finishing a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how kill the process successfully without restarting the server?

    Read the article

  • architecture python question

    - by tom smith
    Hi. I'm creating a distributed crawling Python app. It consists of a master server and associated client apps that will run on client servers. The purpose of the client app is to run across a targeted site to extract specific data. The clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. Each client app looks something like this:

        main:
            parse initial url
            call function level1 (data1)

        function level1 (data)
            parse the url, for data1
            use the required xpath to get the dom elements
            call the next function: call level2 (data)

        function level2 (data2)
            parse the url, for data2
            use the required xpath to get the dom elements
            call the next function: call level3

        function level3 (data3)
            parse the url, for data3
            use the required xpath to get the dom elements
            call the next function: call level4

        function level4 (data)
            parse the url, for data4
            use the required xpath to get the dom elements

        at the final function...
            -- all the data is output, and eventually returned to the server
            -- at this point the data has elements from each function

    My question: given that the number of calls made to the child function by the current function varies, I'm trying to figure out the best approach. Each function essentially fetches a page of content and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page. If I run a client on a single box as a sequential process, it'll take a while, but the load on the box is rather small. I've thought of attempting to implement the child functions as threads spawned from the current function, but that could be a nightmare, as well as quickly bring the "box" to its knees! I've also thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, in a way that allows each client/function to be run directly from the master. This approach requires a bit of a rewrite, but it has a number of advantages: a good deal of redundancy, and speed. It would detect if a section of the process was crashing and restart from that point. But I'm not sure if it would be any faster... I'm writing the parsing scripts in Python, so any thoughts/comments would be appreciated. I can go into a great deal more detail, but I didn't want to bore anyone! Thanks! Tom

    Read the article

  • kXML (XmlPullParser) not hitting END_TAG

    - by Tejaswi Yerukalapudi
    Hello all, I'm trying to figure out a way to rewrite some of my XML parsing code. I'm currently working with kXML2 and here's my code - byte[] xmlByteArray; try { xmlByteArray = inputByteArray; ByteArrayInputStream xmlStream = new ByteArrayInputStream(xmlByteArray); InputStreamReader xmlReader = new InputStreamReader(xmlStream); KXmlParser parser = new KXmlParser(); parser.setInput(xmlReader); parser.nextTag(); while(true) { int eventType = parser.next(); String tag = parser.getName(); if(eventType == XmlPullParser.START_TAG) { System.out.println("****************** STARTING TAG "+tag+"******************"); if(tag == null || tag.equalsIgnoreCase("")) { continue; } else if(tag.equalsIgnoreCase("Category")) { // Gets the name of the category. String attribValue = parser.getAttributeValue(0); } } if(eventType == XmlPullParser.END_TAG) { System.out.println("****************** ENDING TAG "+tag+"******************"); } else if(eventType == XmlPullParser.END_DOCUMENT) { break; } } catch(Exception ex) { } My input XML is as follows - <root xmlns:sql="urn:schemas-microsoft-com:xml-sql" xmlns=""> <Category name="xyz"> <elmt1>value1</elmt1> <elmt2>value2</elmt2> </Category> <Category name="abc"> <elmt1>value1</elmt1> <elmt2>value2</elmt2> </Category> <Category name="def"> <elmt1>value1</elmt1> <elmt2>value2</elmt2> </Category> My problem briefly is, I'm expecting it to hit XmlPullParser.END_TAG when it encounters a closing xml tag. It does hit the XmlPullParser.START_TAG but it just seems to skip / ignore all the END_TAGs. Is this how is it's supposed to work? Or am I missing something? Any help is much appreciated, Teja.

    Read the article

  • How to make freelance clients understand the costs of developing and maintaining mature products?

    - by John
    I have a freelance web application project where the client requests new features every two weeks or so. I am unable to anticipate the requirements of upcoming features. So when the client requests a new feature, one of several things may happen: I implement the feature with ease because it is compatible with the existing platform I implement the feature with difficulty because I have to rewrite a significant portion of the platform's foundation Client withdraws request because it costs too much to implement against existing platform At the beginning of the project, for about six months, all feature requests fell under category 1) because the system was small and agile. But for the past six months, most feature implementation fell under category 2). The system is mature, forcing me to refactor and test everytime I want to add new modules. Additionally, I find myself breaking things that use to work, and fixing it (I don't get paid for this). The client is starting to express frustration at the time and cost for me to implement new features. To them, many of the feature requests are of the same scale as the features they requested six months ago. For example, a client would ask, "If it took you 1 week to build a ticketing system last year, why does it take you 1 month to build an event registration system today? An event registration system is much simpler than a ticketing system. It should only take you 1 week!" Because of this scenario, I fear feature requests will soon land in category 3). In fact, I'm already eating a lot of the cost myself because I volunteer many hours to support the project. The client is often shocked when I tell him honestly the time it takes to do something. The client always compares my estimates against the early months of a project. I don't think they're prepared for what it really costs to develop, maintain and support a mature web application. When working on a salary for a full time company, managers were more receptive of my estimates and even encouraged me to pad my numbers to prepare for the unexpected. Is there a way to condition my clients to think the same way? Can anyone offer advice on how I can continue to work on this web project without eating too much of the cost myself? Additional info - I've only been freelancing full time for 1 year. I don't yet have the high end clients, but I'm slowly getting there. I'm getting better quality clients as time goes by.

    Read the article

  • How to convert a 32bpp image to an indexed format?

    - by Ed Swangren
    So here are the details (I am using C#, BTW): I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format. I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an indexed format by these methods:

        // causes a "Parameter not valid" error
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed)

        // no error, but the resulting image is black due to information loss I assume
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed)

    I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels that have a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with, please let me know.

    EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish. We are scanning a tissue slide at high resolution (pathology application). I write the interface to the actual scanner. We use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image. The image is displayed next to a track bar. When the user moves the track bar (adjusting the line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I do this using an ImageAttributes and ColorMatrix object currently. When the user adjusts the track bar, I adjust the matrix. This does not give me per-pixel information, but the performance is very nice. I could use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like all pixels with a brightness value over 240 to be colored red. I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.
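
    A sketch of the palette idea (my own names; only the palette setup is shown): build an 8bpp indexed bitmap whose palette maps every index above the threshold to red and everything else to plain grayscale, so the red highlight comes for free once the index data is in place. Filling the index bytes from the 32bpp source still needs LockBits or equivalent:

        using System.Drawing;
        using System.Drawing.Imaging;

        static Bitmap CreateThresholdBitmap(int width, int height, int threshold /* e.g. 240 */)
        {
            var indexed = new Bitmap(width, height, PixelFormat.Format8bppIndexed);

            ColorPalette palette = indexed.Palette;      // Palette returns a copy
            for (int i = 0; i < 256; i++)
            {
                palette.Entries[i] = i > threshold
                    ? Color.Red                          // over-saturated values show as red
                    : Color.FromArgb(i, i, i);           // everything else stays grayscale
            }
            indexed.Palette = palette;                   // assign the modified copy back

            // TODO: copy the source brightness values into the bitmap's index bytes
            // (LockBits + Marshal.Copy), which is the part the question hopes to set up
            // once rather than redo on every trackbar change.

            return indexed;
        }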

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the called by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map. The uncontroversial (?) code that sets up the map is this: /** Set up the map from the given filename and position */ protected void open() throws IOException { // Set up buffer, is this all the flexibility we'll need? channel = new FileInputStream(file).getChannel(); MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size()); map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes? map = map1; // assumes the host writing the files is little-endian (x86), ought to be configurable map.order(java.nio.ByteOrder.LITTLE_ENDIAN); map.position(position); } and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny, the dev host is Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM, and are running with the same java heap settings. I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515 which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

    Read the article

  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to: case MyKeyword of 'CHIL': (code for CHIL); 'HUSB': (code for HUSB); 'WIFE': (code for WIFE); 'SEX': (code for SEX); else (code for everything else); end; Unfortunately the CASE statement can't be used like that for strings. I could use the straight IF THEN ELSE IF construct, e.g.: if MyKeyword = 'CHIL' then (code for CHIL) else if MyKeyword = 'HUSB' then (code for HUSB) else if MyKeyword = 'WIFE' then (code for WIFE) else if MyKeyword = 'SEX' then (code for SEX) else (code for everything else); but I've heard this is relatively inefficient. What I had been doing instead is: P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX '); case P of 1: (code for CHIL); 6: (code for HUSB); 11: (code for WIFE); 17: (code for SEX); else (code for everything else); end; This, of course is not the best programming style, but it works fine for me and up to now didn't make a difference. So what is the best way to rewrite this in Delphi so that it is both simple, understandable but also fast? (For reference, I am using Delphi 2009 with Unicode strings.) Followup: Toby recommended I simply use the If Then Else construct. Looking back at my examples that used a CASE statement, I can see how that is a viable answer. Unfortunately, my inclusion of the CASE inadvertently hid my real question. I actually don't care which keyword it is. That is just a bonus if the particular method can identify it like the POS method can. What I need is to know whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than: if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then The If Then Else equivalent does not seem better in this case being: if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then In Barry's comment to Kornel's question, he mentions the TDictionary Generic. I've not yet picked up on the new Generic collections and it looks like I should delve into them. My question here would be whether they are built for efficiency and how would using TDictionary compare in looks and in speed to the above two lines? In later profiling, I have found that the concatenation of strings as in: (' ' + MyKeyword + ' ') is VERY expensive time-wise and should be avoided whenever possible. Almost any other solution is better than doing this.

    Read the article

  • Exception opening TAdoDataset: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another

    - by Dave Falkner
    I've been trying to debug the following problem for several weeks now - this method is called from several places within the same datamodule, but this exception (from the subject line of this post) only occurs when integers for a certain purpose (pickup orders vs. orders that we ship through a carrier) are used - and don't ask me how the application can tell the difference between one integer's purpose and another! Furthermore, I cannot duplicate this issue on my machine - the error occurs on a warehouse machine but not my own development machine, even when working with the same production database. I have suspected an MDAC version conflict between the two machines, but have run a version checker and confirmed that both machines are running 2.8, and additionally have confirmed this by logging the TAdoDataset's .Version property at runtime. function TdmESShip.SecondaryID(const PrimaryID : Integer ): String; begin try with qESPackage2 do begin if Active then Close; LogMessage('-----------------------------------'); LogMessage('Version: ' + FConnection.Version); LogMessage('DB Info: ' + FConnection.Properties['Initial Catalog'].Value + ' ' + FConnection.Properties['Data Source'].Value); LogMessage('Setting the parameter.'); Parameters.ParamByName('ParameterName').Value := PrimaryID; LogMessage('Done setting the parameter.'); Open; Ninety-nine times out of 100 this logging code logs a successful operation as follows: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. Opened the dataset. But then whenever a "pickup" order is processed, this exception gets thrown whenever the dataset is opened: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. GetESPackageID() threw an exception. Type: EOleException, Message: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another for packageID 10813711 I've tried eliminating the parameter and have built the commandtext for this dataset programmatically, suspecting that some part of the TParameter's configuration might be out of whack, but the same error occurs under the same circumstances. I've tried every combination of TParameter properties that I can think of - this is the millionth TParameter I've created for my millionth dataset, and I've never encountered this error. I've even created a second dataset from scratch and removed all references to the original dataset in case some property of the original dataset in the .dfm might be corrupted, but the same error occurs under the same circumstances. The commandtext for this dataset is a simple select ValueA from TableName where ValueB = @ParameterB I'm about ready to do something extreme, such as writing a web service to look these values up - it feels right now as though I could destroy my machine, rebuild it, rewrite this entire application from scratch, and the application would still know to throw an exception whenever I try to look up a secondary value from a primary value, but only for pickup orders, and only from the one machine in the warehouse, but I'm probably missing something simple. So, any help anyone could provide would be greatly appreciated.

    Read the article

  • how to get latest entry from a table for an item and do arithmetic operation on it?

    - by I Like PHP
    I have the tables below:

        tbl_rcv_items
        st_id | item_id | stock_opening_qnty | stock_received_qnty | stock_rcvd_date
        14    | 1       | 0                  | 70                  | 2010-05-18
        15    | 16      | 0                  | 100                 | 2010-05-06
        16    | 10      | 0                  | 59                  | 2010-05-20
        17    | 14      | 0                  | 34                  | 2010-05-20
        20    | 1       | 70                 | 5                   | 2010-05-12

        tbl_issu_items
        issue_id | refer_issue_id | item_id | item_qntt | item_updated
        51       | 1              | 1       | 5         | 2010-05-18 19:34:29
        52       | 1              | 16      | 6         | 2010-05-18 19:34:29
        53       | 1              | 10      | 7         | 2010-05-18 19:34:29
        54       | 1              | 14      | 8         | 2010-05-18 19:34:29
        75       | 7              | 1       | 12        | 2010-05-18 19:40:52
        76       | 7              | 16      | 1         | 2010-05-18 19:40:52
        77       | 7              | 10      | 1         | 2010-05-18 19:40:52
        78       | 7              | 14      | 1         | 2010-05-18 19:40:52
        79       | 8              | 1       | 3         | 2010-05-19 11:28:50
        80       | 8              | 16      | 5         | 2010-05-19 11:28:50
        81       | 8              | 10      | 6         | 2010-05-19 11:28:50
        82       | 8              | 14      | 7         | 2010-05-19 11:28:51
        87       | 10             | 1       | 2         | 2010-05-19 12:51:03
        88       | 10             | 16      | 0         | 2010-05-19 12:51:03
        89       | 10             | 10      | 0         | 2010-05-19 12:51:03
        90       | 10             | 14      | 0         | 2010-05-19 12:51:03
        91       | 14             | 1       | 1         | 2010-05-19 18:43:58
        92       | 14             | 14      | 3         | 2010-05-19 18:43:58

        tbl_item_detail
        item_id | item_name
        1       | shirt
        2       | belt
        10      | ball pen
        14      | vim powder
        16      | pant

    Now I want the total available quantity for each item up to today, using both tables: total available quantity for an item = stock_opening_qnty + stock_received_qnty (the LATEST ENTRY from tbl_rcv_items for that item_id according to stock_rcvd_date) - SUM(item_qntt). For example, if I want to know the available quantity for item_id=1 up to today (25-05-2010), then it should be 70 + 5 (latest entry for that item_id till 25/5/2010) - 23 (issued till 25/5/2010) = 52. I wrote the query below:

        SELECT tri.item_id, tid.item_name,
               (tri.stock_opening_qnty + tri.stock_received_qnty) AS totalRcvQntt,
               SUM(tii.item_qntt) AS totalIsudQntt
        FROM tbl_rcv_items tri
        JOIN tbl_issu_items tii ON tii.item_id = tri.item_id
        JOIN tbl_item_detail tid ON tid.item_id = tri.item_id
        WHERE tri.stock_rcvd_date <= CURDATE()
        GROUP BY (tri.item_id)

    which results in:

        Array
        (
            [0] => Array
                (
                    [item_id] => 1
                    [item_name] => shirt
                    [totalRcvQntt] => 70
                    [totalIsudQntt] => 46
                )
            [1] => Array
                (
                    [item_id] => 10
                    [item_name] => ball pen
                    [totalRcvQntt] => 59
                    [totalIsudQntt] => 16
                )
            [2] => Array
                (
                    [item_id] => 14
                    [item_name] => vim powder
                    [totalRcvQntt] => 34
                    [totalIsudQntt] => 20
                )
            [3] => Array
                (
                    [item_id] => 16
                    [item_name] => pant
                    [totalRcvQntt] => 100
                    [totalIsudQntt] => 17
                )
        )

    In the above result, the total issued quantity for shirt (item_id=1) should be 23, whereas the result shows 46, because there are two rows for item_id=1 in tbl_rcv_items and I only need the latest one (the one whose stock_rcvd_date is less than tomorrow). Please tell me where I am making a mistake, or rewrite the query in a better way. Thanks a lot!

    Read the article
