Search Results

Search found 1004 results on 41 pages for 'layers'.

Page 31/41 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Why won't my code work in Ubuntu Server 11.10? Is it because of gd library?

    - by Derrick
    I get this error when running the following code:

        No such file found at "widgets/104-text.png"

    I know that the code works because it works on my other non-Ubuntu server. I do not know if it is the gd library or what. I tried both the bundled version and the non-bundled one, and neither makes this code work.

        $con = mysql_connect("localhost","user","abc123");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("satabase_name", $con);

        $productid2 = $this->product->id;
        $thename = mysql_query("SELECT * FROM pshop_product_lang WHERE id_product = '$productid2' LIMIT 1");
        $thename2 = mysql_fetch_array($thename);
        $string2 = $thename2['name'];
        $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2;

        $font = 4;
        $width = imagefontwidth($font) * strlen($string);
        $height = imagefontheight($font);
        $image = imagecreatetruecolor($width, $height);
        $white = imagecolorallocate($image, 255, 255, 255);
        $black = imagecolorallocate($image, 0, 0, 0);
        imagefill($image, 0, 0, $white);
        imagestring($image, $font, 0, 0, $string, $black);
        imagepng($image, 'widgets/' . $productid2 . '-text.png');

        $getimg110 = mysql_query("SELECT * FROM pshop_image WHERE id_product = '$productid2'");
        $gotimg110 = mysql_fetch_array($getimg110);
        $slash110 = addcslashes($gotimg110[id_image], '\0..\999999999999999999999');
        $str110 = str_replace('\\', '/', $slash110);
        $newimg110 = '<img src="img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg" />';

        include("conf.inc.php");
        include('ImageWorkshop.php');

        // Initialization of layer you need
        $pinguLayer = new ImageWorkshop(array(
            'imageFromPath' => 'widgets/background.png',
        ));
        $pinguLayer2 = new ImageWorkshop(array(
            'imageFromPath' => 'img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg',
        ));
        $pinguLayer3 = new ImageWorkshop(array(
            'imageFromPath' => 'widgets/' . $productid2 . '-text.png',
        ));

        // resize pingu layer
        $thumbWidth2 = 150; // px
        $thumbHeight2 = 150;
        $thumbWidth = 400; // px
        $thumbHeight = 200;
        $pinguLayer2->resizeInPixel($thumbWidth2, $thumbHeight2);
        $pinguLayer->resizeInPixel($thumbWidth, $thumbHeight);

        // Add 2 layers on pingu layer
        $pinguLayer->addLayerOnTop($pinguLayer2, null, null, 'LM');
        $pinguLayer->addLayerOnTop($pinguLayer3, 70, 25, 'MM');

        // Saving the result in a folder
        $pinguLayer->save("widgets/", $productid2 . ".gif", true, null, 95);

    The file path is correct; however, this part of the code is not creating the image as it is supposed to:

        $thename2 = mysql_fetch_array($thename);
        $string2 = $thename2['name'];
        $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2;
        $font = 4;
        $width = imagefontwidth($font) * strlen($string);
        $height = imagefontheight($font);
        $image = imagecreatetruecolor($width, $height);
        $white = imagecolorallocate($image, 255, 255, 255);
        $black = imagecolorallocate($image, 0, 0, 0);
        imagefill($image, 0, 0, $white);
        imagestring($image, $font, 0, 0, $string, $black);
        imagepng($image, 'widgets/' . $productid2 . '-text.png');

    Read the article

  • How to populate a generic list of objects in C# from SQL database

    - by developr
    I am just learning ASP.NET C# and trying to incorporate best practices into my applications. Everything that I read says to layer my applications into DAL, BLL, UI, etc., based on separation of concerns. Instead of passing DataTables around, I am thinking about using custom objects so that I am loosely coupled to my data layer and can take advantage of IntelliSense in VS. I assume these objects would be considered DTOs? First, where do these objects reside in my layers? BLL, DAL, other? Second, when populating from SQL, should I loop through a data reader to populate the list, or first fill a data table and then loop through the table to populate the list? I know you should close the database connection as soon as possible, but it seems like even more overhead to populate the data table and then loop through that for the list. Third, everything I see these days says use Linq2SQL. I am planning to learn Linq2SQL, but at this time I am working with a legacy database that doesn't have foreign keys set up and I do not have the ability to fix it atm. Also, I want to learn more about C# before I start getting into ORM solutions like NHibernate. At the same time I don't want to type out all the connection and SQL plumbing for every query. Is it OK to use the Enterprise Library DAAB for now?
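
    A minimal sketch of the reader-to-list approach weighed above, kept in a simple repository-style class; the Student DTO, table, and column names are assumptions for illustration only:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Hypothetical DTO living wherever the shared objects end up (often a
        // separate "entities" or "common" assembly referenced by both BLL and DAL).
        public class Student
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class StudentRepository
        {
            private readonly string _connectionString;

            public StudentRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            // Reads straight from the data reader into the list, so no intermediate
            // DataTable is built and the connection closes as soon as the loop ends.
            public List<Student> GetStudents()
            {
                var students = new List<Student>();
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand("SELECT Id, Name FROM Student", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            students.Add(new Student
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.GetString(1)
                            });
                        }
                    }
                }
                return students;
            }
        }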

    Read the article

  • Matplotlib pick event order for overlapping artists

    - by Ajean
    I'm hitting a very strange issue with matplotlib pick events. I have two artists that are both pickable and are non-overlapping to begin with ("holes" and "pegs"). When I pick one of them, during the event handling I move the other one to where I just clicked (moving a "peg" into the "hole"). Then, without doing anything else, a pick event from the moved artist (the peg) is generated even though it wasn't there when the first event was generated. My only explanation for it is that somehow the event manager is still moving through artist layers when the event is processed, and therefore hits the second artist after it is moved under the cursor. So then my question is - how do pick events (or any events for that matter) iterate through overlapping artists on the canvas, and is there a way to control it? I think I would get my desired behavior if it moved from the top down always (rather than bottom up or randomly). I haven't been able to find sufficient enough documentation, and a lengthy search on SO has not revealed this exact issue. Below is a working example that illustrates the problem, with PathCollections from scatter as pegs and holes:

        import matplotlib.pyplot as plt
        import sys

        class peg_tester():
            def __init__(self):
                self.fig = plt.figure(figsize=(3,1))
                self.ax = self.fig.add_axes([0,0,1,1])
                self.ax.set_xlim([-0.5,2.5])
                self.ax.set_ylim([-0.25,0.25])
                self.ax.text(-0.4, 0.15, 'One click on the hole, and I get 2 events not 1', fontsize=8)
                self.holes = self.ax.scatter([1], [0], color='black', picker=0)
                self.pegs = self.ax.scatter([0], [0], s=100, facecolor='#dd8800', edgecolor='black', picker=0)
                self.fig.canvas.mpl_connect('pick_event', self.handler)
                plt.show()

            def handler(self, event):
                if event.artist is self.holes:
                    # If I get a hole event, then move a peg (to that hole) ...
                    # but then I get a peg event also with no extra clicks!
                    offs = self.pegs.get_offsets()
                    offs[0,:] = [1,0]  # Moves left peg to the middle
                    self.pegs.set_offsets(offs)
                    self.fig.canvas.draw()
                    print 'picked a hole, moving left peg to center'
                elif event.artist is self.pegs:
                    print 'picked a peg'
                sys.stdout.flush()  # Necessary when in ipython qtconsole

        if __name__ == "__main__":
            pt = peg_tester()

    I have tried setting the zorder to make the pegs always above the holes, but that doesn't change how the pick events are generated, and particularly this funny phantom event.

    Read the article

  • Good examples of MapServer / OpenLayers

    - by MarkJ
    I want to convince some clients to use MapServer and OpenLayers. Please can anyone suggest attractive websites to show off the possibilities! The clients will be impressed by:

    - A density map (otherwise known as a heat map, colour-shaded grid coverage, contour plot...).
    - The ability for the user to download the underlying data for the density map, restricted to the area being viewed, in some format such as netCDF.
    - Standard OpenLayers stuff: zooming, panning, scale bar, overview map...
    - Different base layers. Could be WMS, Google, Bing...
    - Searching for a placename; the map is panned to display the place.

    MapServer.org seems to be down right now :( But from memory their examples didn't have the "wow" factor. The OpenLayers examples demonstrate only one or two features per example - I want something to wow the clients by showing all the capabilities in one example. PS If you have good examples that use some other open source tools, post them by all means. But just JavaScript please: the customer says no rich client.

    Read the article

  • Application/Server dependency mapping

    - by David Stratton
    I'm just curious as to whether such a tool exists (free, open source, or commercial but for a reasonable price) before I build it myself. We're looking for a simple solution to simplify taking web apps online and offline when a server is undergoing maintenance. The idea is that we would be able to mark a server as unavailable, and then mark all dependent applications (direct and indirect) as offline. Our first proof-of-concept is running, and we created an aspx page that lists, in a GridView, the various applications that have an App_Offline.html file with a friendly "Down for Maintenance" message. In the GridView, each app has a LinkButton that, when clicked, renames App_Offline.htm to App_Offline.html or vice versa to take the app online or offline. The next step is to set up all of our dependencies. For example, our store locator would be dependent on our web services, which in turn are dependent on our SQL Server. (That's a simple example. We can easily have several layers, or one app dependent on multiple servers, etc.) In this example, if the SQL Server goes down, we would need to drill down recursively to find all apps that depend on it, and then turn them off and on by renaming the App_Offline file appropriately. I realize this will be relatively simple to build, but it could be complex to manage. I'm sure we're not the first team to think of this concept, and I'm wondering if there are any open source tools, or if any of you have done something similar and can help us avoid pitfalls. Edit - Update: I found the category of software I'm looking for. It's called CMDB (Configuration Management Database), and it's generally more of a network admin type tool than a developer tool. I found some open source products in this category, but none written in .NET. I had considered moving this question to ServerFault.com when I realized I was looking for a network admin type tool, but since I'm looking for code and a modifiable solution I'll keep the question here.
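
    A minimal sketch of the recursive take-offline step described above; the AppNode class, the file layout, and the htm/html renaming convention are assumptions based on the question, not an existing tool (and it assumes the dependency graph has no cycles):

        using System.Collections.Generic;
        using System.IO;

        // Hypothetical node in the dependency graph: one web app plus the apps
        // that depend on it, directly or indirectly.
        public class AppNode
        {
            public string Name { get; set; }
            public string AppOfflineFile { get; set; }       // e.g. @"D:\sites\StoreLocator\App_Offline.htm"
            public List<AppNode> Dependents { get; set; }

            public AppNode() { Dependents = new List<AppNode>(); }
        }

        public static class MaintenanceToggler
        {
            // Renames App_Offline.htm to App_Offline.html (the "offline" state in the
            // question) for this app, then walks every dependent recursively.
            public static void TakeOffline(AppNode node)
            {
                string inactive = node.AppOfflineFile;       // ...\App_Offline.htm
                string active = inactive + "l";              // ...\App_Offline.html
                if (File.Exists(inactive))
                    File.Move(inactive, active);

                foreach (AppNode dependent in node.Dependents)
                    TakeOffline(dependent);
            }
        }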

    Read the article

  • CALayer won't display

    - by Paul from Boston
    I'm trying to learn how to use CALayers for a project and am having trouble getting sublayers to display. I created a vanilla View-based iPhone app in Xcode for these tests. The only real code is in the ViewController, which sets up the layers and their delegates. There is a delegate, DelegateMainView, for the viewController's view layer and a second, different one, DelegateStripeLayer, for an additional layer. The ViewController code is all in awakeFromNib:

        - (void)awakeFromNib
        {
            DelegateMainView *oknDelegate = [[DelegateMainView alloc] init];
            self.view.layer.delegate = oknDelegate;

            CALayer *newLayer = [CALayer layer];
            DelegateStripeLayer *sldDelegate = [[DelegateStripeLayer alloc] init];
            newLayer.delegate = sldDelegate;
            [self.view.layer addSublayer:newLayer];

            [newLayer setNeedsDisplay];
            [self.view.layer setNeedsDisplay];
        }

    The two different delegates are simply wrappers for the CALayer delegate method, drawLayer:inContext:, i.e.,

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
        {
            CGRect bounds = CGContextGetClipBoundingBox(context);
            ... do some stuff here ...
            CGContextStrokePath(context);
        }

    each a bit different. The layer, view.layer, is drawn properly but newLayer is never drawn. If I put breakpoints in the two delegates, the program stops in DelegateMainView but never reaches DelegateStripeLayer. What am I missing here? Thanks.

    Read the article

  • WCF Service Layer in n-layered application: performance considerations

    - by Marconline
    Hi all. When I went to university, teachers used to say that in a well-structured application you have a presentation layer, a business layer and a data layer. This is what I heard for more than 5 years. When I started working I discovered that this is true, but sometimes it is better to have more than just three layers. Two or three days ago I discovered this article by John Papa that explains how to use Entity Framework in a layered application. According to that article you should have:

    - UI Layer and Presentation Layer (Model View pattern)
    - Service Layer (WCF)
    - Business Layer
    - Data Access Layer

    The Service Layer is, to me, one of the best ideas I've heard since I started working: your UI is then completely "disconnected" from the Business and Data Layers. Now, when I went deeper into the provided source code, I began to have some questions. Can you help me in answering them?

    Question #0: is this a good enterprise application template in your opinion?
    Question #1: where should I host the service layer? Should it be a Windows Service or what else?
    Question #2: in the source code provided, the service layer exposes just an endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
    Question #3: if you agree with me on Question 2, which kind of binding would you use?

    Looking forward to hearing from you. Have a nice weekend! Marco
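
    As a rough illustration of questions #1-#3, a self-hosting sketch that exposes the same contract over wsHttpBinding (interoperable) and netTcpBinding (usually faster for .NET-to-.NET calls on the same network); IOrderService, OrderService, and the addresses are assumptions, not part of the article's code:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            string GetStatus(int orderId);
        }

        public class OrderService : IOrderService
        {
            public string GetStatus(int orderId) { return "Pending"; }
        }

        public static class ServiceLayerHost
        {
            public static void Main()
            {
                // Hosted in a console only to keep the sketch short; a Windows Service
                // host would run the same code from OnStart/OnStop.
                using (var host = new ServiceHost(typeof(OrderService)))
                {
                    host.AddServiceEndpoint(typeof(IOrderService),
                        new WSHttpBinding(), "http://localhost:8080/orders");    // interoperable
                    host.AddServiceEndpoint(typeof(IOrderService),
                        new NetTcpBinding(), "net.tcp://localhost:8081/orders"); // intranet performance
                    host.Open();

                    Console.WriteLine("Service layer running. Press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }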

    Read the article

  • Mimic CALayer shadow properties found in iPhone OS 3.2 for OS 3.1

    - by niblha
    The CALayer shadow properties like shadowOffset, shadowRadius, shadowColor are not available in iPhone OS versions below 3.2, and I'm wondering how I could mimic that functionality for use with 3.1 and below. I want to use this to be able to add drop shadows to UIViews in a clean way, so that the shadows are drawn at layer level somehow, and not by drawing them in a view's -(void)drawRect:(CGRect)rect method, which requires shrinking the actual view's frame to accommodate the shadow. (This shrinking approach has been proposed in the other UIView drop shadow related questions I found here on SO.) I was thinking a layered approach would be cleaner. For example, I tried subclassing CALayer and adding a separate shadow layer as a sublayer, but then that would be drawn on top of whatever was drawn in the drawRect: method of the UIView that had the main layer as its backing layer. I've also tried implementing the subclassed CALayer's drawInContext: something like this:

        - (void)drawInContext:(CGContextRef)ctx
        {
            // code to draw shadow for a frame the size of the layer's frame
            [super drawInContext:ctx];
        }

    But then the shadow is still clipped to the current clipping bounding box of the context, which seems to be the layer's own frame. I also had some idea of redirecting the drawing of the main layer to a sublayer, which would be placed above another sublayer that had the shadow drawn onto it. Then I would probably get rid of the clipping and the shadow would be farthest away. But I couldn't really wrap my head around how I would do that, and it doesn't really feel like a clean approach. Any ideas on how to go about this? Just to make clear how my UIView drop shadow related question is different from the other ones I found here on SO: I do not want to shrink the actual drawing frame of a UIView to accommodate a shadow. I want it to somehow be on a separate layer in the background, without being clipped.

    Read the article

  • NHibernate Generators

    - by Dan
    What is the best tool for generating entity classes and/or hbm files and/or SQL scripts for NHibernate? The list below is from http://www.hibernate.org/365.html. Which is the best, and why?

    - Moregen: Free, open source (GPL) O/R generator that can merge into existing Visual Studio projects. Also merges changes to generated classes.
    - NConstruct Lite: Free tool for generating NHibernate O/R mapping source code. Supports different databases (Microsoft SQL Server, Oracle, Access).
    - GENNIT NHibernate Code Generator: Free/commercial Web 2.0 code generation of NHibernate code using a WYSIWYG online UML designer.
    - GenWise Studio with NHibernate Template: Commercial product; imports your existing database and generates all XML and classes, including factories. It can also generate an ASP.NET web application for your NHibernate BO layer automatically.
    - HQL Analyzer and hbm.xml GUI Editor
    - ObjectMapper by Mats Helander is a mapping GUI with NHibernate support.
    - MyGeneration is a template-based code generator GUI. Its template library includes templates for generating mapping files and classes from a database.
    - AndroMDA is an open-source code generation framework that uses Model Driven Architecture (MDA) to transform UML models into deployable components. It supports generation of data access layers that use NHibernate as their persistence framework.
    - CodeSmith Template for NH
    - NHibernate Helper Kit is a VS2005 add-in to generate classes and mapping files.
    - NConstruct - Intelligent Software Factory: Commercial product; full .NET C# source code generation for all tiers of the information system through a simple wizard procedure. O/R mapping based on NHibernate. For both WinForms and ASP.NET 2.0.

    Read the article

  • C# using namespace directive in nested namespaces

    - by MoSlo
    Right, I've usually used 'using' directives as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib
        {
            //awesome award winning class declarations making use of Linq
        }

    I've recently seen examples such as:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib
        {
            //awesome award winning class declarations making use of Linq

            namespace DataLibrary
            {
                using System.Data;
                //Data access layers and whatnot
            }
        }

    Granted, I understand that I can put a using directive inside of my namespace declaration. Such a thing makes sense to me if your namespaces are in the same root (they stay organized):

        System;

        namespace 1 {}

        namespace 2
        {
            System.Data;
        }

    But what of nested namespaces? Personally, I would leave all using declarations at the top where you can find them easily. Instead, it looks like they're being spread all over the source file. Is there a benefit to using directives being used this way in nested namespaces, such as memory management or the JIT compiler?

    Read the article

  • WCF for a shared data access

    - by Audrius
    Hi all, I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved: a web service needs to be accessible from multiple clients simultaneously, and the service needs to return a result from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we are speaking of a couple of thousand or more queries per minute. My initial draft approach was to use a Windows service as a WCF host, with the service contract implemented by a class that is decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) and that has a list object and custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data - multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right". What I really, really want is a Windows service running an instance of the IP-list-holding container class object, a second service running the WCF service contract implementation, and a way for the latter to query the former in a nice way with minimal blocking. Using another WCF channel would not really take me far away from the initial draft implementation, or would it? What approach would you take? The project is still in a very early stage, so a complete design re-do is not out of the question. All ideas are appreciated. Thanks! UPDATE: The data set will be changed dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that will trigger data cleanup every 10-15 minutes according to some rules. UPDATE 2: a separate benchmark project will be kicked off that should use MSSQL as a data backend (instead of an in-memory list).
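
    A minimal sketch of the initial draft described above - one singleton service instance, concurrent calls, and a reader/writer lock around the shared set; the contract name, methods, and string-based IPs are assumptions:

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.Threading;

        [ServiceContract]
        public interface IIpValidation
        {
            [OperationContract]
            bool IsListed(string ip);

            [OperationContract]
            void Add(string ip);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class IpValidationService : IIpValidation
        {
            private readonly HashSet<string> _addresses = new HashSet<string>();
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            public bool IsListed(string ip)
            {
                _lock.EnterReadLock();               // many validation reads can run in parallel
                try { return _addresses.Contains(ip); }
                finally { _lock.ExitReadLock(); }
            }

            public void Add(string ip)
            {
                _lock.EnterWriteLock();              // adds and scheduled cleanup are exclusive
                try { _addresses.Add(ip); }
                finally { _lock.ExitWriteLock(); }
            }
        }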

    Read the article

  • Open Source Web Frameworks : Security

    - by trappedIntoCode
    How secure are popular open source web frameworks? I am particularly interested in popular frameworks like Rails and Django. If I am building a site which is going to do heavy e-commerce, is it OK to use frameworks like Django and Satchmo? Is security compromised because of their open architecture? I know being open source does not mean being downright open to hackers - Linux uses a superb authentication mechanism - but the web is a different game. What can be done in this regard? UPDATE: Thanks for the answers, guys. I understand that I will have to find a suitable hosting service for a secure e-commerce application and that additional layers of security will be needed. I understand that Django and Rails have been designed keeping security aspects in mind, covering the most common forms of attack like XSS, injection, etc. (the Django book has a chapter on security). I was expecting comments from security gurus. If you are a security guru, would you recommend that an important site, which is likely going to be popular, be built on Django or Rails?

    Read the article

  • Why is "Fixup" needed for Persistence Ignorant POCO's in EF 4?

    - by Eric J.
    One of the much-anticipated features of Entity Framework 4 is the ability to use POCO (Plain Old CLR Objects) in a Persistence Ignorant manner (i.e. they don't "know" that they are being persisted with Entity Framework vs. some other mechanism). I'm trying to wrap my head around why it's necessary to perform association fixups and use FixupCollection in my "plain" business object. That requirement seems to imply that the business object can't be completely ignorant of the persistence mechanism after all (in fact the word "fixup" sounds like something needs to be fixed/altered to work with the chosen persistence mechanism). Specifically I'm referring to the Association Fixup region that's generated by the ADO.NET POCO Entity Generator, e.g.:

        #region Association Fixup

        private void FixupImportFile(ImportFile previousValue)
        {
            if (previousValue != null && previousValue.Participants.Contains(this))
            {
                previousValue.Participants.Remove(this);
            }

            if (ImportFile != null)
            {
                if (!ImportFile.Participants.Contains(this))
                {
                    ImportFile.Participants.Add(this);
                }
                if (ImportFileId != ImportFile.Id)
                {
                    ImportFileId = ImportFile.Id;
                }
            }
        }

        #endregion

    as well as the use of FixupCollection. Other common persistence-ignorant ORMs don't have similar restrictions. Is this due to fundamental design decisions in EF? Is some level of non-ignorance here to stay even in later versions of EF? Is there a clever way to hide this persistence dependency from the POCO developer? How does this work out in practice, end-to-end? For example, I understand support was only recently added for ObservableCollection (which is needed for Silverlight and WPF). Are there gotchas in other software layers from the design requirements of EF-compatible POCO objects?

    Read the article

  • How do I keep a CALayer, sublayer of a CATiledLayer, from changing its scale after a zoom?

    - by David
    I have a CATiledLayer that is used to display a PDF page (this CATiledLayer is the layer type of my UIView, which is a subview of a UIScrollView). I want to add overlay markers on this page, so I add a sublayer to my CATiledLayer. This sublayer again hosts the different markers' layers and acts as a grouping layer. So graphically, I have (keep in mind that I have multiple markers which are CALayers also, this is ascii art after all):

        pdf page (CATiledLayer)
        ----------------------
        | CALayer            |
        |   +---------+      |
        |   | +----+  |      |
        |   | |mker|  |      |
        |   | +----+  |      |
        |   +---------+      |
        |                    |
        ----------------------

    I have set up the canonical drawLayer:inContext: in my view for drawing the PDF. When I zoom in to get more detail, the PDF gets rendered correctly, but the markers get scaled. No matter what I do to the bounds of the CALayer, my markers always become bigger and appear jagged. I would like to have the markers always be the same size as when they were initialized and first shown when the view was drawn. Is this possible? Or am I using a wrong approach? Should I do special drawing for my contained CALayer in the drawLayer:inContext: message? As you see, there are things that I am missing to resolve my problem. Thank you for any help you provide.

    Read the article

  • How to serve tiff WMS imagery through GeoServer

    - by mikem419
    Ok, so I am new to the GeoServer/database world. I am a student intern and I have been given the task of setting up a WMS using GeoServer. I have never done any database work before this, so bear with me if my questions leave out important information. I am using GeoServer 2.0.1 in standalone mode (downloaded using Jetty) with PostgreSQL 8.4 installed. I went through the nyc_roads and nyc_buildings install demo in the GeoServer documentation, but I still do not understand how I should go about serving up some test images. I noticed that the nyc_roads setup included a .sql file that was responsible for setting up the nyc_buildings database. I do not know how/where this file was generated. Our test images are .tiff and .jpeg. I have successfully been able to do a WMS call on the local GeoServer machine, and have opened the included demo imagery. I now wish to add these .tiff and .jpeg images to GeoServer and access them through WMS. I have tried copying the images to the GeoServer data directory, adding a new data store, and adding layers, but I always receive an error regarding the "input stream." Again, I am very sorry if I am leaving out a lot of vital information; this is as much as I know. Thanks!

    Read the article

  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation... I will have a Windows application and a web application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the DLL (data layer) to which it will create a reference at runtime (is this the best approach?). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, they wouldn't need to switch between the two.

    Projects:

    - MyCompany.Common.dll - contains interfaces; all other projects have a reference to this one.
    - MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
    - MyCompany.Web.dll - website project, references MyCompany.Business.dll
    - MyCompany.Business.dll - business layer, references MyCompany.Data.* (at runtime)
    - MyCompany.Data.AccountingSys1.dll - data layer for accounting system 1
    - MyCompany.Data.AccountingSys2.dll - data layer for accounting system 2

    The project MyCompany.Common.dll would contain all the interfaces; every other project would have a reference to this one:

        Public Interface ICompany
            ReadOnly Property Id() As Integer
            Property Name() As String
            Sub Save()
        End Interface

        Public Interface ICompanyFactory
            Function CreateCompany() As ICompany
        End Interface

    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:

        Public Class Company
            Implements ICompany

            Protected _id As Integer
            Protected _name As String

            Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
                Get
                    Return _id
                End Get
            End Property

            Public Property Name As String Implements MyCompany.Common.ICompany.Name
                Get
                    Return _name
                End Get
                Set(ByVal value As String)
                    _name = value
                End Set
            End Property

            Public Sub Save() Implements MyCompany.Common.ICompany.Save
                Throw New NotImplementedException()
            End Sub
        End Class

        Public Class CompanyFactory
            Implements ICompanyFactory

            Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
                Return New Company()
            End Function
        End Class

    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:

        Public Class Companies
            Public Shared Function CreateCompany() As ICompany
                Dim factory As New MyCompany.Data.CompanyFactory
                Return factory.CreateCompany()
            End Function
        End Class

    Any opinions/suggestions would be greatly appreciated.
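
    As a rough sketch of the runtime-selected data layer described above (shown in C#; the same pattern works in VB.NET), the business layer could resolve the factory through reflection from a config setting - the appSetting keys and type names here are assumptions:

        using System;
        using System.Configuration;
        using System.Reflection;

        public static class CompanyFactoryLoader
        {
            // e.g. <add key="DataLayerAssembly" value="MyCompany.Data.AccountingSys1"/>
            //      <add key="CompanyFactoryType" value="MyCompany.Data.AccountingSys1.CompanyFactory"/>
            public static ICompanyFactory Create()
            {
                string assemblyName = ConfigurationManager.AppSettings["DataLayerAssembly"];
                string typeName = ConfigurationManager.AppSettings["CompanyFactoryType"];

                Assembly dataLayer = Assembly.Load(assemblyName);
                Type factoryType = dataLayer.GetType(typeName, true);   // throws if the type is missing
                return (ICompanyFactory)Activator.CreateInstance(factoryType);
            }
        }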

    Read the article

  • multithreading in c#

    - by Lalit Dhake
    Hi, I have a console application. In it I have a process that fetches data from the database through different layers (business and data access) and stores the fetched data in the respective objects. For example, if data is fetched for a student, it is assigned to a Student object; the same goes for School. Then a delegate calls a certain method that generates output as required. This process will execute many times, say 10 times, and I want those executions to run simultaneously rather than one starting only after the previous one finishes: after starting the 1st process, the 2nd, 3rd, ... 10th must start as well. In other words, it should be multithreaded. How can I achieve this? Will it give me errors when opening and closing the database connection? I have tried this, but when the 1st thread starts, the data fetched for thread 1 is stored in its respective (Student, School) objects; then, when the 2nd thread starts at the same time, the data in the 1st thread's objects changes while control is flowing through the program. What do I have to do?
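
    A minimal sketch of starting the ten runs in parallel; the key point for the symptom described is that each thread works on its own Student/School instances (and its own connection inside the data layer) instead of shared fields. ProcessRunner, FetchStudent, FetchSchool and GenerateOutput are hypothetical stand-ins for the existing layers:

        using System.Collections.Generic;
        using System.Threading;

        public class Student { }
        public class School { }

        public class ProcessRunner
        {
            public void Run(int runId)
            {
                // New objects per run - nothing here is shared between threads.
                Student student = FetchStudent(runId);
                School school = FetchSchool(runId);
                GenerateOutput(student, school);
            }

            // Stand-ins for the existing business/data-access calls.
            private Student FetchStudent(int runId) { return new Student(); }
            private School FetchSchool(int runId) { return new School(); }
            private void GenerateOutput(Student student, School school) { }
        }

        public static class Program
        {
            public static void Main()
            {
                var threads = new List<Thread>();
                for (int i = 1; i <= 10; i++)
                {
                    int runId = i;   // copy the loop variable for the closure
                    var thread = new Thread(() => new ProcessRunner().Run(runId));
                    threads.Add(thread);
                    thread.Start();  // all ten runs start without waiting for each other
                }
                threads.ForEach(t => t.Join());   // wait for every run to finish
            }
        }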

    Read the article

  • Conditional Drag and Drop Operations in Flex/AS3 Tree

    - by user163757
    Good day everyone. I am currently working with a hierarchical tree structure in AS3/Flex, and want to enable drag and drop capabilities under certain conditions:

    1. Only parent/top level nodes can be moved
    2. Parent/top level nodes must remain at this level; they can not be moved to child nodes of other parent nodes

    Using the dragEnter event of the tree, I am able to handle condition 1 easily:

        private function onDragEnter(event:DragEvent):void
        {
            // only parent nodes (map layers) are moveable
            event.preventDefault();
            if (toc.selectedItem.hasOwnProperty("layer"))
                DragManager.acceptDragDrop(event.target as UIComponent);
            else
                DragManager.showFeedback(DragManager.NONE);
        }

    Handling the second condition is proving to be a bit more difficult. I am pretty sure the dragOver event is the place for logic. I have been experimenting with calculateDropIndex, but that always gives me the index of the parent node, which doesn't help check if the potential drop location is acceptable or not. Below is some pseudo code of what I am looking to accomplish:

        private function onDragOver(e:DragEvent):void
        {
            // if potential drop location has parents
            //     dont allow drop
            // else
            //     allow drop
        }

    Can anyone provide advice how to implement this?

    Read the article

  • VB.Net Custom Object Master-Detail Data Binding

    - by clawson
    Since beginning to use VB.Net some years ago I have become slowly familiar with using the data binding features of .NET; however, I often find myself bewildered by its behavior, and instead of discovering the correct way it should work I find some dirty workaround to suit my needs and continue on. Needless to say, my problems continue to arise. I am using custom objects as the data sources for my controls and often entire forms. I find it frustrating to separate business logic and the graphical interface. (That could be a new question entirely.) So for a lot of objects I generate a form which has the DataBindingSource for the object. When I create each form with the New constructor I explicitly pass to it the object to which it should be bound, and then set this passed object as the DataSource of the BindingSource. (That's a mouthful!) Now the master object (say, bound to each form) often contains a List of objects which I like to have displayed in a DataGridView. I (sometimes) create and modify these child objects in their own form (again creating a databind the same way as the master form) but when I add them to the List in the master object the DataGridView won't update with the new items. So my question really has a few layers:

    1. How can I easily/efficiently/correctly update this DataGridView with the list of detail objects when I add them to the list of the master object?
    2. Is this approach to data binding good/viable?
    3. What's the best way to separate business logic from the graphical interface?

    Thanks for the help!
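
    A sketch of the notification piece behind question 1, shown in C# (the same types exist in VB.NET). Swapping the master's List(Of T) for a BindingList(Of T) is an assumption on my part, not the asker's existing design; the point is that BindingList raises list-changed events the grid listens to, so added detail objects appear without rebinding. Order/OrderLine are hypothetical names:

        using System.ComponentModel;
        using System.Windows.Forms;

        public class OrderLine
        {
            public string Product { get; set; }
            public int Quantity { get; set; }
        }

        public class Order
        {
            // BindingList<T> implements IBindingList, which DataGridView binding observes.
            public BindingList<OrderLine> Lines { get; private set; }

            public Order() { Lines = new BindingList<OrderLine>(); }
        }

        public class OrderForm : Form
        {
            private readonly Order _order = new Order();
            private readonly DataGridView _grid = new DataGridView { Dock = DockStyle.Fill };

            public OrderForm()
            {
                Controls.Add(_grid);
                _grid.DataSource = _order.Lines;   // bind the grid straight to the BindingList

                // Anything added later (e.g. from a detail form) shows up automatically.
                _order.Lines.Add(new OrderLine { Product = "Sample", Quantity = 1 });
            }
        }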

    Read the article

  • Feasibility of using Silverlight for web and windows client with common code base for data intensive

    - by Kabeer
    Hello. Recently in a conversation, someone suggested that I make use of Silverlight if I am targeting a web client and a Windows client for the same application. This would cut down my effort for supporting the contrast between both presentation layers. Mine is a product that will be deployed in enterprises, and both web and Windows clients are desirable. With the above context, I have a few queries:

    - Is it advisable to adopt the recommended approach, and is this approach becoming a trend?
    - Besides some configuration and deployment tweaking, will this significantly reduce effort on the presentation layer?
    - Is there a possibility that my future prospects (for this product) will resist a Silverlight footprint?
    - Will I be able to make use of the ASP.NET MVC pattern?
    - Will there be any performance implication for the web client?
    - Will Silverlight support incremental load of controls?
    - If my back-end includes SSRS, will I be able to harness all its front-end features with Silverlight?
    - Will I be able to support additional devices with the same code base in future?

    Mine is a very data intensive application from both a data entry and a reporting perspective. Is it advisable to use 3rd party controls (like Telerik) for improved user experience and developer productivity? Are there any professional quality open source Silverlight controls (libraries) available? Further, I seek information on best practices in the context I shared above.

    Read the article

  • Time to start returning IQueryable<T> instead of IList<T> to my Web UI / Web API Layer?

    - by JohnnyO
    I've got a multi-layer application that starts with the repository pattern for all data access, and it returns IQueryable to the services layer. The services layer, which includes all of the business logic, returns IList to the controllers (note: I'm using ASP.NET MVC for the UI layer). The benefit of returning IQueryable in the data access layer is that it allows my repositories to be extremely simple and the database queries to be deferred. However, I'm triggering the database queries in my services layer so that my unit tests are more reliable and I don't give the controllers the flexibility to reshape my queries. However, I've recently encountered several situations where deferring the execution of queries down to the controllers would have been significantly more performant, because the controllers had to do some projections on the data that were UI specific. Additionally, with the emergence of things like OData, I was starting to wonder if end points (e.g. web UIs or web APIs) should be working directly with IQueryable. What are your thoughts? Is it time to start returning IQueryable from the services layer to the UI layer? Or stick with IList? This thread here: http://stackoverflow.com/questions/718624/to-return-iqueryablet-or-not-return-iqueryablet seems to vouch for returning IList to the UI layers, but I was wondering if things are changing because of new emerging technologies and techniques.
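
    A small sketch of the trade-off being weighed, with hypothetical Customer/ICustomerRepository types; the repository is assumed to hand back LINQ-to-SQL entities:

        using System.Collections.Generic;
        using System.Linq;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public bool IsActive { get; set; }
        }

        public interface ICustomerRepository
        {
            IQueryable<Customer> Customers { get; }
        }

        public class CustomerService
        {
            private readonly ICustomerRepository _repository;

            public CustomerService(ICustomerRepository repository)
            {
                _repository = repository;
            }

            // Option A: the service materializes the results, so the query runs here
            // and a controller can only reshape the list in memory.
            public IList<Customer> GetActiveCustomers()
            {
                return _repository.Customers.Where(c => c.IsActive).ToList();
            }

            // Option B: the service keeps returning IQueryable, so a controller-level
            // projection or paging clause is composed into the same SQL statement and
            // executed only when the result is enumerated.
            public IQueryable<Customer> QueryActiveCustomers()
            {
                return _repository.Customers.Where(c => c.IsActive);
            }
        }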

    Read the article

  • Extract <name> attribute from KML

    - by Ozaki
    I am using OpenLayers for a mapping service, in which I have several KML layers that use KML feeds from the server to populate data on the map. It currently plots images / points / vector lines & shapes, but on these points it will not add a label with the value of the <name> for the placemark in the KML. What I have currently tried is:

        ////////////////////KML Feed for * Layer//
        var surveylinelayer = new OpenLayers.Layer.Vector("First KML Layer", {
            projection: new OpenLayers.Projection("EPSG:4326"),
            strategies: [new OpenLayers.Strategy.Fixed()],
            protocol: new OpenLayers.Protocol.HTTP({
                url: firstKMLURL,
                format: new OpenLayers.Format.KML({
                    extractStyles: true,
                    extractAttributes: true
                })
            }),
            styleMap: new OpenLayers.StyleMap({ "default": KMLStyle })
        });

    then the style as follows:

        var KMLStyle = new OpenLayers.Style({
            //label: "${name}", // This method will display nothing
            fillOpacity: 1,
            pointRadius: 10,
            fontColor: "#7E3C1C",
            fontSize: "13px",
            fontFamily: "Courier New, monospace",
            fontWeight: "strong",
            labelXOffset: "0",
            labelYOffset: "-15"
        }, {
            //dynamic label
            context: {
                label: function(feature) {
                    return "Feature Name: " + feature.attributes.name; // also displays nothing
                }
            }
        });

    Example of the KML:

        <Placemark>
            <name>POI1</name>
            <Style>
                <LabelStyle>
                    <color>ffffffff</color>
                </LabelStyle>
            </Style>
            <Point>
                <coordinates>0.000,0.000</coordinates>
            </Point>
        </Placemark>

    When debugging I just hit "feature is undefined" and am unsure why it would be undefined in this instance.

    Read the article

  • Dropdown menu disappears in IE7

    - by Justine
    A weird problem with a dropdown menu in IE7: http://screenr.com/SNM - the dropdown disappears when the mouse moves to a part that hovers above other layers. The HTML structure looks like this:

        <div class="header">
            <ul class="nav">
                <li><a href="">item</a>
                    <ul><li><a href="">sub-item</a></li></ul>
                </li>
            </ul>
        </div><!-- /header-->
        <div class="featured"></div>
        <div class="content"></div>

    The sub-menu is positioned absolutely and has visibility:hidden, then it's set to visible using jQuery, like so:

        $(".header ul.nav li").hover(function(){
            $(this).addClass("hover");
            $('ul:first',this).css('visibility', 'visible');
        }, function(){
            $(this).removeClass("hover");
            $('ul:first',this).css('visibility', 'hidden');
        });

    I had a problem with the dropdown hiding under other content in IE7, fixed easily by giving the z-index to its parent and other divs:

        *:first-child+html .header {
            position: relative;
            z-index: 2 !important;
        }
        *:first-child+html .content,
        *:first-child+html .main,
        *:first-child+html .primary
        *:first-child+html .featured {
            position: relative;
            z-index: 1 !important;
        }

    Now, I have no idea why the menu disappears when hovered over other divs. You can view the site live here: http://dev.gentlecode.net/ama/ubezpieczenia.html I would love any help, been staring at this code for ages now without any solution. I guess it's just me tunnel visioning already... Thanks in advance for any help!

    Read the article

  • General N-Tier Architecture Question

    - by whatispunk
    In an N-Tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies - BusinessLogicLayer.dll and DataAccessLayer.dll - to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc., each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP, and it doesn't seem so cut and dried as this. There are lots of opinions about where the business logic is supposed to be put in MVP, and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ-to-SQL to talk to the database, WCF services to define data contract interfaces for the business logic layer, and then MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project; most of the LINQ stuff is already done, so it's stuck. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly, etc. This sort of makes sense in my head, but it's getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!
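
    For the testability goal mentioned above, a minimal sketch of one way to shape the two layers so MSTest/Rhino Mocks can stub the data access side - the business class depends on an interface rather than on a concrete CustomerDataAccess. The names are illustrative assumptions, not a prescribed structure:

        using System.Collections.Generic;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Defined in (or alongside) the data access assembly.
        public interface ICustomerDataAccess
        {
            IList<Customer> GetCustomers();
        }

        // Business layer class: the dependency comes in through the constructor,
        // so a unit test can pass a Rhino Mocks stub instead of hitting the database.
        public class CustomerBusinessLogic
        {
            private readonly ICustomerDataAccess _dataAccess;

            public CustomerBusinessLogic(ICustomerDataAccess dataAccess)
            {
                _dataAccess = dataAccess;
            }

            public IList<Customer> GetCustomersForDisplay()
            {
                // business rules/validation would live here
                return _dataAccess.GetCustomers();
            }
        }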

    Read the article

  • Design: Website calling a webservice on the same machine

    - by Chris L
    More of a design/conceptual question. At work the decision was made to have our data access layer be called through web services. So our website would call the web services for any/all data to and from the database. Both the website and the web services will be on the same machine (so no trip across the wire), but the database is on a separate machine (so that would require a trip across the wire regardless). This is all in-house; the website, web services, and database are all within the same company (AFAIK, the web services won't be reused by any other party). To the best of my knowledge, the website will open a port to the web services, and the web services will in turn open another port and go across the wire to the database server to get/submit the data. The trip across the wire can't be avoided, but I'm concerned about the web services standing in the middle. I do agree there needs to be distinct layers between the functionality (such as business layer, data access layer, etc...), but this seems overly complex to me. I'm also sensing there will be some performance problems down the line. It seems to me it would be better to have the (DAL) assemblies referenced directly within the solution, thus negating the first port-to-port connection. Any thoughts (or links) both for and against this idea would be appreciated. P.S. We're a .NET shop (migrating from VB to C# 3.5).

    Read the article
