Search Results

Search found 7442 results on 298 pages for 'dynamic allocation'.

  • UITableViewCell rounded corners and clip subviews

    - by Brenden
    I can't find anything anywhere(search engines, docs, here, etc) that shows how to create rounded corners (esp. in a grouped table view) on an element that also clips the subviews. I have code that properly creates a rounded rectangle out of a path with 4 arcs(the rounded corners) that has been tested in the drawRect: method in my subclassed uitableviewcell. The issue is that the subviews, which happen to be uibuttons with their internal uiimageviews, do no obey the CGContextClip() that the uitableviewcell obeys. Here is the code: - (void)drawRect:(CGRect)rect { CGContextRef context = UIGraphicsGetCurrentContext(); CGFloat radius = 12; CGFloat width = CGRectGetWidth(rect); CGFloat height = CGRectGetHeight(rect); // Make sure corner radius isn't larger than half the shorter side if (radius > width/2.0) radius = width/2.0; if (radius > height/2.0) radius = height/2.0; CGFloat minx = CGRectGetMinX(rect) + 10; CGFloat midx = CGRectGetMidX(rect); CGFloat maxx = CGRectGetMaxX(rect) - 10; CGFloat miny = CGRectGetMinY(rect); CGFloat midy = CGRectGetMidY(rect); CGFloat maxy = CGRectGetMaxY(rect); [[UIColor greenColor] set]; CGContextBeginPath(context); CGContextMoveToPoint(context, minx, midy); CGContextAddArcToPoint(context, minx, miny, midx, miny, radius); CGContextAddArcToPoint(context, maxx, miny, maxx, midy, radius); CGContextAddArcToPoint(context, maxx, maxy, midx, maxy, radius); CGContextAddArcToPoint(context, minx, maxy, minx, midy, radius); CGContextClip(context); CGContextFillRect(context, rect); [super drawRect:rect]; } Because this specific case is static(only shows in 1 specific row of buttons), I can edit the images being used for the buttons to get the desired effect. HOWEVER, I have another case that is dynamic. Specifically, a grouped table with lots of database-driven results that will show photos that may be in the first or last row with rounded corners and thus needs to be clipped). So, is it possible to create a CGContextClip() that also clips the subviews? If so, how?
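
    A hedged sketch of a common alternative: instead of clipping in drawRect:, round and clip the cell's contentView layer, which does clip subviews such as the buttons. The class name, radius and layoutSubviews placement are illustrative only; rounding just the first or last row of a grouped section would need a CAShapeLayer mask built from a path that rounds only the top or bottom corners.

    ```objc
    // Sketch only: Core Animation clipping applies to subviews, unlike CGContextClip
    // inside drawRect:. Assumes QuartzCore is linked; RoundedCell is a placeholder name.
    #import <QuartzCore/QuartzCore.h>
    #import "RoundedCell.h"   // hypothetical UITableViewCell subclass

    @implementation RoundedCell

    - (void)layoutSubviews {
        [super layoutSubviews];
        self.contentView.layer.cornerRadius = 12.0f;
        self.contentView.layer.masksToBounds = YES;   // subviews are clipped to the rounded shape
    }

    @end
    ```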

    Read the article

  • Team Foundation Server Setup/Access

    - by Angel Brighteyes
    What I need: A TFS 2010 Setup that allows 2 application developers to access the TFS from remote locations. How it is setup: Server 2008 Standard 2g Ram 300g HD space SharePoint Server 2007, using SQL Server 2005 SQL Server 2008 Standard Team Foundation Server 2010 IIS 7 Sharepoint Bindings: TFS.DynAccount.Me:80; TFS:80 TFS Bindings: TFS.DynAccount.Me:8080; TFS:8080 Using DynDNS service to account for the dynamic ip address being used, this is a requirement for the moment until I can get a better isp package. Access using Local Accounts Server is not setup on a domain, or as a domain. Consequently I did not setup AD services. Problem: When logged into TFS using my credentials TFS\AdminUser through the DynDNS account TFS.DynAccount.Me I recieve the 'Red X of Death' on the Documents and Reports folder. When logged into the TFS through the local peer to peer network using the same credentials TFS\AdminUser I do not receive the 'Red X of Death' problem. Further Troubleshooting: When users 'Right Click' the 'TeamProject1' Click 'Show Project Portal' it tries to take them to http://TFS:8080 instead of http://TFS.DynAccount.Me:8080, which doing further research I am assuming that it is because team foundation server was setup with a local name of TFS instead of 'TFS.DynAccount.Me' as specified here in Visual Studio Magazines: The Red X of Death. Users can Access the Team Portal for SharePoint via http://TFS.DynAccount.Me/TeamCollection/TeamProject so it is not like we are dead in the water or anything. However, as most employees/staff are prone to do, they have expressed a great distaste for having to do it this way and just be patient until the current project is finished since we are under a very strict deadline. Is there a way to set this up differently, or change some settings someplace, reinstall it, point a CName record for our domain website to the DynAccount (e.g. TFS.OurDomain.com points to TFS.DynAccount.Me, which consequently does allow access to the http site without issues), or something. I really don't feel like after all the time and effort I have spent into, first the cost, second the bloody install, third learning SharePoint well enough, fourth the hours into days spent on this, fifth more troubleshooting, sixth employee headaches to just let it lay where it is at. I figure in my spare/off time I would keep trying to get this to work. So I really appreciate any help any one can give me. I know this is probably something really stupid simple that I will 'Face Palm' over, but at the moment the stress and frustration just has me beat. Thank you again, this community has always been a great help.

    Read the article

  • jQuery unbinding click event when maximum number of children are displayed

    - by RyanP13
    I have a personal details form that alows you to enter a certain number of dependants which is determined by the JSP application. The first dependant is visible and the user has the option to add dependants up to the maximum number. All other dependants are hidden by default and are displayed when a user clicks the 'Add another dependant button'. When the maximum number of dependants has been reached the button is greyed out and a message is generated via jQuery and displayed to tell the user exactly this. The issue i am having is when the maximum number of dependants has been reached the message is displayed but then the user can click the button to add more dependants and the message keeps on generating. I thought unbinding the click event would sort this but it seems to still be able to generate a second message. Here is the function i wrote to generate the message: // Dependant message function function maxDependMsg(msgElement) { // number of children can change per product, needs to be dynamic // count number of dependants in HTML var $dependLength = $("div.dependant").length; // add class maxAdd to grey out Button // create maximum dependants message and display, will not be created if JS turned off $(msgElement) .addClass("maxAdd") .after($('<p>') .addClass("maxMsg") .append("The selected web policy does not offer cover for more than " + $dependLength + " children, please contact our customer advisers if you wish discuss alternative policies available.")); } There is a hyperlink with a click event attached like so: $("a.add").click(function(){ // Show the next hidden table on clicking add child button $(this).closest('form').find('div.dependant:hidden:first').show(); // Get the number of hidden tables var $hiddenChildren = $('div.dependant:hidden').length; if ($hiddenChildren == 0) { // save visible state of system message $.cookies.set('cpqbMaxDependantMsg', 'visible'); // show system message that you can't add anymore dependants than what is on page maxDependMsg("a.add"); $(this).unbind("click"); } // set a cookie for the visible state of all child tables $('div.dependant').each(function(){ var $childCount = $(this).index('div.dependant'); if ($(this).is(':visible')) { $.cookies.set('cpqbTableStatus' + $childCount, 'visible'); } else { $.cookies.set('cpqbTableStatus' + $childCount, 'hidden'); } }); return false; }); All of the cookies code is for state saving when users are going back and forward through the process.
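
    One hedged way to stop the message from multiplying, regardless of whether the unbind sticks: make maxDependMsg a no-op when the message already exists. The selectors are the ones already used above; this is a sketch, not a drop-in rewrite.

    ```javascript
    // Sketch: bail out if a .maxMsg paragraph has already been generated.
    function maxDependMsg(msgElement) {
        if ($("p.maxMsg").length > 0) {
            return; // message is already on the page - do not add another
        }
        $(msgElement)
            .addClass("maxAdd")
            .after($('<p>')
                .addClass("maxMsg")
                .append("The selected web policy does not offer cover for more than " +
                        $("div.dependant").length + " children..."));
    }
    ```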

    Read the article

  • NHibernate: No persister error

    - by Mike
    Hello, In my quest to further my knowledge, I'm trying to get get NHibernate running. I have the following structure to my solution Core Class Library Project Infrastructure Class Library Project MVC Application Project Test Project In my Core project I have created the following entity: using System; namespace Core.Domain.Model { public class Category { public virtual Guid Id { get; set; } public virtual string Name { get; set; } } } In my Infrastructure Project I have the following mapping: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Core.Domain.Model" assembly="Core"> <class name="Category" table="Categories" dynamic-update="true"> <cache usage="read-write"/> <id name="Id" column="Id" type="Guid"> <generator class="guid"/> </id> <property name="Name" length="100"/> </class> </hibernate-mapping> With the following config file: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property> <property name="connection.connection_string">server=xxxx;database=xxxx;Integrated Security=true;</property> <property name="show_sql">true</property> <property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property> <property name="cache.use_query_cache">false</property> <property name="adonet.batch_size">100</property> <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property> <mapping assembly="Infrastructure" /> </session-factory> </hibernate-configuration> In my test project, I have the following Test [TestMethod] [DeploymentItem("hibernate.cfg.xml")] public void CanCreateCategory() { IRepository<Category> repo = new CategoryRepository(); Category category = new Category(); category.Name = "ASP.NET"; repo.Save(category); } I get the following error when I try to run the test: Test method Volunteer.Tests.CategoryTests.CanCreateCategory threw exception: NHibernate.MappingException: No persister for: Core.Domain.Model.Category. Any help would be greatly appreciated. I do have the cfg build action set to embedded resource. Thanks!
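
    "No persister" almost always means the Category mapping never made it into the Configuration: the file must carry the .hbm.xml suffix, its Build Action must be Embedded Resource in the Infrastructure project (the assembly named in <mapping assembly="Infrastructure" />), and the repository must build its session factory from that configuration. A hedged C# sketch for checking this at startup rather than at Save() time; the class and assembly names come from the question and may need adjusting:

    ```csharp
    // Sketch: fail fast if the Category mapping was not embedded or not loaded.
    using System;
    using NHibernate;
    using NHibernate.Cfg;
    using Core.Domain.Model;

    public static class SessionFactoryBuilder
    {
        public static ISessionFactory Build()
        {
            // Reads hibernate.cfg.xml, including the <mapping assembly="Infrastructure"/> element.
            var cfg = new Configuration().Configure();

            if (cfg.GetClassMapping(typeof(Category)) == null)
                throw new InvalidOperationException(
                    "Category.hbm.xml was not found - check the .hbm.xml suffix and the Embedded Resource build action.");

            return cfg.BuildSessionFactory();
        }
    }
    ```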

    Read the article

  • Entity Attribute Value Database vs. strict Relational Model Ecommerce question

    - by Dr. Zim
    It is safe to say that the EAV/CR database model is bad. That said, Question: What database model, technique, or pattern should be used to deal with "classes" of attributes describing e-commerce products which can be changed at run time? In a good E-commerce database, you will store classes of options (like TV resolution then have a resolution for each TV, but the next product may not be a TV and not have "TV resolution"). How do you store them, search efficiently, and allow your users to setup product types with variable fields describing their products? If the search engine finds that customers typically search for TVs based on console depth, you could add console depth to your fields, then add a single depth for each tv product type at run time. There is a nice common feature among good e-commerce apps where they show a set of products, then have "drill down" side menus where you can see "TV Resolution" as a header, and the top five most common TV Resolutions for the found set. You click one and it only shows TVs of that resolution, allowing you to further drill down by selecting other categories on the side menu. These options would be the dynamic product attributes added at run time. Further discussion: So long story short, are there any links out on the Internet or model descriptions that could "academically" fix the following setup? I thank Noel Kennedy for suggesting a category table, but the need may be greater than that. I describe it a different way below, trying to highlight the significance. I may need a viewpoint correction to solve the problem, or I may need to go deeper in to the EAV/CR. Love the positive response to the EAV/CR model. My fellow developers all say what Jeffrey Kemp touched on below: "new entities must be modeled and designed by a professional" (taken out of context, read his response below). The problem is: entities add and remove attributes weekly (search keywords dictate future attributes) new entities arrive weekly (products are assembled from parts) old entities go away weekly (archived, less popular, seasonal) The customer wants to add attributes to the products for two reasons: department / keyword search / comparison chart between like products consumer product configuration before checkout The attributes must have significance, not just a keyword search. If they want to compare all cakes that have a "whipped cream frosting", they can click cakes, click birthday theme, click whipped cream frosting, then check all cakes that are interesting knowing they all have whipped cream frosting. This is not specific to cakes, just an example.
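
    One commonly suggested middle ground between a free-form EAV table and a fixed relational schema is to make the attribute definitions themselves data (per product type) while keeping each value row typed and validated against its definition. A hedged SQL sketch; every table and column name here is illustrative, and the TOP/LIMIT syntax depends on the RDBMS:

    ```sql
    -- Attribute catalog: definitions are rows, values are typed per definition.
    CREATE TABLE product_type      (id INT PRIMARY KEY, name VARCHAR(100));
    CREATE TABLE attribute_def     (id INT PRIMARY KEY,
                                    product_type_id INT REFERENCES product_type(id),
                                    name VARCHAR(100),        -- e.g. 'TV Resolution'
                                    data_type VARCHAR(20));   -- 'int' | 'text' | 'decimal'
    CREATE TABLE product           (id INT PRIMARY KEY,
                                    product_type_id INT REFERENCES product_type(id),
                                    name VARCHAR(200));
    CREATE TABLE product_attribute (product_id INT REFERENCES product(id),
                                    attribute_def_id INT REFERENCES attribute_def(id),
                                    value_int INT NULL,
                                    value_text VARCHAR(400) NULL,
                                    value_decimal DECIMAL(18,4) NULL,
                                    PRIMARY KEY (product_id, attribute_def_id));

    -- "Drill down" side menu: the most common TV resolutions in the current result set.
    SELECT pa.value_text AS resolution, COUNT(*) AS product_count
    FROM product_attribute pa
    JOIN attribute_def ad ON ad.id = pa.attribute_def_id
    WHERE ad.name = 'TV Resolution'
    GROUP BY pa.value_text
    ORDER BY COUNT(*) DESC;   -- add TOP 5 / LIMIT 5 for the side menu
    ```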

    Read the article

  • Perl DBD::DB2 installation failed

    - by prabhu
    Hi, We dont have root access in our local machine. I installed DBI package first and then installed DBD package. I got the below error first, In file included from DB2.h:22, from DB2.xs:7: dbdimp.h:10:22: dbivport.h: No such file or directory Then I included the DBI path in the Makefile and then I am getting the below error. DB2.xs: In function `XS_DBD__DB2__db_disconnect': DB2.xs:128: error: structure has no member named `_old_cached_kids' DB2.xs:129: error: structure has no member named `_old_cached_kids' DB2.xs:130: error: structure has no member named `_old_cached_kids' DB2.xs: In function `XS_DBD__DB2__db_DESTROY': DB2.xs:192: error: structure has no member named `_old_cached_kids' DB2.xs:193: error: structure has no member named `_old_cached_kids' DB2.xs:194: error: structure has no member named `_old_cached_kids' The versions I am trying to install are DBI-1.610_90.tar.gz DBD-DB2-1.78.tar.gz I am using perl Makefile.PL PREFIX=/home/prabhu/perl_pm/lib The output for perl -V is as follows: Summary of my perl5 (revision 5 version 8 subversion 5) configuration: Platform: osname=linux, osvers=2.4.21-27.0.2.elsmp, archname=i386-linux-thread-multi uname='linux decompose.build.redhat.com 2.4.21-27.0.2.elsmp #1 smp wed jan 12 23:35:44 est 2005 i686 i686 i386 gnulinux ' config_args='-des -Doptimize=-O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -Dversion=5.8.5 -Dmyhostname=localhost -Dperladmin=root@localhost -Dcc=gcc -Dcf_by=Red Hat, Inc. -Dinstallprefix=/usr -Dprefix=/usr -Darchname=i386-linux -Dvendorprefix=/usr -Dsiteprefix=/usr -Duseshrplib -Dusethreads -Duseithreads -Duselargefiles -Dd_dosuid -Dd_semctl_semun -Di_db -Ui_ndbm -Di_gdbm -Di_shadow -Di_syslog -Dman3ext=3pm -Duseperlio -Dinstallusrbinperl -Ubincompat5005 -Uversiononly -Dpager=/usr/bin/less -isr -Dinc_version_list=5.8.4 5.8.3 5.8.2 5.8.1 5.8.0' hint=recommended, useposix=true, d_sigaction=define usethreads=define use5005threads=undef useithreads=define usemultiplicity=define useperlio=define d_sfio=undef uselargefiles=define usesocks=undef use64bitint=undef use64bitall=undef uselongdouble=undef usemymalloc=n, bincompat5005=undef Compiler: cc='gcc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm', optimize='-O2 -g -pipe -m32 -march=i386 -mtune=pentium4', cppflags='-D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -I/usr/include/gdbm' ccversion='', gccversion='3.4.4 20050721 (Red Hat 3.4.4-2)', gccosandvers='' intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234 d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12 ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8 alignbytes=4, prototype=define Linker and Libraries: ld='gcc', ldflags =' -L/usr/local/lib' libpth=/usr/local/lib /lib /usr/lib libs=-lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lpthread -lc perllibs=-lnsl -ldl -lm -lcrypt -lutil -lpthread -lc libc=/lib/libc-2.3.4.so, so=so, useshrplib=true, libperl=libperl.so gnulibc_version='2.3.4' Dynamic Linking: dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E -Wl,-rpath,/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE' cccdlflags='-fPIC', lddlflags='-shared -L/usr/local/lib' Characteristics of this binary (from libperl): Compile-time options: DEBUGGING MULTIPLICITY USE_ITHREADS USE_LARGE_FILES PERL_IMPLICIT_CONTEXT Built under linux Compiled at Aug 2 2005 04:48:47 %ENV: 
PERL5LIB=":/opt/india/dev/perl/XML-XPath-1.13/lib/perl5:/opt/india/dev/perl/XML-XPath-1.13/lib/perl5/site_perl:/opt/india/dev/perl/XML-XPath-1.13/lib/perl5:/opt/india/dev/perl/XML-XPath-1.13/lib/perl5/site_perl" Could anyone help me to resolve this issue? Appreciate help in advance. Prabhu
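
    The two errors point in the same direction: dbivport.h ships with DBI (driver authors are meant to copy it in), and _old_cached_kids was dropped from DBI's internal structures around the 1.610 development series, so DBD-DB2 1.78 is being compiled against a DBI it was not written for. A hedged, non-root sketch is to build a stable DBI release into a local prefix and compile the DBD against that; the DBI version number and paths below are examples only:

    ```sh
    # Which DBI does the toolchain currently see?
    perl -MDBI -e 'print "$DBI::VERSION\n"'

    # Build a stable DBI (not the 1.610_90 development release) into a local prefix
    cd DBI-1.609 && perl Makefile.PL PREFIX=$HOME/perl_local && make && make install && cd ..

    # Make the local DBI visible to both the build and the runtime, then rebuild DBD::DB2
    export PERL5LIB=$HOME/perl_local/lib/perl5/site_perl:$PERL5LIB
    cd DBD-DB2-1.78 && perl Makefile.PL PREFIX=$HOME/perl_local && make && make test
    ```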

    Read the article

  • asp.net CustomValidator never fires OnServerValidate

    - by Bryce Fischer
    I have the following ASP page: <asp:Content ID="Content2" ContentPlaceHolderID="ShellContent" runat="server"> <form runat="server" id="AddNewNoteForm" method="post""> <fieldset id="NoteContainer"> <legend>Add New Note</legend> <asp:ValidationSummary ID="ValidationSummary1" runat="server" /> <div class="ctrlHolder"> <asp:Label ID="LabelNoteDate" runat="server" Text="Note Date" AssociatedControlID="NoteDateTextBox"></asp:Label> <asp:TextBox ID="NoteDateTextBox" runat="server" class="textInput" CausesValidation="True" ></asp:TextBox> <asp:CustomValidator ID="CustomValidator1" runat="server" ErrorMessage="CustomValidator" ControlToValidate="NoteDateTextBox" OnServerValidate="CustomValidator1_ServerValidate" Display="Dynamic" >*</asp:CustomValidator> </div> <div class="ctrlHolder"> <asp:Label ID="LabelNoteText" AssociatedControlID="NoteTextTextBox" runat="server" Text="Note"></asp:Label> <asp:TextBox ID="NoteTextTextBox" runat="server" Height="102px" TextMode="MultiLine" class="textInput" ></asp:TextBox> <asp:RequiredFieldValidator ID="RequiredFieldValidator2" runat="server" ErrorMessage="Note Text is Required" ControlToValidate="NoteTextTextBox">*</asp:RequiredFieldValidator> </div> <div class="buttonHolder"> <asp:Button ID="OkButton" runat="server" Text="Add New Note" CssClass="primaryAction" onclick="OkButton_Click"/> <asp:HyperLink ID="HyperLink1" runat="server">Cancel</asp:HyperLink> </div> </fieldset> </form> </asp:Content> and the following code behind for the CustomValidator1_ServerValidate() method: protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args) { if (string.IsNullOrEmpty(args.Value.Trim())) { args.IsValid = false; CustomValidator1.ErrorMessage = "Note Date is Required!"; return; } DateTime testDate; if (DateTime.TryParse(args.Value, out testDate)) { args.IsValid = true; CustomValidator1.ErrorMessage = "Invalid Date!"; } } It never seems to fail validation no matter what I put in the text box... Should mention this is ASP.NET 2.0
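
    Two things are commonly needed before OnServerValidate appears to "fire": the validator skips empty input on the server unless ValidateEmptyText="true" (available from ASP.NET 2.0), and its verdict only matters if the postback handler checks Page.IsValid. A hedged sketch; only the pieces around the existing handler change:

    ```csharp
    // Markup change (sketch): let empty text reach the server-side validator.
    //   <asp:CustomValidator ... ValidateEmptyText="true" ... />

    protected void OkButton_Click(object sender, EventArgs e)
    {
        Page.Validate();        // runs CustomValidator1_ServerValidate and the other validators
        if (!Page.IsValid)
        {
            return;             // stop here; the ValidationSummary shows the error message
        }
        // ... proceed to add the note ...
    }
    ```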

    Read the article

  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011 starting almost from scratch (aside from migrating the database), brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP. Database SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway). Content Manager A single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time so not under a particularly heavy load. Content Data Store Filesystem located somewhere on the network. One directory for live and one for staging. Content Delivery Two servers (cd1 and cd2) each with the the following server roles installed. cd1 writes to a filesystem content data store for the live website, cd2 writes to the content data store for the staging website. Presentation Two public facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store as its a filesystem. Each of the web servers have the Content Delivery Server installed so that I can use dynamic linking (and other features?). I've so far set up everything but the web servers. Any thoughts? edit Thanks to Ram S who linked me to a decent walkthrough, upvoted. I suppose I should have posed some questions as I didn't really ask a question. I guess I'm a little confused over the content deliver aspect. I have the Content Delivery split in two separate parts. cd1 and cd2 do the work of shifting information from the Content Manager to the Staging/Live web directories. web1 and web2 should do the work of serving the web pages to the outside world and will interact with the content data store (file system). Is this a correct setup? I need some parts of the Content Delivery on my web servers right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment right? But I suspect this will put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

    Read the article

  • VBA - Access 03 - Iterating through a list box, with an if statement to evaluate

    - by Justin
    So I have a one list box with values like DeptA, DeptB, DeptC & DeptD. I have a method that causes these to automatically populate in this list box if they are applicable. So in other words, if they populate in this list box, I want the resulting logic to say they are "Yes" in a boolean field in the table. So to accomplish this I am trying to use this example of iteration to cycle through the list box first of all, and it works great: dim i as integer dim myval as string For i = o to me.lstResults.listcount - 1 myVal = lstResults.itemdata(i) Next i if i debug.print myval, i get the list of data items that i want from the list box. so now i am trying to evaluate that list so that I can have an UPDATE SQL statement to update the table as i need it to be done. so, i know this is a mistake, but this is what i tried to do (giving it as an example so that you can see what i am trying to get to here) dim sql as string dim i as integer dim myval as string dim db as database sql = "UPDATE tblMain SET " for i = 0 to me.lstResults.listcount - 1 myval = lstResults.itemdata(i) If MyVal = "DeptA" Then sql = sql & "DeptA = Yes" ElseIF myval = "DeptB" Then sql = sql & "DeptB = Yes" ElseIf MyVal = "DeptC" Then sql = sql & "DeptC = Yes" ElseIf MyVal = "DeptD" Then sql = sql & "DeptD = Yes" End If Next i debug.print (sql) sql = sql & ";" set db= currentdb db.execute(sql) msgbox "Good Luck!" So you can see why this is going to cause problems because the listbox that these values (DeptA, DeptB, etc) automatically populate in are dynamic....there is rarely one value in the listbox, and the list of values changes per OrderID (what the form I am using this on populates information for in the first place; unique instance). I am looking for something that will evaluate this list one at a time (i.e. iterate through the list of values, and look for "DeptA", and if it is found add yes to the SQL string, and if it not add no to the SQL string, then march on to the next iteration). Even though the listbox populates values dynamically, they are set values, meaning i know what could end up in it. Thanks for any help, Justin
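
    A hedged sketch of one way to build the statement so every department column gets an explicit Yes/No and the update targets only the current order. The sub name, the Me.txtOrderID control and the fixed department list are assumptions - substitute your own names:

    ```vba
    ' Sketch only: loops the known department names once, checks whether each one
    ' appears in the list box, and builds a single UPDATE for the current order.
    Private Sub cmdSaveDepartments_Click()
        Dim sql As String
        Dim setList As String
        Dim i As Integer
        Dim dept As Variant
        Dim found As Boolean

        For Each dept In Array("DeptA", "DeptB", "DeptC", "DeptD")
            found = False
            For i = 0 To Me.lstResults.ListCount - 1
                If Me.lstResults.ItemData(i) = dept Then found = True
            Next i
            setList = setList & dept & " = " & IIf(found, "Yes", "No") & ", "
        Next dept

        setList = Left(setList, Len(setList) - 2)                 ' drop the trailing ", "
        sql = "UPDATE tblMain SET " & setList & " WHERE OrderID = " & Me.txtOrderID
        Debug.Print sql
        CurrentDb.Execute sql, dbFailOnError
    End Sub
    ```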

    Read the article

  • Does weak typing offer any advantages?

    - by sub
    Don't confuse this with static vs. dynamic typing! You all know JavaScripts/PHPs infamous type systems: PHP example: echo "123abc"+2; // 125 - the reason for this is explained // in the PHP docs but still: This hurts echo "4"+1; // 5 - Oh please echo "ABC"*5; // 0 - WTF // That's too much, seriously now. // This here might be actually a use for weak typing, but no - // it has to output garbage. JavaScript example: // A good old JavaScript, maybe you'll do better? alert("4"+1); // 51 - Oh come on. alert("abc"*3); // NaN - What the... // Have your creators ever heard of the word "consistence"? Python example: # Python's type system is actually a mix # It spits errors on senseless things like the first example below AND # allows intelligent actions like the second example. >>> print("abc"+1) Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> print("abc"+1) TypeError: Can't convert 'int' object to str implicitly >>> print("abc"*5) abcabcabcabcabc Ruby example: puts 4+"1" // Type error - as supposed puts "abc"*4 // abcabcabcabc - makes sense After these examples it should be clear that PHP/JavaScript probably have the most inconsistent type systems out there. This is a fact and really not subjective. Now when having a closer look at the type systems of Ruby and Python it seems like they are having a much more intelligent and consistent type system. I think these examples weren't really necessary as we all know that PHP/JavaScript have a weak and Python/Ruby have a strong type system. I just wanted to mention why I'm asking this. Now I have two questions: When looking at those examples, what are the advantages of PHPs and JavaScripts type systems? I can only find downsides: They are inconsistent and I think we know that this is not good Types conversions are hardly controllable Bugs are more likely to happen and much harder to spot Do you prefer one of the both systems? Why? Personally I have worked with PHP, JavaScript and Python so far and must say that Pythons type system has really only advantages over PHPs and JavaScripts. Does anybody here not think so? Why then?

    Read the article

  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.)

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try and to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try and keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following: class User < ActiveRecord::Base has_many :books1, :books2, :books3, :books4, :books5 end class Books1 < ActiveRecord::Base belongs_to :user end class Books2 < ActiveRecord::Base belongs_to :user end class Books3 < ActiveRecord::Base belongs_to :user end I'm assuming that the main performance hit would come in terms of memory and possibly some method call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_. But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. So, a few questions: 1) Is there's some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this? 2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this? 3) Does anyone have any advice on how to abstract the fact that there's multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author'). Thank you!
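
    One hedged way to hide the split behind a single accessor: record which table a user was assigned at creation time (a books_table_index column is assumed here) and delegate to the matching association. The unused has_many declarations are cheap because association proxies do not touch the database until first use; this is a Rails 2.x style sketch, not a recommendation for the multi-table design itself.

    ```ruby
    # Sketch: assumes users.books_table_index holds 1..5, set when the user is created.
    class User < ActiveRecord::Base
      has_many :books1, :class_name => 'Books1'
      has_many :books2, :class_name => 'Books2'
      has_many :books3, :class_name => 'Books3'
      has_many :books4, :class_name => 'Books4'
      has_many :books5, :class_name => 'Books5'

      # Single entry point, so callers never care which physical table is behind it.
      def books
        send("books#{books_table_index}")
      end
    end

    # usage: user.books.find_by_author('Author')  -- same call for every user
    ```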

    Read the article

  • Succinct introduction to C++/CLI for C#/Haskell/F#/JS/C++/... programmer

    - by Henrik
    Hello everybody, I'm trying to write integrations with the operating system and with things like active directory and Ocropus. I know a bunch of programming languages, including those listed in the title. I'm trying to learn exactly how C++/CLI works, but can't find succinct, exact and accurate descriptions online from the searching that I have done. So I ask here. Could you tell me the pitfalls and features of C++/CLI? Assume I know all of C# and start from there. I'm not an expert in C++, so some of my questions' answers might be "just like C++", but could say that I am at C#. I would like to know things like: Converting C++ pointers to CLI pointers, Any differences in passing by value/doubly indirect pointers/CLI pointers from C#/C++ and what is 'recommended'. How do gcnew, __gc, __nogc work with Polymorphism Structs Inner classes Interfaces The "fixed" keyword; does that exist? Compiling DLLs loaded into the kernel with C++/CLI possible? Loaded as device drivers? Invoked by the kernel? What does this mean anyway (i.e. to load something into the kernel exactly; how do I know if it is?)? L"my string" versus "my string"? wchar_t? How many types of chars are there? Are we safe in treating chars as uint32s or what should one treat them as to guarantee language indifference in code? Finalizers (~ClassName() {}) are discouraged in C# because there are no garantuees they will run deterministically, but since in C++ I have to use "delete" or use copy-c'tors as to stack allocate memory, what are the recommendations between C#/C++ interactions? What are the pitfalls when using reflection in C++/CLI? How well does C++/CLI work with the IDisposable pattern and with SafeHandle, SafeHandleZeroOrMinusOneIsInvalid? I've read briefly about asynchronous exceptions when doing DMA-operations, what are these? Are there limitations you impose upon yourself when using C++ with CLI integration rather than just doing plain C++? Attributes in C++ similar to Attributes in C#? Can I use the full meta-programming patterns available in C++ through templates now and still have it compile like ordinary C++? Have you tried writing C++/CLI with boost? What are the optimal ways of interfacing the boost library with C++/CLI; can you give me an example of passing a lambda expression to an iterator/foldr function? What is the preferred way of exception handling? Can C++/CLI catch managed exceptions now? How well does dynamic IL generation work with C++/CLI? Does it run on Mono? Any other things I ought to know about?
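
    A minimal, hedged sketch of the managed-type basics several of these bullets ask about (handles vs native pointers, gcnew, destructor vs finalizer); the rest of the list is best checked against the C++/CLI language specification rather than a forum summary.

    ```cpp
    // Minimal C++/CLI sketch - compile with /clr. Illustrative only.
    using namespace System;

    ref class Wrapper {                 // managed class; instances live on the GC heap
        int* native_;                   // native pointers are still allowed as members
    public:
        Wrapper() : native_(new int(42)) {}
        ~Wrapper() { delete native_; native_ = nullptr; }   // destructor == IDisposable.Dispose
        !Wrapper() { delete native_; }                      // finalizer, runs only if Dispose was skipped
    };

    int main(array<String^>^ args) {
        Wrapper^ w = gcnew Wrapper();   // ^ is a tracking handle into the GC heap, not a raw pointer
        String^ s = L"my string";       // L"" and "" literals both convert to System::String^
        delete w;                       // deterministic cleanup: calls the destructor now
        return 0;
    }
    ```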

    Read the article

  • How do you make a CSS-defined table-cell scroll?

    - by Giffyguy
    I want to be able to set the height of the table, and force the cells to scroll individually if they are larger than the table. Consider the following code: (see it in action here) <div style="display: table; position: absolute; width: 25%; height: 80%; min-height: 80%; max-height: 80%; left: 0%; top: 10%; right: 75%; bottom: 10%; border: solid 1px black;"> <div style="display: table-row;"> <div style="display: table-cell; border: solid 1px blue;"> {Some dynamic text content}<br/> This cell should shrink to fit it's contents. </div> </div> <div style="display: table-row;"> <div style="display: table-cell; border: solid 1px red; overflow: scroll;"> This should only take up the remainder of the table's vertical space. This should only take up the remainder of the table's vertical space. This should only take up the remainder of the table's vertical space. This should only take up the remainder of the table's vertical space. This should only take up the remainder of the table's vertical space. This should only take up the remainder of the table's vertical space. This should only take up the remainder of the table's vertical space. This should only take up the remainder of the table's vertical space. </div> </div> </div> If you open this code (in IE8, in my case) you'll notice that the second cell fits in the table nicely when the browser is maximized. In theory, when you shrink the browser (forcing the table to shrink as well), a vertical scrollbar should appear INSIDE the second cell when the table becomes too small to fit all of the content. But in reality, the table just grows vertically, beyond the bounds set by the CSS height attribute(s). Hopefully I've explained this scenario adequately... Does anyone know how I can get this to work?
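
    A table cell will normally grow to fit its content rather than scroll it, so the usual workaround is to give the cell an inner wrapper and put the scrollbar on that. A hedged sketch; the fixed max-height is the awkward part, since CSS tables size to content, so in practice the wrapper's height is often set from script or derived from the known layout:

    ```html
    <div style="display: table-cell; border: solid 1px red;">
      <!-- the wrapper, not the cell, owns the overflow -->
      <div style="max-height: 300px; overflow: auto;">
        This should only take up the remainder of the table's vertical space. ...
      </div>
    </div>
    ```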

    Read the article

  • Creating an XML sitemap with PHP

    - by iMaster
    I'm trying to create a sitemap that will automatically update. I've done something similiar with my RSS feed, but this sitemap refuses to work. You can view it live at http://designdeluge.com/sitemap.xml I think the main problem is that its not recognizing the PHP code. Here's the full source: <?php header('Content-type: text/xml'); ?> <?xml version="1.0" encoding="UTF-8" ?> <urlset xmlns="http://www.google.com/schemas/sitemap/0.84" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.google.com/schemas/sitemap/0.84 http://www.google.com/schemas/sitemap/0.84/sitemap.xsd"> <url> <loc>http://www.designdeluge.com/</loc> <lastmod>2010-04-29</lastmod> <changefreq>daily</changefreq> <priority>1.00</priority> </url> <url> <loc>http://www.designdeluge.com/archives</loc> <lastmod>2010-04-29</lastmod> <changefreq>daily</changefreq> <priority>0.5</priority> </url> <url> <loc>http://www.designdeluge.com/about.php</loc> <lastmod>2010-04-29</lastmod> <changefreq>daily</changefreq> <priority>0.5</priority> </url> <?php include 'connection.php'; $entries = mysql_query("SELECT * FROM Entries ORDER BY timestamp DESC"); while($row = mysql_fetch_array($entries)) { $title = stripslashes($row['title']); $date = date("Y-m-d", strtotime($row['timestamp'])); ?> <url> <loc>http://www.designdeluge.com/<?php echo $row['title']; ?></loc> <lastmod><?php echo $date; ?></lastmod> <changefreq>none</changefreq> <priority>0.5</priority> </url> <?php } ?> </urlset> The problem is that the dynamic URL's (e.g. the ones pulled from the DB) aren't being generated and the sitemap won't validate. Thanks!
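
    Two likely culprits, both hedged guesses from the symptoms: a file served as .xml is normally never passed through the PHP interpreter at all, and with short_open_tag enabled the literal <?xml declaration is itself parsed as a PHP open tag. A common arrangement is to keep the script as sitemap.php, emit the XML declaration with echo, and rewrite the .xml URL to it (Apache mod_rewrite shown as an example):

    ```php
    <?php
    // sitemap.php - sketch. Example .htaccess rule so /sitemap.xml still works:
    //   RewriteEngine On
    //   RewriteRule ^sitemap\.xml$ /sitemap.php [L]
    header('Content-type: text/xml');
    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    ?>
    <urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
      <!-- static entries and the database-driven loop from the original code go here -->
    </urlset>
    ```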

    Read the article

  • SignalR recording when a Web Page has closed

    - by Benjamin Rogers
    I am using MassTransit request and response with SignalR. The web site makes a request to a windows service that creates a file. When the file has been created the windows service will send a response message back to the web site. The web site will open the file and make it available for the users to see. I want to handle the scenario where the user closes the web page before the file is created. In that case I want the created file to be emailed to them. Regardless of whether the user has closed the web page or not, the message handler for the response message will be run. What I want to be able to do is have some way of knowing within the response message handler that the web page has been closed. This is what I have done already. It doesnt work but it does illustrate my thinking. On the web page I have $(window).unload(function () { if (event.clientY < 0) { // $.connection.hub.stop(); $.connection.exportcreate.setIsDisconnected(); } }); exportcreate is my Hub name. In setIsDisconnected would I set a property on Caller? Lets say I successfully set a property to indicate that the web page has been closed. How do I find out that value in the response message handler. This is what it does now protected void BasicResponseHandler(BasicResponse message) { string groupName = CorrelationIdGroupName(message.CorrelationId); GetClients()[groupName].display(message.ExportGuid); } private static dynamic GetClients() { return AspNetHost.DependencyResolver.Resolve<IConnectionManager>().GetClients<ExportCreateHub>(); } I am using the message correlation id as a group. Now for me the ExportGuid on the message is very important. That is used to identify the file. So if I am going to email the created file I have to do it within the response handler because I need the ExportGuid value. If I did store a value on Caller in my hub for the web page close, how would I access it in the response handler. Just in case you need to know. display is defined on the web page as exportCreate.display = function (guid) { setTimeout(function () { top.location.href = 'GetExport.ashx?guid=' + guid; }, 500); }; GetExport.ashx opens the file and returns it as a response. Thank you, Regards Ben

    Read the article

  • "RFC 2833 RTP Event" Consecutive Events and the E "End" Bit

    - by brian_d
    Hello, I can send out a RFC 2833 dtmf event as outlined at http://www.ietf.org/rfc/rfc2833.txt When I do set the E "End" bit, but leave it as 0, I get the following behaviour: If for example keys 7874556332111111145855885#3 were pressed, then ALL events would be sent and show up in a program like wireshark, however only 87456321458585#3 would sound. So the first key (which I figure could be a separate issue) and any repeats of an event (ie 11111) are failing to sound. In section 3.9, figure 2 of the above linked document, they give a 911 example. Here all but the last event have the E bit set. When I set the bit for all numbers, I never get an event to sound. I have thought of a couple possible thing but do not know if they are the reason: 1) figure 2 shows payload types of 96 and 97 sent. I have not nor know how to exactly. In section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively" 2) In section 3.5, "E:", "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission" Does anyone have an idea of how to actually do this? I have also fiddled around with timestamp intervals and the RTP marker. Any help is greatly appreciated. Here is a sample wireshark event capture of the relevant areas: 6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end) Real-Time Transport Protocol Stream setup by SDP (frame 6225) Setup frame: 6225 Setup Method: SDP 10.. .... = Version: RFC 1889 Version (2) ..0. .... = Padding: False ...0 .... = Extension: False .... 0000 = Contributing source identifiers count: 0 0... .... = Marker: False Payload type: telephone-event (101) Sequence number: 0 Extended sequence number: 65536 Timestamp: 0 Synchronization Source identifier: 0x15f27104 (368210180) RFC 2833 RTP Event Event ID: DTMF Pound # (11) 1... .... = End of Event: True .0.. .... = Reserved: False ..00 0000 = Volume: 0 Event Duration: 2048
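
    For reference while debugging: the telephone-event payload is four bytes, and per RFC 2833 each new digit should start with a fresh RTP timestamp and the RTP marker bit set, while the final packet of an event (the one with E set) should be retransmitted for redundancy. Repeated digits that reuse a timestamp tend to be collapsed into one tone, which matches the dropped "11111" behaviour described above. A hedged C sketch of the layout only; bit-field ordering is compiler-dependent, so real code should pack these fields with explicit shifts and masks:

    ```c
    /* RFC 2833 telephone-event payload, 4 bytes in network byte order. Sketch only. */
    #include <stdint.h>

    struct rfc2833_event {
        uint8_t  event;      /* 0-9 digits, 10 = '*', 11 = '#', 12-15 = A-D          */
        uint8_t  e_bit  : 1; /* End bit: set on the last packet(s) of the tone       */
        uint8_t  r_bit  : 1; /* reserved, must be zero                               */
        uint8_t  volume : 6; /* tone power, 0..63, interpreted as negative dBm0      */
        uint16_t duration;   /* in RTP timestamp units, grows while the key is held  */
    };
    ```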

    Read the article

  • Rails test case fixtures not loading

    - by Deano
    Rails appears to not be loading any fixtures for unit or functional tests. I have a simple 'products.yml' that parses and appears correct: ruby: title: Programming Ruby 1.9 description: Ruby is the fastest growing and most exciting dynamic language out there. If you need to get working programs delivered fast, you should add Ruby to your toolbox. price: 49.50 image_url: ruby.png My controller functional test begins with: require 'test_helper' class ProductsControllerTest < ActionController::TestCase fixtures :products setup do @product = products(:one) @update = { :title => 'Lorem Ipsum' , :description => 'Wibbles are fun!' , :image_url => 'lorem.jpg' , :price => 19.95 } end According to the book, Rails should "magically" load the fixtures (as my test_helper.rb has fixtures :all in it. I also added the explicit fixtures load (seen above). Yes Rails complains: user @ host ~/Dropbox/Rails/depot > rake test:functionals (in /Somewhere/Users/user/Dropbox/Rails/depot) /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/functional/products_controller_test.rb" Loaded suite /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader Started EEEEEEE Finished in 0.062506 seconds. 1) Error: test_should_create_product(ProductsControllerTest): NoMethodError: undefined method `products' for ProductsControllerTest:Class /test/functional/products_controller_test.rb:7 2) Error: test_should_destroy_product(ProductsControllerTest): NoMethodError: undefined method `products' for ProductsControllerTest:Class /test/functional/products_controller_test.rb:7 ... I did come across the other Rails test fixture question http://stackoverflow.com/questions/1547634/rails-unit-testing-doesnt-load-fixtures, but that leads to a plugin issue (something to do with the order of loading fixtures). BTW, I am developing on Mac OS X 10.6 with Rail 2.3.5 and Ruby 1.8.7, no additional plugins (beyond the base install). Any pointers on how to debug, why the magic of Rails appears to be failing here? Is it a version problem? Can I trace code into the libraries and find the answer? There are so many "mixin" modules I can't find where the fixtures method really lives.
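
    The class-level NoMethodError usually means the fixture accessors never got mixed in, which makes test_helper.rb the first thing to check; the stock Rails 2.3 shape is sketched below. Separately, the functional test asks for products(:one) while products.yml only defines a fixture named ruby, so even with fixtures loading it would need products(:ruby) or a renamed fixture.

    ```ruby
    # test/test_helper.rb - the generated Rails 2.3 layout; `fixtures :all` here is
    # what makes products(:ruby) available in unit and functional tests.
    ENV["RAILS_ENV"] = "test"
    require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
    require 'test_help'

    class ActiveSupport::TestCase
      self.use_transactional_fixtures = true
      self.use_instantiated_fixtures  = false
      fixtures :all
    end
    ```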

    Read the article

  • printing a multidimensional array

    - by Honey
    I have this multidimensional array that I want to print into a table, with each record/item going into its own row, but the output comes out column-wise. This is the output that I'm getting: http://mypetshopping.com/product.php PS: the value of $product will be dynamic based on which product is being viewed. <?php session_start(); ?> <table> <thead> <tr> <th>Name</th> <th>Hash</th> <th>Quantity</th> <th>Size</th> <th>Color</th> </tr> </thead> <tbody> <?php function addCart($product, $quantity, $size,$color) { $hash = md5($product); $_SESSION['cart'][$product]['name'] = $product; $_SESSION['cart'][$product]['hash'] = $hash; $_SESSION['cart'][$product]['quantity'] = $quantity; $_SESSION['cart'][$product]['size'] = $size; $_SESSION['cart'][$product]['color'] = $color; } addCart('Red Dress',1,'XL','red'); addCart('Blue Dress',1,'XL','blue'); addCart('Slippers',1,'XL','orange'); addCart('Green Hat',1,'XXXL','green'); $cart = $_SESSION['cart']; foreach($cart as $product => $array) { foreach($array as $key => $value) { ?> <tr> <td><?=$value;?></td> <td><?=$value;?></td> <td><?=$value;?></td> <td><?=$value;?></td> <td><?=$value;?></td> </tr> <?php } } ?>
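
    The inner foreach runs once per field and emits a full <tr> each time, which is exactly the column-wise output seen above. A hedged sketch of the loop emitting one row per cart item and one cell per field instead:

    ```php
    <?php foreach ($cart as $product => $item): ?>
      <tr>
        <td><?php echo $item['name']; ?></td>
        <td><?php echo $item['hash']; ?></td>
        <td><?php echo $item['quantity']; ?></td>
        <td><?php echo $item['size']; ?></td>
        <td><?php echo $item['color']; ?></td>
      </tr>
    <?php endforeach; ?>
    ```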

    Read the article

  • problem with reloading data in a table view after coming back from another view

    - by user129677
    I have a problem in my application. Any help will be greatly appreciated. Basically it is from view A to view B, and then come back from view B. In the view A, it has dynamic data loaded in from the database, and display on the table view. In this page, it also has the edit button, not on the navigation bar. When user tabs the edit button, it goes to the view B, which shows the pick view. And user can make any changes in here. Once that is done, user tabs the back button on the navigation bar, it saves the changes into the NSUserDefaults, goes back to the view A by pop the view B. When coming back to the view A, it should get the new data from the UIUserDefaults, and it did. I user NSLog to print out to the console and it shows the correct data. Also it should invoke the viewWillAppear: method to get the new data for the table view, but it didn't. It even did not call the tableView:numberOfRowsInSection: method. I place a NSLog statement inside this method but didn't print out in the console. as the result, the view A still has the old data. the only way to get the new data in the view A is to stop and start the application. both view A and view B are the subclass of UIViewController, with UITableViewDelegate and UITableViewDataSource. here is my code in the view A : - (void)viewWillAppear:(BOOL)animated { NSLog(@"enter in Schedule2ViewController ..."); // load in data from database, and store into NSArray object //[self.theTableView reloadData]; [self.theTableView setNeedsDisplay]; //[self.theTableView setNeedsLayout]; } in here, the "theTableView" is a UITableView variable. And I try all three cases of "reloadData", "setNeedsDisplay", and "setNeedsLayout", but didn't seem to work. in the view B, here is the method corresponding to the back button on the navigation bar. - (void)viewDidLoad { UIBarButtonItem *saveButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemSave target:self action:@selector(savePreference)]; self.navigationItem.leftBarButtonItem = saveButton; [saveButton release]; } - (IBAction) savePreference { NSLog(@"save preference."); // save data into the NSUSerDefaults [self.navigationController popViewControllerAnimated:YES]; } Am I doing in the right way? Or is there anything that I missed? Many thanks.
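
    When view A's controller really is the one the navigation controller pops back to, viewWillAppear: should fire; the usual checks are that the override calls super and that the freshly read defaults are pushed into the table's data source before reloadData. A hedged sketch - the scheduleData property and the defaults key are placeholders, not names from the question:

    ```objc
    @implementation Schedule2ViewController

    - (void)viewWillAppear:(BOOL)animated {
        [super viewWillAppear:animated];                 // don't skip the superclass work

        // Placeholder names: refresh the backing store from NSUserDefaults first...
        self.scheduleData = [[NSUserDefaults standardUserDefaults] arrayForKey:@"schedulePreference"];

        // ...then reload, so tableView:numberOfRowsInSection: runs against the new data.
        [self.theTableView reloadData];
    }

    @end
    ```

    If viewWillAppear: never runs at all, it usually means view A's view was added to the hierarchy by hand rather than through the navigation controller, in which case reloading from a delegate callback or an NSNotification posted by view B is the usual workaround.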

    Read the article

  • Why does PostgreSQL query performance drop over time, but recover when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgresQL table that has a continuous rate of updates, deletes and inserts that over time (a few days) sees a significant query degradation. If we delete and recreate the index, query performance is restored. We are using out of the box settings. The table in our test is currently starting out empty and grows to half a million rows. It has a fairly large row (lots of text fields). We are search is based of an index, not the primary key (I've confirmed the index is being used, at least under normal conditions) The table is being used as a persistent store for a single process. Using PostgresQL on Windows with a Java client I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuild to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code. Additional information: Table and index CREATE TABLE icl_contacts ( id bigint NOT NULL, campaignfqname character varying(255) NOT NULL, currentstate character(16) NOT NULL, xmlscheduledtime character(23) NOT NULL, ... 25 or so other fields. Most of them fixed or varying character fiel ... CONSTRAINT icl_contacts_pkey PRIMARY KEY (id) ) WITH (OIDS=FALSE); ALTER TABLE icl_contacts OWNER TO postgres; CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname); Analyze: Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1) - Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1) Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text)) And, yes, I am aware there there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is about understanding how PostgresQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
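
    The symptom (performance restored by rebuilding the index) is consistent with index bloat from heavy update/delete churn that autovacuum is not keeping up with. Before re-architecting, a hedged first step is to make vacuum much more aggressive on this one table, watch the dead-tuple counts, and schedule REINDEX for a quiet window; the thresholds below are illustrative and the per-table autovacuum options depend on the server version:

    ```sql
    -- Per-table autovacuum tuning (newer 8.x servers; values are examples only)
    ALTER TABLE icl_contacts SET (
        autovacuum_vacuum_scale_factor  = 0.05,
        autovacuum_analyze_scale_factor = 0.05
    );

    -- Rebuild the bloated index during a maintenance window (blocks writes while it runs)
    REINDEX INDEX icl_contacts_idx;

    -- Is vacuum keeping up? Watch dead tuples between runs.
    SELECT relname, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'icl_contacts';
    ```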

    Read the article

  • SQL Server stored procedures - update column based on variable name..?

    - by ClarkeyBoy
    Hi, I have a data driven site with many stored procedures. What I want to eventually be able to do is to say something like: For Each @variable in sproc inputs UPDATE @TableName SET @variable.toString = @variable Next I would like it to be able to accept any number of arguments. It will basically loop through all of the inputs and update the column with the name of the variable with the value of the variable - for example column "Name" would be updated with the value of @Name. I would like to basically have one stored procedure for updating and one for creating. However to do this I will need to be able to convert the actual name of a variable, not the value, to a string. Question 1: Is it possible to do this in T-SQL, and if so how? Question 2: Are there any major drawbacks to using something like this (like performance or CPU usage)? I know if a value is not valid then it will only prevent the update involving that variable and any subsequent ones, but all the data is validated in the vb.net code anyway so will always be valid on submitting to the database, and I will ensure that only variables where the column exists are able to be submitted. Many thanks in advance, Regards, Richard Clarke Edit: I know about using SQL strings and the risk of SQL injection attacks - I studied this a bit in my dissertation a few weeks ago. Basically the website uses an object oriented architecture. There are many classes - for example Product - which have many "Attributes" (I created my own class called Attribute, which has properties such as DataField, Name and Value where DataField is used to get or update data, Name is displayed on the administration frontend when creating or updating a Product and the Value, which may be displayed on the customer frontend, is set by the administrator. DataField is the field I will be using in the "UPDATE Blah SET @Field = @Value". I know this is probably confusing but its really complicated to explain - I have a really good understanding of the entire system in my head but I cant put it into words easily. Basically the structure is set up such that no user will be able to change the value of DataField or Name, but they can change Value. I think if I were to use dynamic parameterised SQL strings there will therefore be no risk of SQL injection attacks. I mean basically loop through all the attributes so that it ends up like: UPDATE Products SET [Name] = '@Name', Description = '@Description', Display = @Display Then loop through all the attributes again and add the parameter values - this will have the same effect as using stored procedures, right?? I dont mind adding to the page load time since this is mainly going to affect the administration frontend, and will marginly affect the customer frontend.
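
    T-SQL cannot use a variable directly as a column name, so the usual pattern is dynamic SQL: validate the column name against the catalog, quote it with QUOTENAME, and still pass the value and key as real parameters through sp_executesql. A hedged sketch for a single column; the Products table and Id key are illustrative, and a multi-column update would be built the same way, column by column:

    ```sql
    CREATE PROCEDURE dbo.UpdateSingleColumn
        @ColumnName sysname,
        @NewValue   nvarchar(400),
        @Id         int
    AS
    BEGIN
        -- Whitelist the identifier: refuse anything that is not a real column.
        IF NOT EXISTS (SELECT 1 FROM sys.columns
                       WHERE object_id = OBJECT_ID('dbo.Products') AND name = @ColumnName)
            RETURN;

        DECLARE @sql nvarchar(max);
        SET @sql = N'UPDATE dbo.Products SET ' + QUOTENAME(@ColumnName)
                 + N' = @val WHERE Id = @id';

        EXEC sp_executesql @sql, N'@val nvarchar(400), @id int',
             @val = @NewValue, @id = @Id;
    END
    ```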

    Read the article

  • jQuery hide ul header when all entries are deleted...

    - by Scott
    I'm a noob with jQuery...and I hope I've explained this well enough; I have a <ul> header that appears when I've added an entry to a dynamically created list using $.post. Each entry added has a delete/edit button associated with it. Header is this: <ul class="header"> <li>Month</li> <li>Year</li> <li>Cottage</li> </ul> My dynamic list that is created: <ul class="addedItems"> <li>Month</li> <li>Year</li> <li>Cottage</li> <li><span class="edit">edit</span></li> <li><span class="del">delete</span></li> </ul> This all looks like this: Month Year Cottage <--this appears after I've added an entry -------------------------------- and I want it to stick around unless all items are deleted. Dec 1990 Fir edit/delete <--entries Jan 2000 Willow edit/delete My question is: Is there some kind of conditional that I can use with jQuery to hide the class="header" if all the items are deleted? I've read up on conditional statements like is and not with jq but I'm not really understanding how they work. All of the items in class="addedItems" is stored in data produced by JSON. This is the delete function: $(".del").live("click", function(){ var del = this; var thisVal = $(del).val(); $.post("delete.php", { dirID : thisVal }, function(data){ if(confirm("Are you sure you want to DELETE this entry?") == true) { if(data.success) { //hide the class="header" here somwhere?? $(del).parents(".addedItems").hide(); } else if(data.error) { // throw error if item does not delete } } }, "json"); return false; }); //end of .del function Here is the delete.php <?php if($_POST) { $data['delID'] = $_POST['dirID']; $query = "DELETE from //tablename WHERE dirID = '{$data['delID']}' LIMIT 1"; $result = $db->query($query); if($result) { $data['success'] = true; $data['message'] = "Entry was successfully removed."; } else { $data['error'] = true; $data['message'] = "Item could not be deleted."; } echo json_encode($data); } ?>
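
    One hedged way to handle it inside the existing success branch: after hiding the deleted entry, count the entries still visible and toggle the header accordingly. The selectors are the ones already used above:

    ```javascript
    if (data.success) {
        $(del).parents(".addedItems").hide();

        // nothing left in the list -> hide the column headings too
        if ($("ul.addedItems:visible").length === 0) {
            $("ul.header").hide();
        }
    }
    ```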

    Read the article

  • Evaluation of CTEs in SQL Server 2005

    - by Jammer
    I have a question about how MS SQL evaluates functions inside CTEs. A couple of searches didn't turn up any results related to this issue, but I apologize if this is common knowledge and I'm just behind the curve. It wouldn't be the first time :-) This query is a simplified (and obviously less dynamic) version of what I'm actually doing, but it does exhibit the problem I'm experiencing. It looks like this: CREATE TABLE #EmployeePool(EmployeeID int, EmployeeRank int); INSERT INTO #EmployeePool(EmployeeID, EmployeeRank) SELECT 42, 1 UNION ALL SELECT 43, 2; DECLARE @NumEmployees int; SELECT @NumEmployees = COUNT(*) FROM #EmployeePool; WITH RandomizedCustomers AS ( SELECT CAST(c.Criteria AS int) AS CustomerID, dbo.fnUtil_Random(@NumEmployees) AS RandomRank FROM dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') c) SELECT rc.CustomerID, ep.EmployeeID FROM RandomizedCustomers rc JOIN #EmployeePool ep ON ep.EmployeeRank = rc.RandomRank; DROP TABLE #EmployeePool; The following can be assumed about all executions of the above: The result of dbo.fnUtil_Random() is always an int value greater than zero and less than or equal to the argument passed in. Since it's being called above with @NumEmployees which has the value 2, this function always evaluates to 1 or 2. The result of dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') produces a one-column, one-row table that contains a sql_variant with a base type of 'int' that has the value 219935. Given the above assumptions, it makes sense (to me, anyway) that the result of the expression above should always produce a two-column table containing one record - CustomerID and an EmployeeID. The CustomerID should always be the int value 219935, and the EmployeeID should be either 42 or 43. However, this is not always the case. Sometimes I get the expected single record. Other times I get two records (one for each EmployeeID), and still others I get no records. However, if I replace the RandomizedCustomers CTE with a true temp table, the problem vanishes completely. Every time I think I have an explanation for this behavior, it turns out to not make sense or be impossible, so I literally cannot explain why this would happen. Since the problem does not happen when I replace the CTE with a temp table, I can only assume it has something to do with the functions inside CTEs are evaluated during joins to that CTE. Do any of you have any theories?
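
    SQL Server treats a CTE as an inline view, so a non-deterministic function inside it can be re-evaluated per reference or per row during the join - which would produce exactly the 0, 1 or 2 row results described, and explains why a real temp table behaves. A hedged sketch that pins the random rank by materializing it first, reusing the function and table names from the question:

    ```sql
    -- Capture one random rank per customer, then join against the snapshot.
    DECLARE @RandomizedCustomers TABLE (CustomerID int, RandomRank int);

    INSERT INTO @RandomizedCustomers (CustomerID, RandomRank)
    SELECT CAST(c.Criteria AS int),
           dbo.fnUtil_Random(@NumEmployees)
    FROM dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') c;

    SELECT rc.CustomerID, ep.EmployeeID
    FROM @RandomizedCustomers rc
    JOIN #EmployeePool ep ON ep.EmployeeRank = rc.RandomRank;
    ```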

    Read the article
