Search Results

Search found 1151 results on 47 pages for 'lazy'.


  • Insertions into Zipper trees on XML files in Clojure

    - by ivar
    I'm confused as to how to idiomatically change an XML tree accessed through clojure.contrib's zip-filter.xml. Should I be trying to do this at all, or is there a better way? Say that I have some dummy XML file "itemdb.xml" like this:

        <itemlist>
          <item id="1">
            <name>John</name>
            <desc>Works near here.</desc>
          </item>
          <item id="2">
            <name>Sally</name>
            <desc>Owner of pet store.</desc>
          </item>
        </itemlist>

    And I have some code:

        (require '[clojure.zip :as zip]
                 '[clojure.contrib.duck-streams :as ds]
                 '[clojure.contrib.lazy-xml :as lxml]
                 '[clojure.contrib.zip-filter.xml :as zf])

        (def db (ref (zip/xml-zip (lxml/parse-trim (java.io.File. "itemdb.xml")))))

        ;; Test that we can traverse and parse.
        (doall (map #(print (format "%10s: %s\n"
                                    (apply str (zf/xml-> % :name zf/text))
                                    (apply str (zf/xml-> % :desc zf/text))))
                    (zf/xml-> @db :item)))

        ;; I assume something like this is needed to make the xml tags.
        ;; Note: the maps produced by the parser use :content, not :contents.
        (defn create-item [name desc]
          {:tag :item
           :attrs {:id "3"}
           :content (list {:tag :name :attrs {} :content (list name)}
                          {:tag :desc :attrs {} :content (list desc)})})

        (def fred-item (create-item "Fred" "Green-haired astrophysicist."))

        ;; This disturbs the structure somehow
        (defn append-item [xmldb item]
          (zip/insert-right (-> xmldb zip/down zip/rightmost) item))

        ;; I want to do something more like this
        (defn append-item2 [xmldb item]
          (zip/insert-right (zip/rightmost (zf/xml-> xmldb :item)) item))

        (dosync (alter db append-item2 fred-item))

        ;; Save this simple xml file with some added stuff.
        (ds/spit "appended-itemdb.xml"
                 (with-out-str (lxml/emit (zip/root @db) :pad true)))

    I am unclear about how to use the clojure.zip functions appropriately in this case, and how that interacts with zip-filter. If you spot anything particularly weird in this small example, please point it out.
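
    A minimal sketch of one possible fix, assuming db holds the root loc as in the code above: clojure.zip/append-child inserts an item as the rightmost child of the node at the current loc, which sidesteps the down/rightmost navigation entirely.

        ;; Sketch: append directly at the root loc; zip/append-child adds
        ;; the new node as the last child without moving the loc.
        (defn append-item3 [xmldb item]
          (zip/append-child xmldb item))

        (dosync (alter db append-item3 fred-item))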


  • Sending multiline message via sockets without closing the connection

    - by Yasir Arsanukaev
    Hello folks. Currently I have this code in my client-side Haskell application:

        import Network.Socket
        import Network.BSD
        import System.IO hiding (hPutStr, hPutStrLn, hGetLine, hGetContents)
        import System.IO.UTF8

        connectserver :: HostName  -- ^ Remote hostname, or localhost
                      -> String    -- ^ Port number or name
                      -> IO Handle
        connectserver hostname port = withSocketsDo $ do
            -- withSocketsDo is required on Windows.
            -- Look up the hostname and port. Either raises an exception
            -- or returns a nonempty list. The first element in that list
            -- is supposed to be the best option.
            addrinfos <- getAddrInfo Nothing (Just hostname) (Just port)
            let serveraddr = head addrinfos
            -- Establish a socket for communication
            sock <- socket (addrFamily serveraddr) Stream defaultProtocol
            -- Mark the socket for keep-alive handling since it may be idle
            -- for long periods of time
            setSocketOption sock KeepAlive 1
            -- Connect to server
            connect sock (addrAddress serveraddr)
            -- Make a Handle out of it for convenience
            h <- socketToHandle sock ReadWriteMode
            -- We're going to set buffering to LineBuffering and then
            -- explicitly call hFlush after each message, below, so that
            -- messages get logged immediately
            hSetBuffering h LineBuffering
            return h

        sendid :: Handle -> String -> IO String
        sendid h id = do
            hPutStr h id
            -- Make sure that we send data immediately
            hFlush h
            -- Retrieve results
            hGetLine h

    The code portions in connectserver are from this chapter of the Real World Haskell book, where they say:

        When dealing with TCP data, it's often convenient to convert a socket
        into a Haskell Handle. We do so here, and explicitly set the buffering –
        an important point for TCP communication. Next, we set up lazy reading
        from the socket's Handle. For each incoming line, we pass it to handle.
        After there is no more data – because the remote end has closed the
        socket – we output a message about that.

    Since hGetContents blocks until the server closes the socket on the other side, I used hGetLine instead. That satisfied me before I decided to implement multiline output to the client. I wouldn't like the server to close the socket every time it finishes sending multiline text. The only simple idea I have at the moment is to count the number of linefeeds and stop reading lines after two subsequent linefeeds. Do you have any better suggestions? Thanks.
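
    One common alternative to counting blank lines, sketched below: frame each multiline reply with an explicit terminator, the way SMTP and POP3 do, so the client reads until it sees a line containing only ".". This is a hedged sketch, not code from the question; readReply is a hypothetical name.

        -- Read lines until the sentinel "." line that the server is
        -- assumed to send at the end of every multiline reply.
        readReply :: Handle -> IO [String]
        readReply h = do
            line <- hGetLine h
            if line == "."
                then return []
                else do rest <- readReply h
                        return (line : rest)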


  • How do you stop a user-instance of Sql Server? (Sql Express user instance database files locked, eve

    - by Bittercoder
    When using SQL Server Express 2005's User Instance feature with a connection string like this:

        <add name="Default" connectionString="Data Source=.\SQLExpress;
             AttachDbFilename=C:\My App\Data\MyApp.mdf;
             Initial Catalog=MyApp;
             User Instance=True;
             MultipleActiveResultSets=true;
             Trusted_Connection=Yes;" />

    we find that we can't copy the database files MyApp.mdf and MyApp_Log.ldf (because they're locked) even after stopping the SqlExpress service, and have to resort to setting the SqlExpress service from automatic to manual startup mode and then restarting the machine before we can copy the files.

    It was my understanding that stopping the SqlExpress service should stop all the user instances as well, which should release the locks on those files. But this does not seem to be the case: could anyone shed some light on how to stop a user instance, such that its database files are no longer locked?

    Update

    OK, I stopped being lazy and fired up Process Explorer. The lock was held by sqlservr.exe, but there are two instances of SQL Server:

        sqlservr.exe PID: 4680 User Name: DefaultAppPool
        sqlservr.exe PID: 4644 User Name: NETWORK SERVICE

    The file is open by the sqlservr.exe instance with PID 4680. Stopping the "SQL Server (SQLEXPRESS)" service killed off the process with PID 4644, but left PID 4680 alone. Seeing as the owner of the remaining process was DefaultAppPool, the next thing I tried was stopping IIS (this database is being used from an ASP.NET application). Unfortunately this didn't kill the process off either.

    Manually killing off the remaining SQL Server process does remove the open file handle on the database files, allowing them to be copied/moved. Unfortunately I wish to copy/restore those files in some pre/post install tasks of a WiX installer, so I was hoping there might be a way to achieve this by stopping a Windows service, rather than having to shell out to kill all instances of sqlservr.exe, as that poses some problems:

        - Killing all the sqlservr.exe instances may have undesirable consequences for users with other SQL Server instances on their machines.
        - I can't restart those instances easily.
        - It introduces additional complexities into the installer.

    Does anyone have any further thoughts on how to shut down instances of SQL Server associated with a specific user instance?
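
    One possible way to stop a user instance without killing processes, assuming SQL Server 2005 SP2 or later: the parent instance exposes its child (user) instances through the sys.dm_os_child_instances DMV, and you can connect to a child's named pipe and issue SHUTDOWN. This is a hedged sketch, not something from the question:

        -- Run against the parent .\SQLExpress instance to find the pipe
        -- for each user instance:
        SELECT owning_principal_name, heart_beat, instance_pipe_name
        FROM sys.dm_os_child_instances;

        -- Then connect to that pipe (e.g. sqlcmd -S np:\\.\pipe\...\tsql\query)
        -- and stop the user instance cleanly:
        SHUTDOWN;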


  • rails named_scope ignores eager loading

    - by Craig
    Two models (Rails 2.3.8):

        - User: username and disabled properties; User has_one :profile
        - Profile: full_name and hidden properties

    I am trying to create a named_scope that eliminates the disabled=1 and hidden=1 User-Profiles. The User model is usually used in conjunction with the Profile model, so I attempt to eager-load the Profile model (:include => :profile). I created a named_scope on the User model called 'visible':

        named_scope :visible, {
          :joins => "INNER JOIN profiles ON users.id=profiles.user_id",
          :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false]
        }

    I've noticed that when I use the named_scope in a query, the eager-loading instruction is ignored.

    Variation 1 - User model only:

        # UserController
        @users = User.find(:all)

        # User's index view
        <% for user in @users %>
          <p><%= user.username %></p>
        <% end %>

        # generates a single query:
        SELECT * FROM `users`

    Variation 2 - use Profile model in view; lazy-load Profile model:

        # UserController
        @users = User.find(:all)

        # User's index view
        <% for user in @users %>
          <p><%= user.username %></p>
          <p><%= user.profile.full_name %></p>
        <% end %>

        # generates multiple queries:
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SHOW FIELDS FROM `profiles`
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 5) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 6) ORDER BY full_name ASC LIMIT 1

    Variation 3 - eager-load Profile model:

        # UserController
        @users = User.find(:all, :include => :profile)

        # view: no changes

        # two queries:
        SELECT * FROM `users`
        SELECT `profiles`.* FROM `profiles` WHERE (`profiles`.user_id IN (1,2,3,4,5,6))

    Variation 4 - use named_scope, including the eager-loading instruction:

        # UserController
        @users = User.visible(:include => :profile)

        # view: no changes

        # generates multiple queries:
        SELECT `users`.* FROM `users` INNER JOIN profiles ON users.id=profiles.user_id WHERE (users.disabled = 0 AND profiles.hidden = 0)
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 1) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 2) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 3) ORDER BY full_name ASC LIMIT 1
        SELECT * FROM `profiles` WHERE (`profiles`.user_id = 4) ORDER BY full_name ASC LIMIT 1

    Variation 4 does return the correct number of records, but also appears to be ignoring the eager-loading instruction. Is this an issue with cross-model named scopes? Perhaps I'm not using it correctly. Is this sort of situation handled better by Rails 3?
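
    A hedged sketch of one workaround in Rails 2.3: if the scope uses :include rather than a raw :joins string, ActiveRecord both joins and populates the profiles in one pass, because conditions that reference the included table force a single JOIN-based eager load.

        # Sketch: eager-load inside the scope itself instead of passing
        # :include at the call site.
        named_scope :visible, {
          :include    => :profile,
          :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false]
        }

        # usage
        @users = User.visible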


  • NHibernate GenericADO Exception

    - by Ris90
    Hi, I'm trying to make a simple many-to-one association using NHibernate. I have a class Recruit with this mapping:

        <class name="Recruit" table="Recruits">
          <id name="ID">
            <generator class="native"/>
          </id>
          <property name="Lastname" column="lastname"/>
          <property name="Name" column="name"/>
          <property name="MedicalReport" column="medicalReport"/>
          <property name="DateOfBirth" column="dateOfBirth" type="Date"/>
          <many-to-one name="AssignedOnRecruitmentOffice"
                       column="assignedOnRecruitmentOffice"
                       class="RecruitmentOffice"/>
        </class>

    which is connected many-to-one to RecruitmentOffices:

        <class name="RecruitmentOffice" table="RecruitmentOffices">
          <id name="ID" column="ID">
            <generator class="native"/>
          </id>
          <property name="Chief" column="chief"/>
          <property name="Name" column="name"/>
          <property name="Address" column="address"/>
          <set name="Recruits" cascade="save-update" inverse="true" lazy="true">
            <key>
              <column name="AssignedOnRecruitmentOffice"/>
            </key>
            <one-to-many class="Recruit"/>
          </set>
        </class>

    And I created a repository class with an Insert method:

        public void Insert(Recruit recruit)
        {
            using (ITransaction transaction = session.BeginTransaction())
            {
                session.Save(recruit);
                transaction.Commit();
            }
        }

    Then I try to save a new recruit to the database:

        Recruit test = new Recruit();
        RecruitmentOffice office = new RecruitmentOffice();
        office.Name = "test";
        office.Chief = "test";
        test.AssignedOnRecruitmentOffice = office;
        test.Name = "test";
        test.DateOfBirth = DateTime.Now;
        RecruitRepository testing = new RecruitRepository();
        testing.Insert(test);

    And I get this error on session.Save:

        GenericADOException
        could not insert: [OSiUBD.Models.DAO.Recruit]
        [SQL: INSERT INTO Recruits (lastname, name, medicalReport, dateOfBirth,
         assignedOnRecruitmentOffice) VALUES (?, ?, ?, ?, ?); select SCOPE_IDENTITY()]
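
    A hedged guess at the cause, since only the Recruit INSERT appears in the SQL: the RecruitmentOffice is still transient when the Recruit is saved, so the foreign-key column gets an invalid value. Cascading the save across the many-to-one would make NHibernate insert the office first. A sketch of the mapping change:

        <many-to-one name="AssignedOnRecruitmentOffice"
                     column="assignedOnRecruitmentOffice"
                     class="RecruitmentOffice"
                     cascade="save-update"/>

    Alternatively, save the office explicitly before calling testing.Insert(test).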


  • Is this scatter-brained workflow realizable in Git?

    - by Luke Maurer
    This is what I'd like my workflow to look like at a conceptual level:

        1. I hack on my new feature for a while.
        2. I notice a typo in a comment, so I change it. Since the typo is completely unrelated to anything else, I put that change in a pile of comment fixes.
        3. I keep working on the code.
        4. I realize I need to flesh out a few utility functions, so I do, and I put that change in its own pile.
        5. Steps 2, 3, and 4 each repeat throughout the day.
        6. I finish the new feature and put the changes for that feature in a pile.
        7. I push nice patches upstream: one with the new feature, a few for the other tweaks, and one with a bunch of comment fixes if enough have accumulated.

    Since I'm both lazy and a perfectionist, I want to be able to do some things out of order: I might correct a typo but forget to put it in the comment-fix pile; when I prepare the upstream patches (I'm using git-svn, so I need to be pretty deliberate about these), I'll then pull out the comment fixes at that point. I might forget to separate things altogether until the very end. But I might /also/ have committed some of the piles along the way (sorry, the metaphor is breaking down …).

    This is all rather like just using Eclipse changesets with SVN, only I can have different changes to the same file in different piles (having to disentangle changes into different commits is what motivated me to move to git-svn, in fact …), and with Git I can have my full discombobulated change history, experimental branches and all, but still make a nice, neat patch.

    I've just recently started with Git after having wanted to for a good while, and I'm quite happy so far. The biggest way in which the above workflow doesn't really map onto Git, though, is that a “pile” can't really be just a local branch, since the working tree only ever reflects the state of a single branch. Or maybe the Git index is a “pile,” and what I want is to have more than one somehow (effectively). I can think of a few ways to approximate what I want (maybe creative use of stash? Intricate stash-checkout-merge dances?), but my grasp on Git isn't solid enough to be sure of how best to put all the pieces together. It's said that Git is more a toolkit than a VCS, so I guess the question comes down to: How do I build this thing with these tools?
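
    A hedged sketch of how the piles map onto stock Git commands: the interactive flag on add picks hunks one at a time, so unrelated fixes in the same file can be staged and committed separately, and stash shelves whatever is left.

        # Stage only the comment-fix hunks, even if they share files with
        # feature work, then commit them as their own patch:
        git add -p
        git commit -m "Fix typos in comments"

        # Shelve the remaining uncommitted work while testing that commit:
        git stash
        # ...and bring it back to keep hacking:
        git stash pop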


  • JSF: how to update the list after delete an item of that list

    - by Harry Pham
    It will take a moment for me to explain this, so please stay with me. I have a table COMMENT that has a OneToMany relationship with itself.

        @Entity
        public class Comment {
            ...
            @ManyToOne(optional=true, fetch=FetchType.LAZY)
            @JoinColumn(name="REPLYTO_ID")
            private Comment replyTo;

            @OneToMany(mappedBy="replyTo", cascade=CascadeType.ALL)
            private List<Comment> replies = new ArrayList<Comment>();

            public void addReply(Comment reply) {
                replies.add(reply);
                reply.setReplyTo(this);
            }

            public void removeReply(Comment reply) {
                replies.remove(reply);
            }
        }

    So each comment can have a list of replies, which are also of type Comment. It is easy for me to delete an original comment and get the updated list back. All I need to do after the delete is this:

        allComments = myEJB.getAllComments(); // queries the db and returns the updated list

    But I am having a problem when trying to delete replies and get the updated list back. Here is how I delete the replies. Inside my managed bean I have:

        // Before this method is invoked, originalFeed and deletedFeed are set.
        // The original comments are displayed inside a p:dataTable X, and the
        // replies inside a p:dataTable Y which is nested in X. So when I click
        // the delete button I know which comment I want to delete and, if it
        // is a reply, which comment is its original post.
        public void deleteFeed() {
            if (this.deletedFeed != null) {
                scholarEJB.deleteFeeds(this.deletedFeed);
                if (this.originalFeed != null) {
                    // Since originalFeed is not null, it is a reply
                    // that I want to delete
                    scholarEJB.removeReply(this.originalFeed, this.deletedFeed);
                }
                feeds = scholarEJB.findAllFeed();
            }
        }

    Then inside my EJB scholarEJB I have:

        public void removeReply(NewsFeed comment, NewsFeed reply) {
            comment = em.merge(comment);
            comment.removeReply(reply);
            em.persist(comment);
        }

        public void deleteFeeds(NewsFeed e) {
            e = em.find(NewsFeed.class, e.getId());
            em.remove(e);
        }

    When I get out, the entity (the reply) is correctly removed from the database, but the reference to that reply is still there inside the feeds list. Only when I log out and log back in does the reply disappear. Please help.
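
    A hedged guess at the cause, with a sketch: deleteFeeds removes the row, but the parent comment merged afterwards still holds the reply in its replies collection (and cascade=CascadeType.ALL can even re-persist it). Doing the collection cleanup first, on managed instances, keeps the in-memory graph and the database consistent. Names follow the question; this is a sketch, not a tested fix.

        public void removeReply(NewsFeed comment, NewsFeed reply) {
            comment = em.merge(comment);   // managed parent
            reply = em.merge(reply);       // managed reply
            comment.removeReply(reply);    // unhook from the collection
            reply.setReplyTo(null);        // break the back-reference
            em.remove(reply);              // now delete the reply itself
        }

    In the bean, call only removeReply for replies (instead of deleteFeeds followed by removeReply) and then reload feeds as before.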


  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide (using either display: none; or visibility: hidden;, depending on the situation) elements while waiting for a background image or a CSS file to load.

    Consider this example from Last.FM. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel case. They let it load when it loads, so depending on your internet speed you may temporarily see the art image by itself, without the overlay. In this case the album art looks fine without the jewel-case effect. But in similar situations I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and caboodle has loaded. But this is often a pain to write out, and may force the user to wait a pretty long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

        - Use some inline CSS so that as certain parts of the DOM load and render, they will immediately have the correct size/position/etc.
        - Immediately render the navigation part of the site, so that if the user wanted to use the current page purely to get somewhere else, they don't have to wait for the rest to load.
        - Load pixelated images first as placeholders for layout while lazy-loading higher-quality images as replacements.
        - Something quirky like using a cute animated GIF to distract the user during a "loading..." phase.
        - Show useful information as a reference while loading the full UI. (Something akin to Gmail's Inbox Preview, etc.)

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you guys may have seen out in the wild.
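
    One small technique that fits the overlay example, sketched below with plain DOM APIs: preload the decorative image with an Image object and only reveal the decorated element once the browser has it, so the user never sees the half-styled state. The element id and image path are hypothetical.

        // Keep the element hidden (visibility: hidden) in CSS, then:
        var overlay = new Image();
        overlay.onload = function () {
            document.getElementById('albumArt').style.visibility = 'visible';
        };
        overlay.src = '/images/jewel-case-overlay.png';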


  • RenderTargetBitmap + Resource'd VisualBrush = incomplete image

    - by Will
    I've found a new twist on the "Visual to RenderTargetBitmap" question! I'm rendering previews of WPF stuff for a designer. That means I need to take a WPF visual and render it to a bitmap without that visual ever being displayed. I've got a nice little method to do it; like to see it? Here it goes:

        private static BitmapSource CreateBitmapSource(FrameworkElement visual)
        {
            Border b = new Border { Width = visual.Width, Height = visual.Height };
            b.BorderBrush = Brushes.Black;
            b.BorderThickness = new Thickness(1);
            b.Background = Brushes.White;
            b.Child = visual;

            b.Measure(new Size(b.Width, b.Height));
            b.Arrange(new Rect(b.DesiredSize));

            RenderTargetBitmap rtb = new RenderTargetBitmap(
                (int)b.ActualWidth,
                (int)b.ActualHeight,
                96,
                96,
                PixelFormats.Pbgra32);

            // intermediate step here to ensure any VisualBrushes are rendered properly
            DrawingVisual dv = new DrawingVisual();
            using (var dc = dv.RenderOpen())
            {
                var vb = new VisualBrush(b);
                dc.DrawRectangle(vb, null, new Rect(new Point(), b.DesiredSize));
            }

            rtb.Render(dv);
            return rtb;
        }

    Works fine, except for one leeetle thing... if my FrameworkElement has a VisualBrush, that brush doesn't end up in the final rendered bitmap. Something like this:

        <UserControl.Resources>
          <VisualBrush x:Key="LOLgo">
            <VisualBrush.Visual>
              <!-- blah blah -->

        <Grid Background="{StaticResource LOLgo}">
          <!-- yadda yadda -->

    Everything else renders to the bitmap, but that VisualBrush just won't show. The obvious Google solutions have been attempted and have failed. Even the ones that specifically mention VisualBrushes missing from RTB'd bitmaps. I have a sneaking suspicion this might be caused by the fact that it's a resource, and that lazy resource isn't being inlined. So a possible fix would be to, somehow(???), force resolution of all static resource references before rendering. But I have absolutely no idea how to do that. Anybody have a fix for this?
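
    One workaround seen in the wild, sketched here and hedged (it assumes the visual was created on a thread with a dispatcher): flush the dispatcher queue down to render priority before calling Render, so pending layout and render work, including VisualBrush realization, completes first.

        // After Arrange and before rtb.Render(dv):
        b.UpdateLayout();
        b.Dispatcher.Invoke(
            System.Windows.Threading.DispatcherPriority.Render,
            new Action(() => { }));   // no-op: just drains the queue to Render priority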


  • JavaFX - question regarding binding button's disabled state

    - by jamiebarrow
    I'm trying to create a dummy application that maintains a list of tasks. For now, all I'm trying to do is add to the list. I enter a task name in a text box, click the add-task button, and expect the list to be updated with the new item and the task name input to be cleared. I only want to be able to add tasks if the task name is not empty.

    The code below is my implementation, but I have a question regarding the binding. I'm binding the textbox's text variable to a string in my view model, and the button's disable variable to a boolean in my view model. I have a trigger to update the disabled state when the task name changes. When the binding of the task name happens, the boolean is updated accordingly, but the button still appears disabled. Then, when I mouse over the button, it becomes enabled. I believe this is due to JavaFX 1.3's binding being lazy: it only updates the bound variable when it is read. Also, when I've added the task, I clear the task name in the model, but the textbox's text doesn't change, even though I'm using bind with inverse.

    Is there a way to make the textbox's text and the button's disabled state update automatically via the binding, as I was expecting?

    Thanks, James

    AddTaskViewModel.fx:

        package jamiebarrow;

        import java.lang.System;

        public class AddTaskViewModel {

            function logChange(prop:String, oldValue, newValue):Void {
                println("{System.currentTimeMillis()} : {prop} [{oldValue}] to [{newValue}] ");
            }

            public var newTaskName: String on replace old {
                logChange("newTaskName", old, newTaskName);
                isAddTaskDisabled = (newTaskName == null or newTaskName.trim().length() == 0);
            };

            public var isAddTaskDisabled: Boolean on replace old {
                logChange("isAddTaskDisabled", old, isAddTaskDisabled);
            };

            public var taskItems = [] on replace old {
                logChange("taskItems", old, taskItems);
            };

            public function addTask() {
                insert newTaskName into taskItems;
                newTaskName = "";
            }
        }

    Main.fx:

        package jamiebarrow;

        import javafx.scene.control.Button;
        import javafx.scene.control.TextBox;
        import javafx.scene.control.ListView;
        import javafx.scene.Scene;
        import javafx.scene.layout.VBox;
        import javafx.stage.Stage;
        import javafx.scene.layout.HBox;

        def viewModel = AddTaskViewModel{};

        var txtName: TextBox = TextBox {
            text: bind viewModel.newTaskName with inverse
            onKeyTyped: onKeyTyped
        };

        function onKeyTyped(event): Void {
            txtName.commit();  // ensures model is updated
            cmdAddTask.disable = viewModel.isAddTaskDisabled;  // the binding only occurs lazily, so this is needed
        }

        var cmdAddTask = Button {
            text: "Add"
            disable: bind viewModel.isAddTaskDisabled with inverse
            action: onAddTask
        };

        function onAddTask(): Void {
            viewModel.addTask();
        }

        var lstTasks = ListView {
            items: bind viewModel.taskItems with inverse
        };

        Stage {
            scene: Scene {
                content: [
                    VBox {
                        content: [
                            HBox { content: [ txtName, cmdAddTask ] },
                            lstTasks
                        ]
                    }
                ]
            }
        }


  • Should an event-sourced aggregate root have access to the event sourcing repository?

    - by JD Courtoy
    I'm working on an event-sourced CQRS implementation, using DDD in the application/domain layer. I have an object model that looks like this:

        public class Person : AggregateRootBase
        {
            private Guid? _bookingId;

            public Person(Identification identification)
            {
                Apply(new PersonCreatedEvent(identification));
            }

            public Booking CreateBooking()
            {
                // Enforce Person invariants
                var booking = new Booking();
                Apply(new PersonBookedEvent(booking.Id));
                return booking;
            }

            public void Release()
            {
                // Enforce Person invariants
                // Should we load the booking here from the aggregate repository?
                // We need to ensure that the booking is released as well.
                var booking = BookingRepository.Load(_bookingId);
                booking.Release();
                Apply(new PersonReleasedEvent(_bookingId));
            }

            [EventHandler]
            public void Handle(PersonBookedEvent @event) { _bookingId = @event.BookingId; }

            [EventHandler]
            public void Handle(PersonReleasedEvent @event) { _bookingId = null; }
        }

        public class Booking : AggregateRootBase
        {
            private DateTime _bookingDate;
            private DateTime? _releaseDate;

            public Booking()
            {
                // Enforce invariants
                Apply(new BookingCreatedEvent());
            }

            public void Release()
            {
                // Enforce invariants
                Apply(new BookingReleasedEvent());
            }

            [EventHandler]
            public void Handle(BookingCreatedEvent @event) { _bookingDate = SystemTime.Now(); }

            [EventHandler]
            public void Handle(BookingReleasedEvent @event) { _releaseDate = SystemTime.Now(); }

            // Some other business activities unrelated to a person
        }

    With my understanding of DDD so far, Person and Booking are separate aggregate roots for two reasons:

        - There are times when business components will pull Booking objects separately from the database (e.g., a person that has been released has a previous booking modified due to incorrect information).
        - There should not be locking contention between Person and Booking whenever a Booking needs to be updated.

    One other business requirement is that a Booking can never occur for a Person more than once at a time. Because of this, I'm concerned about querying the read-side database, as there could potentially be some inconsistency there (due to using CQRS and having an eventually consistent read database). Should the aggregate roots be allowed to query the event-sourced backing store by id for objects (lazy-loading them as needed)? Are there any other avenues of implementation that would make more sense?
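
    One alternative worth sketching: keep repository access out of the aggregates and let an application service coordinate the two roots, so each aggregate only applies its own events. This is a hedged sketch; the service and repository names are hypothetical, and it assumes Person exposes its BookingId.

        // Application service: loads both aggregates from the event store,
        // tells each to release, and saves the resulting events.
        public class ReleasePersonService
        {
            private readonly IRepository<Person> _persons;
            private readonly IRepository<Booking> _bookings;

            public ReleasePersonService(IRepository<Person> persons,
                                        IRepository<Booking> bookings)
            {
                _persons = persons;
                _bookings = bookings;
            }

            public void Release(Guid personId)
            {
                var person = _persons.GetById(personId);
                var booking = _bookings.GetById(person.BookingId);

                booking.Release();   // BookingReleasedEvent
                person.Release();    // PersonReleasedEvent, no repository inside

                _bookings.Save(booking);
                _persons.Save(person);
            }
        }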


  • XmlHttpRequest bug?

    - by valdo
    Hello all. I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I needed some library/object/function that'll do the job.

    Critical requirement: the download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" downloading the file, plus the download must be able to be aborted at any time without any barbaric side effects (such as an internal call to TerminateThread).

    Nice-to-have requirements:

        - Should be able to download the file "into memory". Meaning: read the contents of the file as they arrive, not necessarily save them into some file-system file.
        - It'd be nice to have some convenient Win32 progress-notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.

    I've chosen the XmlHttpRequest COM object to do the work. It seemed to work well enough, plus it supports asynchronous mode. However, I noticed that after some period it just stops working. That is, after several successful file downloads it stops downloading anything. I periodically poll it to get its status; it reports "in progress", but nothing actually happens, and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads, the effect is the same. The object reports "in progress", whereas it doesn't even try to connect to the server (according to network sniffers and the system TCP state). The only way to make this object work again is to restart the process.

    This makes me suspect that there's a sort of a bug (sorry, I meant undocumented feature) in the object. Also it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created. It's probably some global state of the DLL that implements this object.

    Does anyone know something about this? Is this a known bug? I'm pretty sure there's no chance that I have another bug in my code because of which it merely seems to me that the bug is in XmlHttpRequest. I've done enough tests and spent enough time with the debugger to conclude, without reasonable doubt, that the object just stops working.

    BTW, while the object should work, I do all the waiting via MsgWaitXXXX API calls, so if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.


  • JSON or YAML encoding in GWT/Java on both client and server

    - by KennethJ
    I'm looking for a super simple JSON or YAML library (not particularly bothered which one) written in Java that can be used both in GWT on the client and in its original Java form on the server.

    What I'm trying to do is this: I have my models, which are shared between the client and the server, and these are the primary source of data interchange. I want to design the web service in between to be as simple as possible, and I decided to take the RESTful approach.

    My problem is that I know our application will grow substantially in the future, and writing all the getters, setters, serialization, factories, etc. by hand fills me with absolute dread. So in order to avoid it, I decided to implement annotations to keep track of attributes on the models. The reason I can't just serialize everything directly, using GWT's own serializer or one that works through reflection, is that we need a certain amount of logic going on in the serialization process, i.e. whether references to other models get serialized during the serialization of the original model, or whether just an ID is passed, and generally simple things like that. I've then written an annotation processor to preprocess my shared models and generate an implementing class with all the getters, setters, serialization, lazy-loading, etc.

    To make a long story short, I need some type of simple YAML or JSON library that allows me to encode and decode manually, so I can generate this code through my annotation processor. I have had a look around the interwebs, but every single one I ran into supported some reflection, which, while all fine and dandy, makes it pretty much useless for GWT. And in the case of GWT's own JSON library, it uses JSNI for speed, making it useless server side.

    One solution I did think about involved writing two sets of serialization methods on the models, one for the client and one for the server, but I'd rather not do that. Also, I'm pretty new to GWT, and even though I have done a lot of Java, that was back in the 1.2 days, so it's a bit rusty.

    So if you think I'm going about this problem completely the wrong way, I'm open to suggestions.
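
    Since the generated code only needs to emit and parse JSON text manually, even a tiny hand-rolled encoder can work on both sides, because it sticks to plain java.lang classes that GWT emulates. A hedged sketch of the string-escaping half:

        // Minimal JSON string encoder usable from both GWT client code and
        // server code (no reflection, no JSNI). Handles the common escapes;
        // a full version would also escape control characters in \u form.
        public static String jsonString(String s) {
            StringBuilder sb = new StringBuilder("\"");
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                switch (c) {
                    case '"':  sb.append("\\\""); break;
                    case '\\': sb.append("\\\\"); break;
                    case '\n': sb.append("\\n");  break;
                    case '\r': sb.append("\\r");  break;
                    case '\t': sb.append("\\t");  break;
                    default:   sb.append(c);
                }
            }
            return sb.append('"').toString();
        }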


  • Why does Scala apply thunks automatically, sometimes?

    - by Anonymouse
    At just after 2:40 in ShadowofCatron's Scala Tutorial 3 video, it's pointed out that the parentheses following the name of a thunk are optional. "Buh?" said my functional-programming brain, since the value of a function and the value it evaluates to when applied are completely different things. So I wrote the following to try this out. My thought process is described in the comments.

        object Main {
          var counter: Int = 10

          def f(): Int = { counter = counter + 1; counter }

          def runThunk(t: () => Int): Int = { t() }

          def main(args: Array[String]): Unit = {
            val a = f()   // I expect this to mean "apply f to no args"
            println(a)    // and apparently it does

            val b = f     // I expect this to mean "the value f", a function value
            println(b)    // but it's the value it evaluates to when applied to no args
            println(b)    // and the evaluation happens immediately, not in the call

            runThunk(b)   // This is an error: it's not println doing something funny
            runThunk(f)   // Not an error: seems to be val doing something funny
          }
        }

    To be clear about the problem, this Scheme program (and the console dump which follows) shows what I expected the Scala program to do.

        (define counter (list 10))

        (define f
          (lambda ()
            (set-car! counter (+ (car counter) 1))
            (car counter)))

        (define runThunk
          (lambda (t) (t)))

        (define main
          (lambda args
            (let ((a (f))
                  (b f))
              (display a) (newline)
              (display b) (newline)
              (display b) (newline)
              (runThunk b)
              (runThunk f))))

        > (main)
        11
        #<procedure:f>
        #<procedure:f>
        13

    After coming to this site to ask about this, I came across this answer, which told me how to fix the above Scala program:

        val b = f _   // Hey Scala, I mean f, not f()

    But the underscore 'hint' is only needed sometimes. When I call runThunk(f), no hint is required. But when I 'alias' f to b with a val and then apply it, it doesn't work: the evaluation happens in the val; and even lazy val works this way, so it's not the point of evaluation causing this behaviour.

    That all leaves me with the question: Why does Scala sometimes automatically apply thunks when evaluating them? Is it, as I suspect, type inference? And if so, shouldn't a type system stay out of the language's semantics? Is this a good idea? Do Scala programmers apply thunks rather than refer to their values so much more often that making the parens optional is better overall?

    Examples written using Scala 2.8.0RC3 and DrScheme 4.0.1 in R5RS.
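
    The short version, with a sketch: f here is a method, not a function value, and Scala invokes a no-arg method as soon as its name is used in an expression, unless the context expects a function type. Eta-expansion (the trailing underscore, or an expected function type) is what turns the method into a () => Int value.

        val b = f _            // explicit eta-expansion: b is () => Int
        val c: () => Int = f   // an expected function type also triggers eta-expansion
        println(b())           // parentheses now mean "apply": counter increments here
        runThunk(c)            // works: c really is a thunk

    That is also why runThunk(f) compiles without the underscore: the parameter type () => Int supplies the expected function type.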


  • Memory allocation and release for UIImage in iPhone?

    - by rkbang
    Hello all, I am using the following code on the iPhone to get a smaller, cropped image:

        - (UIImage*) getSmallImage:(UIImage*)img
        {
            CGSize size = img.size;
            CGFloat ratio = 0;
            if (size.width < size.height) {
                ratio = 36 / size.width;
            } else {
                ratio = 36 / size.height;
            }
            CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);

            UIGraphicsBeginImageContext(rect.size);
            [img drawInRect:rect];
            UIImage *tempImg = [UIGraphicsGetImageFromCurrentImageContext() retain];
            UIGraphicsEndImageContext();
            return [tempImg autorelease];
        }

        - (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
        {
            // create a context to do our clipping in
            UIGraphicsBeginImageContext(rect.size);
            CGContextRef currentContext = UIGraphicsGetCurrentContext();

            // create a rect with the size we want to crop the image to;
            // the X and Y here are zero so we start at the beginning of our
            // newly created context
            CGFloat X = (imageToCrop.size.width - rect.size.width) / 2;
            CGFloat Y = (imageToCrop.size.height - rect.size.height) / 2;
            CGRect clippedRect = CGRectMake(X, Y, rect.size.width, rect.size.height);
            //CGContextClipToRect(currentContext, clippedRect);

            // create a rect equivalent to the full size of the image, offset
            // the rect by the X and Y we want to start the crop from in order
            // to cut off anything before them
            CGRect drawRect = CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height);
            CGContextTranslateCTM(currentContext, 0.0, drawRect.size.height);
            CGContextScaleCTM(currentContext, 1.0, -1.0);

            // draw the image to our clipped context using our offset rect
            //CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
            CGImageRef tmp = CGImageCreateWithImageInRect(imageToCrop.CGImage, clippedRect);

            // pull the image from our cropped context
            UIImage *cropped = [UIImage imageWithCGImage:tmp]; //UIGraphicsGetImageFromCurrentImageContext();
            CGImageRelease(tmp);

            // pop the context to get back to the default
            UIGraphicsEndImageContext();

            // Note: this is autoreleased
            return cropped;
        }

    I am using the following line of code in cellForRowAtIndexPath to update the image of the cell:

        cell.img.image = [self imageByCropping:[self getSmallImage:[UIImage imageNamed:@"goal_image.png"]]
                                        toRect:CGRectMake(0, 0, 36, 36)];

    Now when I push this table view and then pop it from the navigation controller, I see memory climb. I see no leaks, but memory keeps climbing. Please note that the image changes for each row, and I create the controller lazily, that is, I alloc it whenever I need it. I have seen many people on the internet facing the same issue, but very rarely good solutions. I have multiple views using the same approach, and I see memory rise to almost 4 MB within 20-25 view transitions. What is a good solution to resolve this issue? Thanks.
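
    Two things worth checking, with a hedged sketch: imageNamed: caches decoded images for the life of the app, which shows up as memory growth that isn't a leak, and per-row image work creates autoreleased temporaries that only drain later. Using the non-caching initializer and an explicit pool around the row work may flatten the curve; the file name here is taken from the question, the rest is an assumption.

        // Avoid the system image cache for per-row artwork:
        NSString *path = [[NSBundle mainBundle] pathForResource:@"goal_image"
                                                         ofType:@"png"];
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        UIImage *src = [UIImage imageWithContentsOfFile:path];
        // the property assignment retains the final image before the pool drains
        cell.img.image = [self imageByCropping:[self getSmallImage:src]
                                        toRect:CGRectMake(0, 0, 36, 36)];
        [pool drain];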


  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community,

    We are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing handwritten SQL queries to the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in this case a different SQL statement is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually.

    The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL.

    The problems we are facing now that make us think of an ORM are:

        - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
        - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
        - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so it might be that now there's one that offers acceptable performance. As I see this issue, we need to address these points:

        - Find some way to reuse at least some of the written SQL to express mappings.
        - Have the possibility to issue native SQL queries without the necessity to manually decompose their results (i.e. avoid manual rs.getInt(42) calls, as they are very sensitive to schema changes).
        - Add a non-intrusive caching layer.
        - Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?


  • [N]Hibernate: view-like fetching properties of associated class

    - by chiccodoro
    (Felt quite helpless in formulating an appropriate title...)

    In my C# app I display a list of "A" objects, along with some properties of their associated "B" objects and properties of B's associated "C" objects:

        A.Name   B.Name   B.SomeValue   C.Name
        Foo      Bar      123           HelloWorld
        Bar      Hello    432           World
        ...

    To clarify: A has an FK to B, B has an FK to C (such as, e.g., BankAccount - Person - Company). I have tried two approaches to load these properties from the database (using NHibernate): a fast approach and a clean approach. My eventual question is how to do a fast and clean approach.

    Fast approach:

        - Define a view in the database which joins A, B, C and provides all these fields.
        - In the A class, define the properties "BName", "BSomeValue", "CName".
        - Define an NHibernate mapping between A and the view, whereby the needed B and C properties are mapped with update="false" insert="false" and actually stem from the B and C tables, but NHibernate is not aware of that since it uses the view.

    This way, the listing only loads one object per "A" record, which is quite fast. If the code tries to access the actual associated property, "A.B", I issue another HQL query to get B, set the property, and update the faked BName and BSomeValue properties as well.

    Clean approach:

        - There is no view. Class A is mapped to table A, B to B, C to C.
        - When loading the list of A, I do a double left-join-fetch to get B and C as well:

              from A a left join fetch a.B left join fetch a.B.C

        - B.Name, B.SomeValue and C.Name are accessed through the eagerly loaded associations.

    The disadvantage of this approach is that it is slower and takes more memory, since it needs to create and map three objects per "A" record: an A, a B, and a C object.

    Fast and clean approach:

    I feel somehow uncomfortable using a database view that hides a join and treating it in NHibernate as if it were a table. So I would like to do something like this:

        - Have no views in the database.
        - Declare the properties "BName", "BSomeValue", "CName" in class "A".
        - Define the mapping for A such that NHibernate fetches A and these properties together using a join SQL query, as a database view would do.
        - The mapping should still allow for defining lazy many-to-one associations for getting A.B.C.

    My questions: Is this possible? Is it [un]artful? Is there a better way?
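
    One possible middle road, sketched and hedged (the column and table names are assumptions): NHibernate property mappings accept a formula attribute containing arbitrary SQL, which is inlined into the SELECT for A as a subquery, so no view is needed and a regular lazy many-to-one to B can coexist with the flattened properties.

        <property name="BName"
                  formula="(select b.Name from B b where b.Id = BId)"/>
        <property name="CName"
                  formula="(select c.Name from C c, B b
                            where b.Id = BId and c.Id = b.CId)"/>

    Formula-mapped properties are read-only, which matches the view-like intent here; whether the correlated subqueries are fast enough compared to the view's join is worth measuring.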


  • Asp.net Mvc - Kigg: Maintain User object in HttpContext.Items between requests.

    - by Pickels
    Hello, first I want to say that I hope this doesn't look like I am lazy, but I have some trouble understanding a piece of code from the following project: http://kigg.codeplex.com/

    I was going through the source code and I noticed something that would be useful for my own little project. In their BaseController they have the following code:

        private static readonly Type CurrentUserKey = typeof(IUser);

        public IUser CurrentUser
        {
            get
            {
                if (!string.IsNullOrEmpty(CurrentUserName))
                {
                    IUser user = HttpContext.Items[CurrentUserKey] as IUser;

                    if (user == null)
                    {
                        user = AccountRepository.FindByClaim(CurrentUserName);

                        if (user != null)
                        {
                            HttpContext.Items[CurrentUserKey] = user;
                        }
                    }

                    return user;
                }

                return null;
            }
        }

    This isn't an exact copy of the code; I adjusted it a little to my needs. This part of the code I still understand: they store their IUser in HttpContext.Items. I guess they do it so that they don't have to call the database each time they need the User object.

    The part that I don't understand is how they maintain this object between requests. If I understand correctly, HttpContext.Items is per-request cache storage. So after some more digging I found the following code:

        internal static IDictionary<UnityPerWebRequestLifetimeManager, object> GetInstances(HttpContextBase httpContext)
        {
            IDictionary<UnityPerWebRequestLifetimeManager, object> instances;

            if (httpContext.Items.Contains(Key))
            {
                instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key];
            }
            else
            {
                lock (httpContext.Items)
                {
                    if (httpContext.Items.Contains(Key))
                    {
                        instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key];
                    }
                    else
                    {
                        instances = new Dictionary<UnityPerWebRequestLifetimeManager, object>();
                        httpContext.Items.Add(Key, instances);
                    }
                }
            }

            return instances;
        }

    This is the part where some magic happens that I don't understand. I think they use Unity to do some dependency injection on each request? In my project I am using Ninject, and I am wondering how I can get the same result. I guess InRequestScope in Ninject is the same as UnityPerWebRequestLifetimeManager? I am also wondering which class/method they are binding to which interface. Since HttpContext.Items gets destroyed each request, how do they prevent losing their user object?

    Anyway, it's kind of a long question, so I am grateful for any push in the right direction.

    Kind regards, Pickels
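
    For what it's worth, the pattern above doesn't actually keep the user alive between requests: HttpContext.Items is emptied at the end of every request, and the first CurrentUser access in the next request simply hits AccountRepository again. The Items entry is only per-request memoization. A hedged sketch of the same idea with Ninject 2's InRequestScope (the binding details are hypothetical, not from Kigg):

        // Resolved at most once per HTTP request; later injections within
        // the same request reuse the cached instance, and the next request
        // triggers a fresh repository lookup.
        Bind<IUser>()
            .ToMethod(ctx => ctx.Kernel.Get<IAccountRepository>()
                                       .FindByClaim(GetCurrentUserName()))
            .InRequestScope();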


  • Hibernate3: Self-Referencing Objects

    - by monojohnny
    Need some help understanding how to do this. I'm going to be running a recursive 'find' on a file system, and I want to keep the information in a single DB table with a self-referencing hierarchical structure.

    This is the table structure I want to populate:

        DirObject Table:
            id       int NOT NULL,
            name     varchar(255) NOT NULL,
            parentid int NOT NULL

    Here is the proposed Java class I want to map (fields only shown):

        public class DirObject {
            int id;
            String name;
            DirObject parent;
            ...
        }

    For the 'root' directory I was going to use parentid=0; real ids will start at 1, and ideally I want Hibernate to autogenerate the ids. Can somebody provide a suggested mapping file for this please? As a secondary question, I thought about writing the Java class like this instead:

        public class DirObject {
            int id;
            String name;
            List<DirObject> subdirs;
        }

    Could I use the same data model for either of these two methods (with a different mapping file, of course)?

    UPDATE: I tried the mapping file suggested below (thanks!), repeated here for reference:

        <hibernate-mapping>
          <class name="my.proj.DirObject" table="category">
            ...
            <set name="subDirs" lazy="true" inverse="true">
              <key column="parentId"/>
              <one-to-many class="my.proj.DirObject"/>
            </set>
            <many-to-one name="parent" class="my.proj.DirObject"
                         column="parentId" cascade="all"/>
          </class>
        </hibernate-mapping>

    ...and altered my Java class to have BOTH 'parent' and 'getSubDirs' (returning a HashSet). This appears to work, thanks. But this is the test code I used to drive it, and I think I'm not doing something right here, because I thought Hibernate would take care of saving the subordinate objects in the set without me having to do this explicitly:

        DirObject dirobject = new DirObject();
        dirobject.setName("/files");
        dirobject.setParent(dirobject);

        DirObject d1, d2;
        d1 = new DirObject();
        d1.setName("subdir1");
        d1.setParent(dirobject);
        d2 = new DirObject();
        d2.setName("subdir2");
        d2.setParent(dirobject);

        HashSet<DirObject> subdirs = new HashSet<DirObject>();
        subdirs.add(d1);
        subdirs.add(d2);
        dirobject.setSubdirs(subdirs);

        session.save(dirobject);
        session.save(d1);
        session.save(d2);
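
    A hedged sketch of why the explicit saves are needed, and how to drop them: cascade is declared on the many-to-one (child-to-parent) side above, but saves only flow along the association they are declared on. For session.save(dirobject) to also persist d1 and d2, the cascade belongs on the collection:

        <!-- Sketch: cascade from parent to children through the set, so
             saving the root saves everything reachable via subDirs. -->
        <set name="subDirs" lazy="true" inverse="true" cascade="save-update">
          <key column="parentId"/>
          <one-to-many class="my.proj.DirObject"/>
        </set>

    With that in place, the two trailing session.save(d1)/session.save(d2) calls should be unnecessary.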


  • How do I call a function name that is stored in a hash in Perl?

    - by Ether
    I'm sure this is covered in the documentation somewhere, but I have been unable to find it... I'm looking for the syntactic sugar that will make it possible to call a method on a class whose name is stored in a hash (as opposed to a simple scalar):

        use strict;
        use warnings;

        package Foo;
        sub foo { print "in foo()\n" }

        package main;
        my %hash = (func => 'foo');
        Foo->$hash{func};

    If I copy $hash{func} into a scalar variable first, then I can call Foo->$func just fine... but what is missing to enable Foo->$hash{func} to work?

    (EDIT: I don't mean to do anything special by calling a method on class Foo; this could just as easily be a blessed object (and in my actual code it is). It was just easier to write up a self-contained example using a class method.)

    EDIT 2: Just for completeness re the comments below, this is what I'm actually doing (this is in a library of Moose attribute sugar, created with Moose::Exporter):

        # adds an accessor to a sibling module
        sub foreignTable {
            my ($meta, $table, %args) = @_;

            my $class = 'MyApp::Dir1::Dir2::' . $table;
            my $dbAccessor = lcfirst $table;

            eval "require $class" or do { die "Can't load $class: $@" };

            $meta->add_attribute(
                $table,
                is => 'ro',
                isa => $class,
                init_arg => undef,   # don't allow in constructor
                lazy => 1,
                predicate => 'has_' . $table,
                default => sub {
                    my $this = shift;
                    $this->debug("in builder for $class");

                    ### here's the line that uses a hash value as the method name
                    my @args = ($args{primaryKey} => $this->${\$args{primaryKey}});

                    push @args, ( _dbObject => $this->_dbObject->$dbAccessor )
                        if $args{fkRelationshipExists};
                    $this->debug("passing these values to $class -> new: @args");
                    $class->new(@args);
                },
            );
        }

    I've replaced the marked line above with this:

        my $pk_accessor = $this->meta->find_attribute_by_name($args{primaryKey})->get_read_method_ref;
        my @args = ($args{primaryKey} => $this->$pk_accessor);

    PS. I've just noticed that this same technique (using the Moose metaclass to look up the coderef rather than assuming its naming convention) cannot also be used for predicates, as Class::MOP::Attribute does not have a similar get_predicate_method_ref accessor. :(
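
    For the record, a sketch of the two forms that do parse: the right-hand side of the method arrow must be a simple scalar (or a block/dereference), not a hash lookup, so either copy the name first or wrap the value in a scalar dereference, as the foreignTable code above already does with $this->${\$args{primaryKey}}:

        # copy into a scalar first
        my $method = $hash{func};
        Foo->$method;

        # or dereference a reference to the hash value in place
        Foo->${ \ $hash{func} };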


  • g++ linker error--typeinfo, but not vtable

    - by James
    I know the standard answer for a linker error about missing typeinfo usually also involves the vtable and some virtual function that I forgot to actually define. I'm fairly certain that's not the situation this time. Here's the error:

        UI.o: In function `boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(boost::shared_ptr<Graphics::Resource::GroupByState> const&, boost::detail::dynamic_cast_tag)':
        UI.cpp:(.text._ZN5boost10shared_ptrIN8Graphics7Widgets9WidgetSetEEC1INS1_8Resource12GroupByStateEEERKNS0_IT_EENS_6detail16dynamic_cast_tagE[boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(boost::shared_ptr<Graphics::Resource::GroupByState> const&, boost::detail::dynamic_cast_tag)]+0x30): undefined reference to `typeinfo for Graphics::Widgets::WidgetSet'

    Running c++filt on the obnoxious mangled name shows that it is actually looking at:

        boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::Resource::GroupByState>(
            boost::shared_ptr<Graphics::Resource::GroupByState> const&,
            boost::detail::dynamic_cast_tag)

    The inheritance hierarchy looks something like:

        class AbstractGroup
        {
            typedef boost::shared_ptr<AbstractGroup> Ptr;
            ...
        };

        class WidgetSet : public AbstractGroup
        {
            typedef boost::shared_ptr<WidgetSet> Ptr;
            ...
        };

        class GroupByState : public AbstractGroup
        {
            ...
        };

    Then there's this:

        class UI : public GroupByState
        {
            ...
            void LoadWidgets( GroupByState::Ptr resource );
        };

    Then the original implementation:

        void UI::LoadWidgets( GroupByState::Ptr resource )
        {
            WidgetSet::Ptr tmp( boost::dynamic_pointer_cast< WidgetSet >(resource) );
            if( tmp )
            {
                ...
            }
        }

    Stupid error on my part (trying to cast to a sibling class with a shared parent), even if the error is kind of cryptic. Changing to this:

        void UI::LoadWidgets( AbstractGroup::Ptr resource )
        {
            WidgetSet::Ptr tmp( boost::dynamic_pointer_cast< WidgetSet >(resource) );
            if( tmp )
            {
                ...
            }
        }

    (which I'm fairly sure is what I actually meant to be doing) left me with a very similar error:

        UI.o: In function `boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::_Drawer::Group>(boost::shared_ptr<Graphics::_Drawer::Group> const&, boost::detail::dynamic_cast_tag)':
        UI.cpp:(.text._ZN5boost10shared_ptrIN8Graphics7Widgets9WidgetSetEEC1INS1_7_Drawer5GroupEEERKNS0_IT_EENS_6detail16dynamic_cast_tagE[boost::shared_ptr<Graphics::Widgets::WidgetSet>::shared_ptr<Graphics::_Drawer::Group>(boost::shared_ptr<Graphics::_Drawer::Group> const&, boost::detail::dynamic_cast_tag)]+0x30): undefined reference to `typeinfo for Graphics::Widgets::WidgetSet'
        collect2: ld returned 1 exit status

    dynamic_cast_tag is just an empty struct in boost/shared_ptr.hpp, so it's just a guess that Boost might have anything at all to do with the error. Passing in a WidgetSet::Ptr totally eliminates the need for a cast, and it builds fine (which is why I think there's more going on than the standard answer for this question). Obviously, I'm trimming away a lot of details that might be important. My next step is to cut it down to the smallest example that fails to build, but I figured I'd try the lazy way out and take a stab on here first. TIA!
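
    A hedged pointer at the usual mechanics, since this error is exactly what a missing key function produces: with GCC, the vtable and typeinfo for a polymorphic class are emitted only in the translation unit that defines the class's first non-inline, non-pure virtual member function. If that definition lives in a .cpp that isn't being compiled or linked (or one virtual is declared but never defined), every dynamic_cast involving the class dies at link time with "undefined reference to typeinfo". A minimal sketch:

        struct AbstractGroup
        {
            virtual ~AbstractGroup();   // key function: first non-inline virtual
        };

        struct WidgetSet : AbstractGroup
        {
            virtual ~WidgetSet();       // declared here...
        };

        // ...this definition must live in a .cpp that is actually linked,
        // otherwise WidgetSet's typeinfo is never emitted:
        WidgetSet::~WidgetSet() {}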


  • [NHibernate and ASP.NET MVC] How can I implement a robust session-per-request pattern in my project,

    - by Guillaume Gervais
    I'm currently building an ASP.NET MVC project with NHibernate as its persistence layer. For now, some functionality has been implemented, but it only uses local NHibernate sessions: each method that accesses the database (read or write) instantiates its own NHibernate session with a "using()" directive.

    The problem is that I want to leverage NHibernate's lazy-loading capabilities to improve the performance of my project. This implies an NHibernate session that stays open per request until the view is rendered. Furthermore, simultaneous requests must be supported (multiple sessions at the same time).

    How can I achieve that as cleanly as possible? I searched the web a little bit and learned about the session-per-request pattern. Most of the implementations I saw used some sort of Http* object (HttpContext, etc.) to store the session. Also, using the Application_BeginRequest/Application_EndRequest functions is complicated, since they get fired for each HTTP request (aspx files, css files, js files, etc.), when I only want to instantiate a session once per request.

    The concern that I have is that I don't want my views or controllers to have access to NHibernate sessions (or, more generally, NHibernate namespaces and code). That means that I do not want to handle sessions at the controller level nor at the view level.

    I have a few options in mind. Which one seems the best?

        - Use interceptors (like in Grails) that get triggered before and after the controller action. These would open and close sessions/transactions. Is this possible in the ASP.NET MVC world?
        - Use the CurrentSessionContext singleton provided by NHibernate in a web context. Using this page as an example, I think this is quite promising, but it still requires filters at the controller level.
        - Use HttpContext.Current.Items to store the request session. This, coupled with a few lines of code in Global.asax.cs, can easily provide me with a session at the request level. However, it means that dependencies will be introduced between NHibernate and my views (HttpContext).

    Thank you very much!
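
    A minimal sketch of the third option, assuming a single ISessionFactory built at startup (the SessionFactory static and the Items key are assumptions): open the session lazily on first use and key it by HttpContext.Items, so concurrent requests never share a session and only requests that actually touch the database pay for one.

        // Global.asax.cs (sketch)
        public static ISession CurrentSession
        {
            get
            {
                var session = HttpContext.Current.Items["nh.session"] as ISession;
                if (session == null)
                {
                    session = SessionFactory.OpenSession();   // lazy: first use only
                    HttpContext.Current.Items["nh.session"] = session;
                }
                return session;
            }
        }

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            var session = HttpContext.Current.Items["nh.session"] as ISession;
            if (session != null)
            {
                session.Dispose();   // flush/transaction handling omitted for brevity
            }
        }

    Because the session is created lazily, Application_EndRequest firing for static files costs nothing: no session was ever opened for them. Repositories can consume CurrentSession behind an interface so controllers and views never see NHibernate types.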


  • Force deletion of slot in boost::signals2

    - by villintehaspam
    Hi! I have found that boost::signals2 uses a sort of lazy deletion of connected slots, which makes it difficult to use connections as something that manages the lifetimes of objects. I am looking for a way to force slots to be deleted directly when disconnected. Any ideas on how to work around the problem by designing my code differently are also appreciated!

    This is my scenario. I have a Command class responsible for doing something that takes time asynchronously, looking something like this (simplified):

        class ActualWorker
        {
        public:
            boost::signals2::signal<void ()> OnWorkComplete;
        };

        class Command : public boost::enable_shared_from_this<Command>
        {
        public:
            ...
            void Execute()
            {
                m_WorkerConnection = m_MyWorker.OnWorkComplete.connect(
                    boost::bind(&Command::Handle_OnWorkComplete, shared_from_this()));
                // launch asynchronous work here and return
            }

            boost::signals2::signal<void ()> OnComplete;

        private:
            void Handle_OnWorkComplete()
            {
                // get a shared_ptr to ourselves to make sure that we live through
                // this function but don't keep ourselves alive if an exception occurs.
                shared_ptr<Command> me = shared_from_this();

                // Disconnect from the signal, ideally deleting the slot object
                m_WorkerConnection.disconnect();
                OnComplete();
                // the shared_ptr now goes out of scope, ideally deleting this
            }

            ActualWorker m_MyWorker;
            boost::signals2::connection m_WorkerConnection;
        };

    The class is invoked about like this:

        ...
        boost::shared_ptr<Command> cmd(new Command);
        cmd->OnComplete.connect( foo );
        cmd->Execute();
        // now go do something else, forget all about the cmd variable, etc.

    The Command class keeps itself alive by getting a shared_ptr to itself, which is bound to the ActualWorker signal using boost::bind. When the worker completes, the handler in Command is invoked. Now, since I would like the Command object to be destroyed, I disconnect from the signal, as can be seen in the code above. The problem is that the actual slot object is not deleted when disconnected; it is only marked as invalid and then deleted at a later time. This in turn appears to depend on the signal firing again, which it doesn't do in my case, leading to the slot never expiring. The boost::bind object thus never goes out of scope, holding a shared_ptr to my object that will never get deleted.

    I can work around this by binding using the this pointer instead of a shared_ptr and then keeping my object alive using a member shared_ptr which I then release in the handler function, but it kind of makes the design feel a bit overcomplicated. Is there a way to force signals2 to delete the slot when disconnecting? Or is there something else I could do to simplify the design? Any comments are appreciated!
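
    For reference, a hedged sketch of the idiom signals2 provides for exactly this lifetime problem: bind against the raw this pointer so the slot itself owns no strong reference, and register shared_from_this() with the slot's track() so the signal holds only a weak_ptr and skips and reaps the slot once the Command dies.

        // Sketch: the slot tracks the Command weakly instead of owning it.
        typedef boost::signals2::signal<void ()> WorkSignal;

        void Command::Execute()
        {
            m_WorkerConnection = m_MyWorker.OnWorkComplete.connect(
                WorkSignal::slot_type(
                    &Command::Handle_OnWorkComplete, this)
                    .track(shared_from_this()));
            // launch asynchronous work here and return
        }

    Combined with the member shared_ptr idea already mentioned in the question (something must own the Command while the work runs), the lazily deleted slot can no longer pin the object, because it never holds a strong reference in the first place.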


  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson's wonderful SPServices project for. If you haven't already seen or used SPServices, please do. It's a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you're not hacking up your List Form pages.

    My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let's say my list is a list of customers and the related list is a list of orders for that customer.

    Your first thought might be: that's easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However, there are a few problems with this idea:

        - You'll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page being displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page for the related view to work.
        - The DataView Web Part doesn't look *exactly* like what the out-of-the-box display form page does. Not a huge problem, and it can be overcome with some CSS style overrides, but still, more work.
        - A DVWP showing a single record doesn't have the same toolbar that you would have using DispForm.aspx. Not a show-stopper, and you can rebuild the toolbar, but it's going to potentially require code, and then there's the security trimming, etc. that you have to get right.
        - DVWPs are not automatically updated if you add a column to the list, like DispForm.aspx is. Work, work, work.

    For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a query string parameter called "ID", which then displays whatever item has that ID number in the list. The trouble is, when you're coming in from an external app via a link, you don't know what that internal ID is (and frankly you shouldn't). I don't like exposing internal SharePoint IDs to the outside world for the same reason I don't do it with database IDs. They're internal, and while it's fine to use them on the site itself, you don't want external links using them. They're volatile and can change (delete one item then re-add it back with the same data and watch any ID references break).

    The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That's great if you have that ability, but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it's Java, but really I'm a lazy geek). However, if you're doing this and have access to call a web service, that would be an option.
    Back to the problem: how do I a) find a SharePoint list item based on some field value other than ID and b) keep it low impact so I can just construct a URL to it? That’s where jQuery and SPServices came to the rescue. After a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!), it turned out to be a simple answer.

    First we need a reference to a) jQuery, b) SPServices, and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knife of Web Parts, onto the DispForm.aspx page and added these lines:

        <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"></script>

    Update these to point to wherever you keep your scripts. I prefer to keep them all in document libraries: I can make changes to them without having to remote into the server (and on a multiple web front end, that’s just a PITA), it provides me with version control of sorts, and it’s quick to add new plugins and scripts.

    Now we can look at our RedirectToID.js script. This invokes the SPServices library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one):

        $(document).ready(function(){

            var queryStringValues = $().SPServices.SPGetQueryString();
            var id = queryStringValues["ID"];

            if(id == "0") {

                var customer = queryStringValues["CustomerNumber"];
                var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>";
                var url = window.location;
                $().SPServices({
                    operation: "GetListItems",
                    listName: "Customers",
                    async: false,
                    CAMLQuery: query,
                    completefunc: function (xData, Status) {
                        $(xData.responseXML).find("[nodeName=z:row]").each(function(){
                            id = $(this).attr("ows_ID");
                            url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                            window.location = url;
                        });
                    }
                });
            }
        });

    What’s happening here?

    - Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library; I had 15 lines of code to do this which are now gone).
    - Line 4: Extract the ID value from the query string.
    - Line 6: If we pass in “0”, it means we’re looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but look up our values when invoked. Why ID at all? DispForm.aspx doesn’t work unless you pass in something, and “0” is a *magic* number that will invoke the page but not look up a value in the database.
    - Lines 8-15: Extract the CustomerNumber query string value, build a CAML query to find it, then call the GetListItems method using SPServices.
    - Line 16: Process the results in our completefunc, iterating over all the rows (there should only be one) to extract the real ID of the item.
    - Lines 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page.

    As you can see, it dynamically creates a CAML query for the call to the web service using the passed-in value. You could even make this generic by taking in different query strings: one for the field name to search on and the other for the value to find. That way it could be used for any field you want; a rough sketch of that generic version appears after the example URL below.
    For example, you could bring up the correct item on the DispForm.aspx page based on customer name with something like this: http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony Use your imagination. Some people would opt for building a custom page with a DVWP, but if you want to leverage all the functionality of DispForm.aspx, this approach might come in handy when you don’t want to rely on internal SharePoint IDs.
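    Here is roughly what that generic version might look like. This is a sketch only, not part of SPServices itself: the FilterId and FilterValue parameter names come from the example URL above, and the list name is still hardcoded to Customers.

        $(document).ready(function(){

            var queryStringValues = $().SPServices.SPGetQueryString();
            var id = queryStringValues["ID"];

            if(id == "0") {

                // Hypothetical generic parameters: the field to filter on and the value to match.
                var filterId = queryStringValues["FilterId"];
                var filterValue = queryStringValues["FilterValue"];
                var query = "<Query><Where><Eq><FieldRef Name='" + filterId + "'/><Value Type='Text'>" + filterValue + "</Value></Eq></Where></Query>";

                $().SPServices({
                    operation: "GetListItems",
                    listName: "Customers", // still assumes the Customers list
                    async: false,
                    CAMLQuery: query,
                    completefunc: function (xData, Status) {
                        $(xData.responseXML).find("[nodeName=z:row]").each(function(){
                            // Redirect to DispForm.aspx using the item's real internal ID.
                            id = $(this).attr("ows_ID");
                            window.location = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                        });
                    }
                });
            }
        });

    With the field name driven by the query string, the same page can be reached by any Text column without further changes to the script.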

    Read the article

  • Agile Awakenings and the Rules of Agile

    - by Robert May
    For those that care, you can read my history of management and technology to understand why I think I’m qualified to talk about this at all.  It’s boring, so feel free to skip it.

    Awakenings

    I first started to play around with the idea of “agile” in 2004 or 2005.  I found a book on the Rational Unified Process that I thought was good and attempted to implement parts of it.  I thought I was agile, but really, I wasn’t.  I still didn’t understand the concept of a team.  I still wanted to tell the team what to do and how to get it done.  I still thought I was smarter than the team.

    After that job, I started work on another project and began helping that team.  The first few months were really rough.  We were implementing Scrum, which was relatively new to everyone on the team, and, quite frankly, I was doing a poor job of it.  I was trying to micro-manage every aspect of the team’s work, and we were all miserable.

    The moment of change came when the senior architect bailed on the project.  His comment to me was: “This isn’t Agile.  Where are the stand-ups?  Where are the stories?”  He was dead on, and I finally woke up.  I finally realized that I was the problem!  I wasn’t trusting the team.  I wasn’t helping the team.  I was being a manager.

    Like many (most?), I was claiming to be Agile and use Scrum, but I wasn’t in fact following the rules of Scrum.  Since then, I’ve done a lot of studying, hands-on practice, coaching of many different teams, and other learning around Scrum, and I have discovered that Scrum has some rules that must be followed for success, even though the process is about continuous improvement.  I’ve been practicing Scrum right for about four years now and have helped multiple teams implement it successfully, so what you’re about to get is based on experience rather than just theory.

    The Rules of Scrum

    In my experience, most companies that claim to be doing Scrum or Agile are actually NOT doing either.  This stems largely from the belief that they can “adopt the rules of Agile that fit their organization.”  Sadly, many of them think that this means they can adopt iterations (sprints) and not much else.  Either that, or they think they can do whatever they want, or whatever they were doing before, and call it Scrum.  This is simply not true.

    Here are some rules that must be followed for you to really be doing Scrum.  I’ll go into detail on each one in future blog posts and update the links here.  My intent is that this will help other teams implementing Scrum to see more success.

    - Agile does not allow you to do whatever you want
    - A Product Owner is required
    - A ScrumMaster is required
    - The team must function as a Team, and QA must be part of the team
    - Support from upper management is required
    - A prioritized product backlog is required
    - A prioritized sprint backlog is required
    - Release planning is required
    - Complete sprint planning is required
    - Showcases are required
    - Velocity must be measured
    - Retrospectives are required
    - Daily stand-ups are required
    - Visibility is absolutely required

    For now, I think that’s enough, although I reserve the right to add more.  If you’re breaking any of these rules, you’re probably not doing Scrum.  There are exceptions to these rules, but until you have practiced Scrum for a while, you don’t know what those exceptions are.

    Breaking the Rules

    Many teams break these rules because they are the ones that expose the most pain.  Scrum is not Advil.  It’s not intended to mask the pain; it’s intended to cure it.
    Let me explain that analogy a bit more.  Recently, my 7-year-old son broke his arm quite severely (see the X-Ray to the right).  That caused him a great deal of pain.  We went first to one doctor, and after viewing the X-Ray, they determined that there was no way they’d cast the arm at their location; it was simply too bad a break for them to deal with.  They did, however, give him some Advil for the pain and put a splint on his arm to stabilize the broken bones.  Within minutes, he was feeling much better.  Had we been stupid, we could have gone home, and he’d have been just as happy as ever... until the pain medication wore off or one of his siblings touched the splint.  Then all of that pain would come right back to the top.  Sure, he could make it go away by taking more Advil and keeping the splint out of the way, but that wasn’t going to fix the problem permanently.

    We ended up in an emergency room with a doctor who could fix his arm.  However, we were warned that the fix was going to be VERY painful, and it was.  Even with heavy sedation (Propofol), my son was in enough pain that he squirmed and wiggled trying to get his arm away from the doctor.  He had to endure this pain in order to have a functional arm.  But the setting wasn’t the end: he had to have several casts, had to have the arm re-broken once since the first setting didn’t take, and finally was given a clean bill of health.

    Agile implementation is much like this story.  Agile was developed as a result of people recognizing that the development methodologies then in place were simply ineffective.  However, the fix to the broken development process that has been festering for many years is not painless.  Many people start Agile thinking that things will be wonderful.  They won’t!  Agile is about visibility, and it often brings great pain to the surface.  It causes all of the missed deadlines, the cowboy coders, the coasters, the micro-managers, the lazy, and all of the other problems that are really part of your development process to become painfully visible to EVERYONE.  Many people don’t like this exposure.  Agile will make the pain better, but not if you remove the cast (the rules above) prematurely and start breaking the rules that expose the most pain.  The healing takes time and is not instant (like Advil).  Figuring out the true source of the pain and fixing it is very valuable to you, your team, and your company.  Remember as you’re doing this that Agile isn’t the source of the pain; it’s really just exposing it.  Find the source.

    My recommendation is that ALL of these rules be followed for a minimum of six months, and preferably for an entire year, before you decide to break any of them.  Get a few good releases under your belt.  Figure out what your velocity is and start firing as a team.  Chances are, after you see Agile really in action, you won’t want to break the rules, because you’ll see their value.

    More Reading

    Jean Tabaka recently published a list of 78 Things I Have Learned in 6 Years of Agile Coaching.  Highly recommended.

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47  | Next Page >