Search Results

Search found 1012 results on 41 pages for 'funky si'.


  • Day 2 - Game Design Documentation

    - by dapostolov
    So yesterday I didn't cut any code for my game but I was able to do a tiny bit of research on the XNA Game Development Technology and the communities out there and do you know what? I feel I'm a bit closer to my goal. The bad news is today I didn't cut code either. However, not all is lost because I wanted to get my ideas on paper and today I just did that. Today, I began to jot down notes about the game and how I felt the visual elements would interact with each other. Unlike my workplace, my personal level of documentation is nothing more than a task list or a mind map of my ideas; it helps me streamline my solutions quite effectively and circumvent the long process of articulating each thought to the n-th degree. I truly dislike documentation (because I have an extremely hard time articulating my thoughts and solutions); however, because I tend to do a really good job with documentation I tend to get stuck writing the buggers. But as a general remark: 'No Developer likes documentation.' For now let's stick with my basic notes and call this post a living document. Here are my notes, fresh from watching the first episode of Merlin's second season! Actually, a quick recommendation to anyone who is reading this (if anyone is): I truly recommend you envelop yourself in the medium or task you're trying to tackle. Be one with the moment and feel it! For instance: Are you writing a fantasy script / game? What would the music of the genre sound like? For me the Conan the Barbarian soundtrack by Basil Poledouris is frackin awesome. There are many other good CDs out there which I listen to (some of which even use medieval instruments), but Conan I keep returning to. It's a creative trigger for me. Ask yourself what would the imagery look like? Time to surf Google for artist renditions of fantasy! What would the game feel like? Start playing some of your favorite games that inspire you; be wary though, have some self control and don't let it absorb your time. Anyhow, onto the documentation... Screens, Scenes, and Sprites. Oh My! (groan...) The first thing that came to mind were the screens. I thought the following would suffice: Menu Screen, Character Customisation Screen, Loading Screen (?), and Battle Ground. The Menu Screen Ok. So, the thought here is when the game loads a huge title is displayed: Wizard Wars. The player is prompted with 3 menu items: 1 Player Game, 2 Player Game, and Exit. Since I'm targeting the PC platform, as a non-networked game to start, I picture myself running my mouse over each menu option and the visual element of the menu item changes, along with a sound to indicate that I am over a current menu item. And as I move my mouse away, it changes back, and possibly an exit mouse sound. Maybe on the screen somewhere is a brazier alit with a magical tome open right beside it, OR, maybe the tome is the menu! I hear the menu music as mellow, not obtrusive or piercing. On a menu item select, a confirmation sound bellows to indicate the player's selection. The Esc key will always return me to the previous screens or desktop. The menu screen must feel...dark, like a really important ritual is about to happen and thus the music should build up. 
1 Player Game -> Customize Character(s) 2 Player Game -> Customize Character(s) Exit -> Back to Windows Notes: So the first thing I pick up here are a couple things: First and foremost, my artistic abilities suck crap, so I may have to hire an artist (now that I've said that, let's get techy). Graphical objects will be positioned within a scene on each screen / window. Menu items will be represented graphically, possibly animated, and have sound / animation effects triggered by user input or a time line. I have an animated scene involving a brazier or fire on a stick. IF I were to move this game to the Xbox, I'd have to track which menu item is currently selected (unless I do a mouse pointer type thing.) A WindowObject has a Scene. A Scene has many GameObjects. A GameObject has a position and a graphic or animation. A MenuObject is a GameObject which has a mouse in, mouse out, and click event which either does something graphically (animation), does something with sound, or moves to another screen.  Character Customisation Screen With either the 1 or 2 player option selected, both selections will come to this screen; a wizard requires a name, powers, and vestments of course! Player one will configure his character first and then player two. I considered a split screen for PC but to have two people fighting over a keyboard would probably suck. For Xbox, a split screen could work; maybe when I get into the networking portion (phase 2 blog?) of this game I will remove the 2 player option for PC and provide only multiplayer and I will leave 2 player for Xbox...hmm... Anyhow...I picture the creation process as follows: Name: (textbox / keyboard entry) - for Xbox, this would have to be different. Robe Color: (color box, or something) Stats: Speed, Oomph, and Health (as sliders), 1 as minimum and 10 as maximum. Ok, Back, and Cancel buttons / options. Each stat has a benefit which is listed below. The idea is the player decides if he wants his wizard to run fast, be a tank and ... hit with a purse. Regardless, the player will have a pool of 12 points to use. Ideally, a balanced wizard will have 5 in each attribute. Spells? The only spell of choice is a ball of fire which comes without question. The music and screen should still feel like a ritual. The Character Speed Basically, how fast your character moves and casts. Oomph (Best Monster Truck Voice): PURE POWAH!!! The damage output of your fireball. Health How much damage you can take. Notes: I realise the game dynamics may sound uninteresting at the moment; but I think after a couple releases, we could have some other grand ideas such as: saved profiles, gold to upgrade arsenal of spells, talents, etc...but for now...a vanilla fireball thrower mage will suffice for this experiment. OK. So... a MenuObject may need to be loosely coupled to allow future items such as networking? may be a button? A CharacterObject has a name, speed, oomph, health, and a funky robe color; a cap on the three stats (1-10); and an arsenal of 1 spell (possibly could expand this). The Loading Screen As is. The Battleground Screen For now, I'm keeping the screen as max resolution for the PC. The screen isn't going to move or even be a split screen. I'm not aiming high here because I want to see what level of change is involved when new features / concepts are added to game content. I'm interested to find out if we could apply techniques such as MVC or MVVM to this type of development or whether it is too tightly coupled. 
This reminds me of when my best friend and I were brainstorming our game idea (this is going back a while...1994, 6?) and he cringed at the thought of bringing business technology into games, especially when I suggested a database to store character information and COM / DCOM as the medium, but it seems I wasn't far off (reflecting); just like his implementation of an XML "config file" for dynamic DirectX menus back before .NET in 1999...anyhow...I digress... The Battle One screen, two characters lobbing balls of fire at each other...It doesn't get better than that. Every so often a scroll appears...and the fireballs bounce off walls, or the wizard has rapid fire, or even scrolls of healing! The scroll options are endless. Two bars at the top, each the color of the wizard (with their name beside the bar), indicate how much health they have. Possibly the appearance of the scrolls means the battle is taking too long? I'm thinking 1 player controls: up, down, left, right and space to fire. Or even possibly, mouse click and shift + mouse button to fire a spell in the direction they are facing. Two player controls: a, s, d, f and space AND arrows (up, down, left, right) and Del key or Ctrl. The game ends when a player has 0 health and a dialog box appears asking for a rematch / reconfigure / exit. Health goes down when a fireball (friendly or not) connects with a wizard. When a wizard connects with a scroll, a countdown clock / icon appears near the health bar and the wizard begins to glow. For the most part, a wizard can have only 1 scroll effect on him at a time. Notes: Ok, there's a lot to cover here. A CharacterObject is a GameObject: it travels at a set velocity; it travels in a direction; it has sounds (walking, running, casting, impact, dying, laughing, whistling, other?); it has animations (walking, running, casting, impact, dying, laughing, idle, other?); it has a lifespan (determined by health); it is alive or dead; it has a position. A ScrollObject is a GameObject: it carries a transference of points "damage" (or healing, bad scroll effect?) (determined by caster); it carries a transference of "other"; it is stationary; it has a sound on impact; it has a stationary animation; it has an impact animation / or transfers an impact animation; it has a fade animation?; it has a lifespan (determined by game); it is alive or dead; it has a position. A WallObject is a GameObject: it has a sound on fireball impact?; it is a still image / stationary; it has an impact animation / or transfers an impact animation; it is dead; it has a position. A FireBall is a GameObject: it carries a transference of points "damage" (or healing, bad scroll effect?) (determined by caster); it travels at a set velocity; it travels in a direction; it has a sound; it has a travel animation; it has an impact animation / or transfers an impact animation; it has a fade animation?; it has a lifespan (determined by caster); it is alive or dead; it has a position. As I look at this, I can see some common attributes in each object that I can carry up to the GameObject. I think I'm going to end the documentation here; it's taken me a bit of time to type this all out. Tomorrow, I'll load up my IDE and my paint studio to get some good old fashioned cowboy hacking going!   D.
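
    As a quick illustration of the notes above, here is a minimal C# sketch of how the GameObject hierarchy described in this post might be laid out. The class shapes, property names, and types are illustrative assumptions, not code from the actual project.

        // Hypothetical sketch only: common attributes pulled up into GameObject.
        public abstract class GameObject
        {
            public float X { get; set; }          // position
            public float Y { get; set; }
            public bool IsAlive { get; set; }     // alive or dead
            public float Lifespan { get; set; }   // meaning depends on the object
        }

        public class CharacterObject : GameObject
        {
            public string Name { get; set; }
            public int Speed { get; set; }        // 1-10, movement and cast speed
            public int Oomph { get; set; }        // 1-10, fireball damage
            public int Health { get; set; }       // 1-10, damage you can take
            public string RobeColor { get; set; } // the funky robe color
        }

        public class FireBall : GameObject
        {
            public int Damage { get; set; }       // transference of points, set by the caster
            public float Velocity { get; set; }   // travels at a set velocity
            public float Direction { get; set; }  // travels in a direction (radians)
        }

    The ScrollObject and WallObject notes would slot into the same shape, which is exactly the "common attributes I can carry up to the GameObject" observation the post ends on.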

    Read the article

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
    Some time back I created a database-driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's what this looks like in the resource administration form: When a resource is displayed, the user can click a Translate button and it will show the current resource text and then lets you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them - pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice, quick way to get a translation going. Ch… Ch… Changes Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved translated text out of result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently however, Google at least changed their input pages to use AJAX callbacks and the page updates no longer worked the same way. End result: The Google translate code was broken. Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want people to have the samples work out of the box - the only way I could even do this is by sharing my API key (not allowed). However, after a bit of spelunking and playing around with the public site I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return. Warning: This is hacky code, but after a fair bit of testing I found this to work very well with all sorts of languages and accented and escaped text etc. as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement. Here's the code: /// <summary> /// Translates a string into another language using Google's translate API JSON calls. /// <seealso>Class TranslationServices</seealso> /// </summary> /// <param name="Text">Text to translate. 
Should be a single word or sentence.</param> /// <param name="FromCulture"> /// Two letter culture (en of en-us, fr of fr-ca, de of de-ch) /// </param> /// <param name="ToCulture"> /// Two letter culture (as for FromCulture) /// </param> public string TranslateGoogle(string text, string fromCulture, string toCulture) { fromCulture = fromCulture.ToLower(); toCulture = toCulture.ToLower(); // normalize the culture in case something like en-us was passed // retrieve only en since Google doesn't support sub-locales string[] tokens = fromCulture.Split('-'); if (tokens.Length > 1) fromCulture = tokens[0]; // normalize ToCulture tokens = toCulture.Split('-'); if (tokens.Length > 1) toCulture = tokens[0]; string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}", HttpUtility.UrlEncode(text),fromCulture,toCulture); // Retrieve Translation with HTTP GET call string html = null; try { WebClient web = new WebClient(); // MUST add a known browser user agent or else response encoding doesn't return UTF-8 (WTF Google?) web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0"); web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8"); // Make sure we have response encoding to UTF-8 web.Encoding = Encoding.UTF8; html = web.DownloadString(url); } catch (Exception ex) { this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " + ex.GetBaseException().Message; return null; } // Extract out trans":"...[Extracted]...","from the JSON string string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value; if (string.IsNullOrEmpty(result)) { this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult; return null; } //return WebUtils.DecodeJsString(result); // Result is a JavaScript string so we need to deserialize it properly JavaScriptSerializer ser = new JavaScriptSerializer(); return ser.Deserialize(result, typeof(string)) as string; } Using the code is straightforward enough - simply provide a string to translate and a pair of two letter source and target languages: string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de"); TestContext.WriteLine(result); How it works The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate Web Page slightly changed to return a JSON result (&client=j) instead of the funky nested PHP style JSON array that the default returns. The JSON result returned looks like this: {"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24} I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out part of the full JSON response that contains the actual translated text. Since this is a JSON response I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.). Couple of odd things to note in this code: First note that a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this Google doesn't encode the result with UTF-8, but instead uses an ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead which is - odd to say the least. 
The other is that the code returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah I know that's hacky but avoids an extra type and firing up the JavaScript deserializer. My internal version uses a small DecodeJsString() method to decode Javascript without the overhead of a full JSON parser. It's obviously not rocket science but as mentioned above what's nice about it is that it works without a Google API key. I can't vouch for how many translates you can do before there are cut offs but in my limited testing running a few stress tests on a Web server under load I didn't run into any problems. Limitations There are some restrictions with this: It only works on single words or single sentences - multiple sentences (delimited by .) are cut off at the ".". There is also a length limitation which appears to happen at around 220 characters or so. While that may not sound like much for typical word or phrase translations this is plenty of length. Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs so this code might break in the future, but for now at least it works. FWIW, I also found that Google's translation is not as good as Babelfish, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file which contains both the Google and Babelfish translation code pieces. Ironically the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications and it's sweet to have a simple routine that performs these operations for me easily. Resources Live Localization Sample Localization Resource Provider Administration form that includes options to translate text using Google and Babelfish interactively. TranslationService.cs The full source code in the West Wind Web Toolkit's Globalization library that contains the translation code. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, HTTP.
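
    As an aside, the "custom type that matches Google's result object" approach the post decides against could look roughly like the sketch below, shaped after the JSON sample shown above and deserialized with the same JavaScriptSerializer. The class names are assumptions for illustration, not part of the original code.

        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        // Hypothetical classes mirroring the JSON sample above.
        public class GoogleTranslateResult
        {
            public List<GoogleSentence> sentences { get; set; }
            public string src { get; set; }
        }

        public class GoogleSentence
        {
            public string trans { get; set; }
            public string orig { get; set; }
        }

        // Usage sketch: deserialize the whole response instead of regex-stripping it.
        // var result = new JavaScriptSerializer().Deserialize<GoogleTranslateResult>(html);
        // string translated = result.sentences[0].trans;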

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self contained HTTP output. An ASP.NET Handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline writing out the content into the Response.OutputStream and separately sending the HttpHeaders in the Response.Headers collection. The headers turned out to be the problem and specifically Http Cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: Basically the HTTP response from the backend app would return a full set of HTTP headers plus the content. The ASP.NET handler would read the headers one at a time and then dump them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the Http Handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically I have a couple of headers including a Set-Cookie header and some output that gets written into the Response object. using System.Web; namespace wwThreads { public class Handler : IHttpHandler { /* NOTE: * * Run as a web.config set handler (see entry below) * * Best way is to look at the HTTP Headers in Fiddler * or Chrome/FireBug/IE tools and look for the * WWTHREADSID cookie in the outgoing Response headers * ( If the cookie is not there you see the problem! ) */ public void ProcessRequest(HttpContext context) { HttpRequest request = context.Request; HttpResponse response = context.Response; // If ClearHeaders is used Set-Cookie header gets removed! // if commented header is sent... response.ClearHeaders(); response.ClearContent(); // Demonstrate that other headers make it response.AppendHeader("RequestId", "asdasdasd"); // This cookie gets removed when ClearHeaders above is called // When ClearHeaders is omitted above the cookie renders response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); // *** This always works, even when explicit // Set-Cookie above fails and ClearHeaders is called //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); response.Write(@"Output was created.<hr/> Check output with Fiddler or HTTP Proxy to see whether cookie was sent."); } public bool IsReusable { get { return false; } } } } In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with: <add name=".ck_handler" path="handler.ck" verb="*" type="wwThreads.Handler" preCondition="integratedMode" /> Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie. I found that the Set-Cookie headers were not making it into the Response headers output. 
Here's the Chrome Http Inspector trace: Notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders() command, the cookie header shows up just fine: As you might expect it took a while to track this down. At first I thought my backend was not sending the headers but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line: response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); which in my live code is more dynamic (i.e. AppendHeader(token[0], token[1])) as it parses through the headers. Bizzaro Land: Response.ClearHeaders() causes Cookie to get stripped Now, here is where it really gets bizarre: The problem occurs only if: Response.ClearHeaders() was called before headers are added It only occurs in Http Handlers declared in web.config Clearly this is an edge of an edge case but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above if you remove the call to ClearHeaders(), the cookie gets set! Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page, or MVC controller it all works. Only in the HttpHandler configured through Web.config does it fail! Cue the Twilight Zone Music. Workarounds As is often the case the fix for this once you know the problem is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the Cookie issue. Don't use AppendHeader for Cookies The easiest and most obvious solution to this problem is simply not to use Response.AppendHeader() to set Cookies. Duh! Under normal circumstances in application level code there's rarely a reason to write out a cookie like this: response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); but rather create the cookie using the Response.Cookies collection: response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); Unfortunately, in my case where I dynamically read headers from the original output and then dynamically write header key value pairs back programmatically into the Response.Headers collection, I actually don't look at each header specifically so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to fuck around with cookie values which can be notoriously brittle. Don't use Response.ClearHeaders() The real mystery in all this is why calling Response.ClearHeaders() causes a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low level interface directly into IIS, and so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that. 
In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent was mainly for safety but after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear the headers out: private void RemoveHeaders(HttpResponse response) { List<string> headers = new List<string>(); foreach (string header in response.Headers) { headers.Add(header); } foreach (string header in headers) { response.Headers.Remove(header); } response.Cookies.Clear(); } Now I can replace the call to Response.ClearHeaders() and I don't get the funky side-effects from Response.ClearHeaders(). Summary I realize this is a total edge case as this occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher level ASP.NET frameworks or even in ASHX handlers - only web.config defined handlers - which is really, really odd. After all those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem or even this post in a search because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7.
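
    For what it's worth, the "parse out the cookie and create a Cookie object" idea dismissed above could be sketched along these lines for simple cookies. This is a rough, assumption-laden illustration (name, value and path only) and deliberately ignores the many cookie attributes that make full parsing brittle, which is exactly why the post doesn't go this route.

        using System;
        using System.Web;

        // Minimal sketch: turn "WWTHREADSID=Value; path=/" into an HttpCookie.
        // Expires, domain, HttpOnly, Secure etc. are intentionally not handled.
        private static HttpCookie ParseSimpleSetCookie(string headerValue)
        {
            string[] parts = headerValue.Split(';');
            string[] nameValue = parts[0].Split(new[] { '=' }, 2);
            var cookie = new HttpCookie(nameValue[0].Trim(),
                                        nameValue.Length > 1 ? nameValue[1].Trim() : "");
            for (int i = 1; i < parts.Length; i++)
            {
                string[] pair = parts[i].Split(new[] { '=' }, 2);
                if (pair.Length > 1 && pair[0].Trim().Equals("path", StringComparison.OrdinalIgnoreCase))
                    cookie.Path = pair[1].Trim();
            }
            return cookie;
        }

        // Usage sketch: response.Cookies.Add(ParseSimpleSetCookie(headerValue));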

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx What's the first thing you hear about NoSQL databases? That they lose your data? That there's no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: Prevent any change to A's account while the transfer is taking place. That boils down to locking. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice. Don't do either. Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that pair can be recorded as a single ledger entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to: Insert entry in ledger; run validation of recent entries; insert reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil... Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time. 
In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transaction" issue. Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees "old" state of a collection or document. But in our ledger case, things fall nicely into place: Run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns varying from flushing the write to the disk of the writable instance, flushing to disk on several members of the replica set, a majority of the replica set or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is not different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance, and doesn't make use of the other read-only servers. But this choice can be made on a query by query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to. 
So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that your placing of an order does not mean that your product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction for customers newer than say 15 minutes and whose validation flag is not set. This buys us time 2 ways: Replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted / reverted) would be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
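
    To make the ledger idea concrete, here is a minimal C# sketch of the entry shape and the insert-then-validate-then-compensate flow described above. The field names and the in-memory balance check are illustrative assumptions; a real implementation would run the check against the writable MongoDB instance as the post describes.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical ledger entry: written once, never updated in place.
        public class LedgerEntry
        {
            public DateTime Timestamp { get; set; }
            public string FromAccountId { get; set; }
            public string ToAccountId { get; set; }
            public decimal Amount { get; set; }
        }

        public static class LedgerValidation
        {
            // Balance = everything paid in minus everything paid out.
            public static decimal Balance(IEnumerable<LedgerEntry> ledger, string accountId)
            {
                return ledger.Where(e => e.ToAccountId == accountId).Sum(e => e.Amount)
                     - ledger.Where(e => e.FromAccountId == accountId).Sum(e => e.Amount);
            }

            // After inserting a transfer, validate the source account; if it went
            // negative, add a compensating entry timestamped just after the original.
            public static LedgerEntry ValidateOrCompensate(List<LedgerEntry> ledger, LedgerEntry attempted)
            {
                if (Balance(ledger, attempted.FromAccountId) >= 0)
                    return null; // transfer stands

                var reversal = new LedgerEntry
                {
                    Timestamp = attempted.Timestamp.AddMilliseconds(1),
                    FromAccountId = attempted.ToAccountId,
                    ToAccountId = attempted.FromAccountId,
                    Amount = attempted.Amount
                };
                ledger.Add(reversal);
                return reversal;
            }
        }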

    Read the article

  • Silverlight 3.0 Custom Cursor in Chart

    - by Wonko the Sane
    Hello All, I'm probably overlooking something that will be obvious when I see the solution, but for now... I am attempting to use a custom cursor inside the chart area of a Toolkit chart. I have created a ControlTemplate for the chart, and a grid to contain the cursors. I show/hide the cursors, and attempt to move the containing Grid, using various Mouse events. The cursor is being displayed at the correct times, but I cannot get it to move to the correct position. Here is the ControlTemplate (the funky colors are just attempts to confirm what the different pieces of the template pertain to): <dataVisTK:Title Content="{TemplateBinding Title}" Style="{TemplateBinding TitleStyle}"/> <Grid Grid.Row="1"> <!-- Remove the Legend --> <!--<dataVisTK:Legend x:Name="Legend" Title="{TemplateBinding LegendTitle}" Style="{TemplateBinding LegendStyle}" Grid.Column="1"/>--> <chartingPrimitivesTK:EdgePanel x:Name="ChartArea" Background="#EDAEAE" Style="{TemplateBinding ChartAreaStyle}" Grid.Column="0"> <Grid Canvas.ZIndex="-1" Background="#2008AE" Style="{TemplateBinding PlotAreaStyle}"> </Grid> <Border Canvas.ZIndex="1" BorderBrush="#FF250010" BorderThickness="3" /> <Grid x:Name="gridHandCursors" Canvas.ZIndex="5" Width="32" Height="32" Visibility="Collapsed"> <Image x:Name="cursorGrab" Width="32" Source="Resources/grab.png" /> <Image x:Name="cursorGrabbing" Width="32" Source="Resources/grabbing.png" Visibility="Collapsed"/> </Grid> </chartingPrimitivesTK:EdgePanel> </Grid> </Grid> </Border> and here are the mouse events (in particular, the MouseMove): void TimelineChart_Loaded(object sender, RoutedEventArgs e) { chartTimeline.UpdateLayout(); List<FrameworkElement> chartChildren = GetLogicalChildrenBreadthFirst(chartTimeline).ToList(); mChartArea = chartChildren.Where(element => element.Name.Equals("ChartArea")).FirstOrDefault() as Panel; if (mChartArea != null) { grabCursor = chartChildren.Where(element => element.Name.Equals("cursorGrab")).FirstOrDefault() as Image; grabbingCursor = chartChildren.Where(element => element.Name.Equals("cursorGrabbing")).FirstOrDefault() as Image; mGridHandCursors = chartChildren.Where(element => element.Name.Equals("gridHandCursors")).FirstOrDefault() as Grid; mChartArea.Cursor = Cursors.None; mChartArea.MouseMove += new MouseEventHandler(mChartArea_MouseMove); mChartArea.MouseLeftButtonDown += new MouseButtonEventHandler(mChartArea_MouseLeftButtonDown); mChartArea.MouseLeftButtonUp += new MouseButtonEventHandler(mChartArea_MouseLeftButtonUp); if (mGridHandCursors != null) { mChartArea.MouseEnter += (s, e2) => mGridHandCursors.Visibility = Visibility.Visible; mChartArea.MouseLeave += (s, e2) => mGridHandCursors.Visibility = Visibility.Collapsed; } } } void mChartArea_MouseLeftButtonUp(object sender, MouseButtonEventArgs e) { if (grabCursor != null) grabCursor.Visibility = Visibility.Visible; if (grabbingCursor != null) grabbingCursor.Visibility = Visibility.Collapsed; } void mChartArea_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { if (grabCursor != null) grabCursor.Visibility = Visibility.Collapsed; if (grabbingCursor != null) grabbingCursor.Visibility = Visibility.Visible; } void mChartArea_MouseMove(object sender, MouseEventArgs e) { if (mGridHandCursors != null) { Point pt = e.GetPosition(null); mGridHandCursors.SetValue(Canvas.LeftProperty, pt.X); mGridHandCursors.SetValue(Canvas.TopProperty, pt.Y); } } Any help past this roadblock would be greatly appreciated! Thanks, wTs
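
    (Not an answer from the original thread, just an illustrative guess at where the positioning might go wrong: Canvas.Left/Canvas.Top attached properties only affect elements whose direct parent is a Canvas, and e.GetPosition(null) returns coordinates relative to the root visual rather than the chart area. A hedged sketch of a MouseMove handler that sidesteps both, assuming the cursor grid is top-left aligned:)

        void mChartArea_MouseMove(object sender, MouseEventArgs e)
        {
            if (mGridHandCursors == null)
                return;

            // Coordinates relative to the chart area, not the root visual.
            Point pt = e.GetPosition(mChartArea);

            // The cursor grid lives in an EdgePanel, so Canvas.Left/Top are ignored;
            // move it with a render transform instead.
            mGridHandCursors.RenderTransform = new TranslateTransform { X = pt.X, Y = pt.Y };
        }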

    Read the article

  • SOAP PHP Parsing Error?

    - by Josh
    I'm communicating with a SOAP service created with EJB -- it intermittently fails, and I've found a case where I can reliably reproduce. I'm getting a funky ass SOAP fault that says "looks like we got no XML document," however, when retrieving the last response I get what is listed below (and what looks like valid XML to me). Any thoughts? Soap Fault: object(SoapFault)#2 (9) { ["message:protected"]=> string(33) "looks like we got no XML document" ["string:private"]=> string(0) "" ["code:protected"]=> int(0) ["file:protected"]=> string(40) "/Users/josh/Sites/blahblahblah/test-update.php" ["line:protected"]=> int(26) ["trace:private"]=> array(2) { [0]=> array(4) { ["function"]=> string(6) "__call" ["class"]=> string(10) "SoapClient" ["type"]=> string(2) "->" ["args"]=> array(2) { [0]=> string(24) "UpdateApplicationProfile" [1]=> array(1) { [0]=> array(2) { ["suid"]=> string(36) "62eb56ee-45de-4971-9234-54d72bbcd0e4" ["appid"]=> string(36) "6be2f269-4ddc-48af-9d47-30b7cf3d0499" } } } } [1]=> array(6) { ["file"]=> string(40) "/Users/josh/Sites/blahblahblah/test-update.php" ["line"]=> int(26) ["function"]=> string(24) "UpdateApplicationProfile" ["class"]=> string(10) "SoapClient" ["type"]=> string(2) "->" ["args"]=> array(1) { [0]=> array(2) { ["suid"]=> string(36) "62eb56ee-45de-4971-9234-54d72bbcd0e4" ["appid"]=> string(36) "6be2f269-4ddc-48af-9d47-30b7cf3d0499" } } } } ["faultstring"]=> string(33) "looks like we got no XML document" ["faultcode"]=> string(6) "Client" ["faultcodens"]=> string(41) "http://schemas.xmlsoap.org/soap/envelope/" } And the actual raw XML response using client->__getLastResponse(): <env:Envelope xmlns:env='http://schemas.xmlsoap.org/soap/envelope/'> <env:Header> </env:Header> <env:Body> <ns2:UpdateApplicationProfileResponse xmlns:ns2="blahblahblah"> <paramname>status</paramname> <paramname>location</paramname> <paramname>timezone</paramname> <paramname>homepage</paramname> <paramname>nickname</paramname> <paramname>firstName</paramname> <paramname>languages</paramname> <paramname>color</paramname> <paramname>lastName</paramname> <paramname>gender</paramname> <paramvalue></paramvalue> <paramvalue></paramvalue> <paramvalue></paramvalue> <paramvalue></paramvalue> <paramvalue>XXX XXX</paramvalue> <paramvalue>XXX</paramvalue> <paramvalue></paramvalue> <paramvalue>CA0008</paramvalue> <paramvalue>XXX</paramvalue> <paramvalue></paramvalue> </ns2:UpdateApplicationProfileResponse> </env:Body> </env:Envelope>

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can, any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a Trunk/Release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind: We have trunk and a release branch. All work gets checked into Trunk. When a feature is deemed ready for the next release it is merged into a Release branch. We only have one release branch and just tag "Latest" when we do a push to live. We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?) So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once and they don't all get nicely checked in feature complete (but always working at least). Then sometimes you have to wait for Clients to sign off. As a result you end up with revisions which are "ready for live" being scattered with ones which are "still being worked on" in trunk. That means that the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example: Pete changes some CSS to make a new button look pretty (Revision 1) Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2) Dave's mod gets the nod so he merges it into Release and commits it with a log message mentioning revision number and bug tracking id. Pete adds more buttons to finish this mod, no CSS changes here though (Revision 3) Pete then merges his mods (Revision 1 and 3) into the Head of Release (which has Dave's merge in it) but this overwrites Dave's CSS additions which now disappear completely. This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas like reverting Release back to "Latest" and then just merging in all the Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4 which was not ready for live and Revision 5 which was. Suddenly we are getting ourselves in knots again with exactly the same problem! Ok so take three. Revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually but it's not something I want to do regularly, ideally I want to script our deployment but can't while Release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different non-sequential Revisions in Release. If it's not possible that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change, the site takes 30 minutes+ to check out; it would take too long. Side note, we are using TortoiseSVN so can we keep command line examples to a minimum in any answers? Latest version of TSVN and SVN Version 1.6 so we have the funky merge tracking etc. EDIT: An excellent blog post which deals with the dev/release cycle (although using GIT but still relevant) - I thought everyone would like to read it if they found this question interesting. 
(http://nvie.com/git-model) EDIT 2: I wrote a blog post on how to show which branch you are working on in your website which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)

    Read the article

  • jScrollPane jEditable DOM problems

    - by Kyle Lafkoff
    Hello world, I am having a funky problem. See (this link won't disappear): www.skitzo.org/~el/bugjeditable.png for the firebug output screenshot. Here's my code. I run getJSON() to fetch the info from the PHP which pulls from DB and I fill a div with the result. I have jScrollPane and jEditable so a user can scroll down and click to edit any of the content. It works sometimes and then it doesn't work which makes me wonder if the browser is not interpreting the code properly or if I am misunderstanding fundamental DOM concepts here.... Here is a live current version of the code: http://www.musedates.com/testing.php $().ready(function() { $('#pane1').jScrollPane(); $('#tab_journal').tabs(); $('#tab2').load("/journal_new.php"); var i=0; var row = ''; var k, v, dt; $.getJSON("/ajax.php?j=22", function(data) { row = '<p>'; while(i<data.length) { $.each(data[i], function(k, v) { if (k == 'subject') { row += '<div style="font-size:1.5em; color:#000000;"><div class="editable" style="width:705px;" id="title-'+data[i].id+'">'+v+'</div></div>posted: '+dt+'<br />'; } else if (k == 'dt') { dt = v; } else if (k == 'msg') { row += '<div class="editableMsg" style="width:705px; height:40px;" id="msg-'+data[i].id+'">'+v+'</div></p>'; } }); i++; } $('#pane1').append(row).jScrollPane({scrollbarWidth:10, scrollbarMargin:10, showArrows:true}); }); $('.editable').livequery(function () { $('.editable').editable("/savejournal.php", { submitdata : function() { }, tooltip : 'Click to edit', indicator : '<img src="/UI/images/indicator.gif">', cancel : 'Cancel', submit : 'OK' }); $('.editableMsg').editable("/savejournal.php", { submitdata : function() { }, tooltip: 'Click to edit', indicator : '<img src="/UI/images/indicator.gif">', cancel : 'Cancel', submit : 'OK', type : 'textarea' }); $(".editable,.editableMsg").mouseover(function() { $(this).css('background-color', '#FDD017'); }); $(".editable,.editableMsg").mouseout(function() { $(this).css('background-color', '#fff'); }); }); }); And then the HTML: <div id="tab_container" style="margin:0px 0px 2px 8px;"> <ul id="tab_journal"> <li><a href="#tab1"><span>View / Edit</span></a></li> <li><a href="#tab2"><span>New Entry</span></a></li> </ul> </div> <div id="tab1" style="margin:0px 0px 0px 8px;"> <div id="pane1" class="scroll-pane super-wide"></div> </div> <div id="tab2" style="margin:0px 0px 0px 8px; width:700px;"></div> Thanks world.

    Read the article

  • Asp.net mvc application deployment / security issues

    - by WestDiscGolf
    I'll start with apologies; I wasn't sure if this was best posted here or Server Fault so if it's in the wrong place then please move :-) Basic information I have written the first module of a new application at work. This is written using Visual Studio 2010, targeting .net 3.5 (at the moment) and asp.net mvc 2. This has been working fine during development running on the built-in development server from VS but does not work once deployed to IIS 7/7.5. To deploy the application, I have built it in release mode and created a deployment package by right clicking on the project in the solution explorer (this will be done with an automated build in tfs once upgraded from the beta). This has then been imported into IIS on the server. The application is using windows/domain authentication. Issue #1 I can fire up internet explorer and browse to the application from a client computer as well as on a remote desktop connection. I can execute the code which reads/stores data in Session fine from the IE instance on the remote desktop but if I browse to it from the client pc it seems to lose the session state. I click on the form submit and the page refreshes and doesn't execute the required code. I've tried setting it to InProc, SQLServer and StateServer, but with no luck :-( Issue #2 As part of the application it views PDF and Tiff documents on the fly which are on a network share on the office network and creates thumbnails if the document hasn't been viewed before. This works if running on the machine the application is deployed to; however when browsing from a client pc I get an error saying: Access to the path '\\fileserver\folder\file.tif' is denied Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.UnauthorizedAccessException: Access to the path '\\fileserver\folder\file.TIF' is denied. ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity. ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating. If the application is impersonating via <identity impersonate="true"/>, the identity will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated request user. As this is on a different server the user is not accessible. To get round this I have tried: 1 - setting the application pool to run as domain administrator (I know this is a security risk, but I'm just trying to get it to work at the moment!) 2 - setting the log on account for the World Wide Web Publishing service to be the domain admin. When trying to restart the service I get ... Windows could not start the World Wide Web Publishing Service service on the Local Computer. Error 1079: The account specified for this service is different from the account specified for the other services running in the same process. Any pointers/help would be much appreciated as I'm pulling my hair out (of what little I have left). Update I've been using this funky little tool I found - DelegConfig v2 beta (Delegation / Kerberos Configuration Tool). This has been really useful. So I've got the accessing of the file share working (there is a test page which will read the files) so now I've just got the issue of passing the user's credentials through to the SQL Server (wasn't my choice to do it this way!!) 
to execute the queries etc., but I can't get it to log on as the user. It tries to access it as "NT Authority\Network Service", which doesn't have a SQL login, instead of the logged-on user as it should. My connection string is: <add name="User" connectionString="Data Source=.;Integrated Security=True" providerName="System.Data.SqlClient" /> No initial catalog is specified as the system is over multiple dbs (also wasn't my choice!!). I really appreciate all the help so far! :-) Any further hints?!
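
    (Not a definitive answer, but to illustrate the "connect as the calling user" idea: with Windows authentication, <identity impersonate="true"/> and working Kerberos delegation - which the DelegConfig tool mentioned above helps verify - the request can run as the caller. An explicit variant can be sketched roughly like this; the wrapping method and connection string are assumptions:)

        using System.Data.SqlClient;
        using System.Security.Principal;
        using System.Web;

        // Hedged sketch: explicitly impersonate the authenticated Windows user
        // so Integrated Security sees their identity instead of the app pool account.
        // Requires Kerberos delegation to be configured for the hop to SQL Server.
        public static void RunQueryAsCaller()
        {
            var identity = HttpContext.Current.User.Identity as WindowsIdentity;
            if (identity == null)
                return;

            using (WindowsImpersonationContext ctx = identity.Impersonate())
            using (var conn = new SqlConnection("Data Source=.;Integrated Security=True"))
            {
                conn.Open(); // opens as the impersonated caller
                // ... run queries here ...
            }
        }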

    Read the article

  • How can I get my Web API app to run again after upgrading to MVC 5 and Web API 2?

    - by Clay Shannon
    I upgraded my Web API app to the funkelnagelneu versions using this guidance: http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2 However, after going through the steps (it seems all this should be automated, anyway), I tried to run it and got, "A project with an Output Type of Class Library cannot be started directly". What in Sam Hills Brothers Coffee is going on here? Who said this was a class library? So I opened Project Properties, and changed it (it was marked as "Class Library" for some reason - it either wasn't yesterday, or was and worked fine) to an Output Type of "Windows Application" ("Console Application" and "Class Library" being the only other options). Now it won't compile, complaining: "*Program 'c:\Platypus_Server_WebAPI\PlatypusServerWebAPI\PlatypusServerWebAPI\obj\Debug\PlatypusServerWebAPI.exe' does not contain a static 'Main' method suitable for an entry point...*" How can I get my Web API app back up and running in view of this quandary? UPDATE Looking in packages.config, two entries seem chin-scratch-worthy: <package id="Microsoft.AspNet.Providers" version="1.2" targetFramework="net40" /> <package id="Microsoft.Web.Infrastructure" version="1.0.0.0" targetFramework="net40" /> All the others target net451. Could this be the problem? Should I remove these packages? UPDATE 2 I tried to uninstall the Microsoft.Web.Infrastructure package (its description leads me to believe I don't need it; also, it has no dependencies) via the NuGet package manager, but it tells me, "NuGet failed to install or uninstall the selected package in the following project(s). [mine]" UPDATE 3 I went through the steps again, and found that I had missed one step. I had to change this entry in the application web.config file: <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-5.0.0.0" newVersion="5.0.0.0" /> </dependentAssembly> (from "4.0.0.0" to "5.0.0.0") ...but I still get the same result - it rebuilds/compiles, but won't run, with "A project with an Output Type of Class Library cannot be started directly" UPDATE 4 Thinking about the err msg, that it can't directly open a class library, I thought, "Sure you can't/won't -- this is a web app, not a project." So I followed a hunch, closed the project, and reopened it as a website (instead of reopening a project). That has gotten me further, at least; now I see a YSOD: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. UPDATE 5 Note: The project is now (after being opened as a web site) named "localhost_48614" And...there is no longer a "References" folder...?!?!? As to that YSOD I'm getting, the official instructions (http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2) said to do this, and I quote: "Update all elements that contain “System.Web.WebPages.Razor” from version “2.0.0.0” to version “3.0.0.0”." UPDATE 6 When I select Tools Library Package Manager Manage NuGet Packages for Solution now, I get, "Operation failed. Unable to locate the solution directory. Please ensure that the solution has been saved." 
So I save it, and it saves it with this funky new name (C:\Users\clay\Documents\Visual Studio 2013\Projects\localhost_48614\localhost_48614.sln) I get the Yellow Strip of Enlightenment across the top of the NuGet Package Manager telling me, "Some NuGet packages are missing from this solution. Click to restore from your online package sources." I do (click the "Restore" button, that is), and it downloads the missing packages ... I end up with the 30 packages. I try to run the app/site again, and ... the erstwhile YSOD becomes a compilation error: The pre-application start initialization method Start on type System.Web.Mvc.PreApplicationStartCode threw an exception with the following error message: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.. Argghhhh!!! (and it's not even talk-like-a-pirate day).
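    For what it's worth, that YSOD usually means the runtime is being asked for System.Web.WebPages.Razor 3.0.0.0 but neither a matching assembly nor a redirect to it can be found. Assuming the standard MVC 5 / Web API 2 upgrade layout (this is a sketch of the typical shape, not a copy of this project's file), the runtime section of web.config would normally carry an entry along these lines:

        <dependentAssembly>
          <assemblyIdentity name="System.Web.WebPages.Razor" publicKeyToken="31bf3856ad364e35" />
          <bindingRedirect oldVersion="1.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
        </dependentAssembly>

    The 3.0.0.0 assembly itself is normally delivered to bin by the Microsoft.AspNet.WebPages 3.x NuGet package, so checking that the package restore actually brought that version down is a reasonable next step.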

    Read the article

  • Threading across multiple files

    - by Zach M.
    My program is reading in files and using thread to compute the highest prime number, when I put a print statement into the getNum() function my numbers are printing out. However, it seems to just lag no matter how many threads I input. Each file has 1 million integers in it. Does anyone see something apparently wrong with my code? Basically the code is giving each thread 1000 integers to check before assigning a new thread. I am still a C noobie and am just learning the ropes of threading. My code is a mess right now because I have been switching things around constantly. #include <stdio.h> #include <stdlib.h> #include <time.h> #include <string.h> #include <pthread.h> #include <math.h> #include <semaphore.h> //Global variable declaration char *file1 = "primes1.txt"; char *file2 = "primes2.txt"; char *file3 = "primes3.txt"; char *file4 = "primes4.txt"; char *file5 = "primes5.txt"; char *file6 = "primes6.txt"; char *file7 = "primes7.txt"; char *file8 = "primes8.txt"; char *file9 = "primes9.txt"; char *file10 = "primes10.txt"; char **fn; //file name variable int numberOfThreads; int *highestPrime = NULL; int fileArrayNum = 0; int loop = 0; int currentFile = 0; sem_t semAccess; sem_t semAssign; int prime(int n)//check for prime number, return 1 for prime 0 for nonprime { int i; for(i = 2; i <= sqrt(n); i++) if(n % i == 0) return(0); return(1); } int getNum(FILE* file) { int number; char* tempS = malloc(20 *sizeof(char)); fgets(tempS, 20, file); tempS[strlen(tempS)-1] = '\0'; number = atoi(tempS); free(tempS);//free memory for later call return(number); } void* findPrimality(void *threadnum) //main thread function to find primes { int tNum = (int)threadnum; int checkNum; char *inUseFile = NULL; int x=1; FILE* file; while(currentFile < 10){ if(inUseFile == NULL){//inUseFIle being used to check if a file is still being read sem_wait(&semAccess);//critical section inUseFile = fn[currentFile]; sem_post(&semAssign); file = fopen(inUseFile, "r"); while(!feof(file)){ if(x % 1000 == 0 && tNum !=1){ //go for 1000 integers and then wait sem_wait(&semAssign); } checkNum = getNum(file); /* * * * * I think the issue is here * * * */ if(checkNum > highestPrime[tNum]){ if(prime(checkNum)){ highestPrime[tNum] = checkNum; } } x++; } fclose(file); inUseFile = NULL; } currentFile++; } } int main(int argc, char* argv[]) { if(argc != 2){ //checks for number of arguements being passed printf("To many ARGS\n"); return(-1); } else{//Sets thread cound to user input checking for correct number of threads numberOfThreads = atoi(argv[1]); if(numberOfThreads < 1 || numberOfThreads > 10){ printf("To many threads entered\n"); return(-1); } time_t preTime, postTime; //creating time variables int i; fn = malloc(10 * sizeof(char*)); //create file array and initialize fn[0] = file1; fn[1] = file2; fn[2] = file3; fn[3] = file4; fn[4] = file5; fn[5] = file6; fn[6] = file7; fn[7] = file8; fn[8] = file9; fn[9] = file10; sem_init(&semAccess, 0, 1); //initialize semaphores sem_init(&semAssign, 0, numberOfThreads); highestPrime = malloc(numberOfThreads * sizeof(int)); //create an array to store each threads highest number for(loop = 0; loop < numberOfThreads; loop++){//set initial values to 0 highestPrime[loop] = 0; } pthread_t calculationThread[numberOfThreads]; //thread to do the work preTime = time(NULL); //start the clock for(i = 0; i < numberOfThreads; i++){ pthread_create(&calculationThread[i], NULL, findPrimality, (void *)i); } for(i = 0; i < numberOfThreads; i++){ pthread_join(calculationThread[i], NULL); } for(i = 0; i < 
numberOfThreads; i++){ printf("this is a prime number: %d \n", highestPrime[i]); } postTime= time(NULL); printf("Wall time: %ld seconds\n", (long)(postTime - preTime)); } } Yes I am trying to find the highest number over all. So I have made some head way the last few hours, rescucturing the program as spudd said, currently I am getting a segmentation fault due to my use of structures, I am trying to save the largest individual primes in the struct while giving them the right indices. This is the revised code. So in short what the first thread is doing is creating all the threads and giving them access points to a very large integer array which they will go through and find prime numbers, I want to implement semaphores around the while loop so that while they are executing every 2000 lines or the end they update a global prime number. #include <stdio.h> #include <stdlib.h> #include <time.h> #include <string.h> #include <pthread.h> #include <math.h> #include <semaphore.h> //Global variable declaration char *file1 = "primes1.txt"; char *file2 = "primes2.txt"; char *file3 = "primes3.txt"; char *file4 = "primes4.txt"; char *file5 = "primes5.txt"; char *file6 = "primes6.txt"; char *file7 = "primes7.txt"; char *file8 = "primes8.txt"; char *file9 = "primes9.txt"; char *file10 = "primes10.txt"; int numberOfThreads; int entries[10000000]; int entryIndex = 0; int fileCount = 0; char** fileName; int largestPrimeNumber = 0; //Register functions int prime(int n); int getNum(FILE* file); void* findPrimality(void *threadNum); void* assign(void *num); typedef struct package{ int largestPrime; int startingIndex; int numberCount; }pack; //Beging main code block int main(int argc, char* argv[]) { if(argc != 2){ //checks for number of arguements being passed printf("To many threads!!\n"); return(-1); } else{ //Sets thread cound to user input checking for correct number of threads numberOfThreads = atoi(argv[1]); if(numberOfThreads < 1 || numberOfThreads > 10){ printf("To many threads entered\n"); return(-1); } int threadPointer[numberOfThreads]; //Pointer array to point to entries time_t preTime, postTime; //creating time variables int i; fileName = malloc(10 * sizeof(char*)); //create file array and initialize fileName[0] = file1; fileName[1] = file2; fileName[2] = file3; fileName[3] = file4; fileName[4] = file5; fileName[5] = file6; fileName[6] = file7; fileName[7] = file8; fileName[8] = file9; fileName[9] = file10; FILE* filereader; int currentNum; for(i = 0; i < 10; i++){ filereader = fopen(fileName[i], "r"); while(!feof(filereader)){ char* tempString = malloc(20 *sizeof(char)); fgets(tempString, 20, filereader); tempString[strlen(tempString)-1] = '\0'; entries[entryIndex] = atoi(tempString); entryIndex++; free(tempString); } } //sem_init(&semAccess, 0, 1); //initialize semaphores //sem_init(&semAssign, 0, numberOfThreads); time_t tPre, tPost; pthread_t coordinate; tPre = time(NULL); pthread_create(&coordinate, NULL, assign, (void**)numberOfThreads); pthread_join(coordinate, NULL); tPost = time(NULL); } } void* findPrime(void* pack_array) { pack* currentPack= pack_array; int lp = currentPack->largestPrime; int si = currentPack->startingIndex; int nc = currentPack->numberCount; int i; int j = 0; for(i = si; i < nc; i++){ while(j < 2000 || i == (nc-1)){ if(prime(entries[i])){ if(entries[i] > lp) lp = entries[i]; } j++; } } return (void*)currentPack; } void* assign(void* num) { int y = (int)num; int i; int count = 10000000/y; int finalCount = count + (10000000%y); int sIndex = 0; pack pack_array[(int)num]; 
pthread_t workers[numberOfThreads]; //thread to do the workers for(i = 0; i < y; i++){ if(i == (y-1)){ pack_array[i].largestPrime = 0; pack_array[i].startingIndex = sIndex; pack_array[i].numberCount = finalCount; } pack_array[i].largestPrime = 0; pack_array[i].startingIndex = sIndex; pack_array[i].numberCount = count; pthread_create(&workers[i], NULL, findPrime, (void *)&pack_array[i]); sIndex += count; } for(i = 0; i< y; i++) pthread_join(workers[i], NULL); } //Functions int prime(int n)//check for prime number, return 1 for prime 0 for nonprime { int i; for(i = 2; i <= sqrt(n); i++) if(n % i == 0) return(0); return(1); }
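    Stripped of the file I/O and the semaphores, the pattern being attempted above is: give each worker its own slice of the numbers and its own local maximum, then join and reduce. A minimal sketch of that idea in C# (illustrative only, with generated data standing in for the primesN.txt files; it is not a drop-in replacement for the C program):

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class HighestPrime
        {
            static bool IsPrime(int n)
            {
                if (n < 2) return false;
                for (int i = 2; i * i <= n; i++)
                    if (n % i == 0) return false;
                return true;
            }

            static void Main()
            {
                int[] numbers = Enumerable.Range(2, 1000000).ToArray(); // stand-in for the ten input files
                int workers = Environment.ProcessorCount;
                int chunk = numbers.Length / workers;

                Task<int>[] tasks = Enumerable.Range(0, workers).Select(w => Task.Run(() =>
                {
                    int start = w * chunk;
                    int end = (w == workers - 1) ? numbers.Length : start + chunk; // last worker takes the remainder
                    int best = 0;
                    for (int i = start; i < end; i++)          // each worker scans only its own slice
                        if (numbers[i] > best && IsPrime(numbers[i]))
                            best = numbers[i];
                    return best;                               // local maximum, no shared state to lock
                })).ToArray();

                Task.WaitAll(tasks);
                Console.WriteLine("Highest prime: " + tasks.Max(t => t.Result));
            }
        }

    Because each worker only writes to its own local best and the reduction happens after the join, no semaphores are needed; the C version above becomes hard to reason about precisely where several threads coordinate through shared counters (currentFile) and the semAssign waits.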

    Read the article

  • Did you miss the OFM Summer Camps III? Get access to the b2b & adapters and SOA Governance training material

    - by JuergenKress
    We posted the SOA Governance and b2b & adapters training material at our SOA Community Workspace (SOA Community membership required). We have no plans to post the ACM and Advanced SOA training material. Special thanks to all the trainers who delivered superb workshops. Thanks to all the partners who invested time and utilization plus travel expenses to attend the camp. Special thanks to all the international partners who traveled a long way to sunny Lisbon – including our Mexican friends! The Summer Camp feedback was excellent, everybody answered the question if he would attend a future OFM Summer Camp with YES and the overall feedback is 4,79 out of 5 (best)! For most of the trainings we had a waiting list with additional partners who want to attend. Make sure you use your middleware skills to deliver successful projects. It would be great if you can support your colegues and the community by sharing the lessons learned and best practice. Thanks for great feedback at twitter please continue to send your pictures to our twitter feed @soacommunity #OFMsummercamps or post them at our Facebook page. Here is a selection of some tweets: Walter Montantes ?México presence en #OFMSummerCamp Lisboa 2013 cc @soacommunity @AdquemTI pic.twitter.com/9NEFwsWCAq SOA Community ?thanks for attending the #OFMSummercamp - save trip home ;-) Want to attend a future training register http://www.oracle.com/goto/emea/soa #soacommunity C2B2 Consulting ?Last day at the #OFMSummercamp Oracle SOA Suite Training in Portugal @soacommunity pic.twitter.com/6LZavVlvHc Patrick Sinke ?a FollowFriday for @Oracle_B2B because 19 followers is not enough #FF #OFMSummercamp Patrick Sinke ?Yogesh Sontakke is talking about #SOA #Governance. #OFMSummercamp Nuno Cancelo ?Oracle SOA Governance - Quick Overview #OFMSummerCamp Nuno Cancelo ?Last coffee break. #OFMSummercamp pic.twitter.com/xZi9M5vAWz Scott Haaland Last day of #OFMSummercamp. It's been a great productive week..great students eager to learn. @Oracle_B2B @soacommunity . Patrick Sinke ?singletons are used to retain specific fetching order of files and records in multithread/multi-instance environment. #OFMSummercamp #SOA Patrick Sinke ?SOA's File Adapter is extremely versatile: It writes, reads and converts almost any type of file. #OFMSummercamp pic.twitter.com/XjtJF9Y5SH Patrick Sinke ?Now deep-diving into Java EE Connector Architecture (JCA). Got to do some catching up at home on this subject. #OFMSummercamp #SOA Patrick Sinke ?Today we start with security and OPSS at #OFMSummercamp Advanced #SOA training. Then some #OSB. #OFM #Oracle #whitehorses Remco Cats ?Starting the last day on #OFMSummercamp building ADF Mobile applications with BPM Nuno Cancelo ?While attending #OFMSummerCamp i notice even more the importance of designing software. Any tips in how to become an software architect? Patrick Sinke ?Extensive information on Faullt handling and policies now in Advanced #SOA track. #OFMSummercamp #oraclesoa #middleware #whitehorses C2B2 Consulting ?Geoffroy de Lamalle speaking at the #OFMSummercamp @soacommunity pic.twitter.com/m4oOyzYB2q Patrick Sinke ?Oracle Document editor is a huge tool (6GB), but contains every version and subset of EDI, HL7, etc definitions. Impressive. #OFMSummercamp Patrick Sinke ?Oracle #B2B 11g presentation on #OFMSummercamp by Scott. Unfortunately only 2 hours in SOA advanced class. Very interesting. 
SOA Community Bon dia #OFMSummercamp - if you are here in sunny Lisbon ;-) you can checkin at http://foursquare.com/ #soacommunity pic.twitter.com/PnmudJgJTZ Nuno Cancelo ?Beautiful day! #OFMSummercamp pic.twitter.com/nwByRM5YE1 Nuno Cancelo ?Relaxing after lunch :-) #OFMSummercamp pic.twitter.com/hOJzebCM5p SOA Community Posted pictures from OFM Summer Camp III at our facebook page - share yours! https://www.facebook.com/soacommunity #OFMSummerCamp #soacommunity Nuno Cancelo ?Coffee break: day3 #OFMSummercamp pic.twitter.com/97n1sAGhx4 Patrick Sinke #OFMSummercamp day 3; SOA Infrastructure. pic.twitter.com/ziivyw3L6q SOA Community ?@soacommunity 28 Aug Bon dia day 3 at #OFMSummercamp in Lisboa. Nial presenting ACM roadmap pic.twitter.com/iN3gTCHSbA SOA Community ?Hands-on time at the b2b & adapters training part of the #OFMSummercamp #soacommunity pic.twitter.com/9BzI7igrX8 SOA Community ?Laptop replacement at #OFMSummercamp - big thanks to Oracle Portugal for the fast help! 10 seconds to cut the cable pic.twitter.com/nwd2Px73pa SOA Community ?Hard work long training until 18.00 now enjoy the beach #ofmsummercamp #soacommunity pic.twitter.com/StogfxJNFH Walter Montantes? Primer día #OFMSummercamp pic.twitter.com/cTNDpzg5pL Miguel Delgadillo ?@walex86 Advanced SOA training by Geoffroy at #OFMSummercamp - full room hard working class pic.twitter.com/2SDz9FVhkh” si le sabes? SOA Community ?Welcome to the #OFMSummercamp in sunny Lisbon ;-) Send us your pictures of the training and city @soacommunity pic.twitter.com/i2ErZaaFbb SOA Community ?Advanced SOA training by Geoffroy at #OFMSummercamp - full room hard working class pic.twitter.com/uKjv0tV2bO Nuno Cancelo #OFMSummercamp afternoon break:) pic.twitter.com/pUaBvt2NIj Impressions of the event are posted at our facebook page. If you missed Lisbon, make sure you attend one of our Additional Middleware Trainings in Europe: We currently run 3 different training roadshows for Business Process Management & ADF & WebLogic across Europe make sure you sing-up for them: ADF & ADF Mobile or Business Process Management Suite or WebLogic Suite. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: b2b,training,education,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • 11gR2 node eviction troubleshooting

    - by Allen Gao
    This post looks at how node evictions work in 11gR2 Grid Infrastructure (GI) and how to troubleshoot them; the mechanism has changed in several ways compared with 10g. Three daemons are involved. 1. Ocssd.bin: present since 10g, it performs node monitoring (the network heartbeat between cluster nodes) and group management, and it also maintains the disk heartbeat against the voting files. 2. Cssdagent.bin/Cssdmonitor.bin: these two daemons are new in 11gR2; they watch ocssd.bin through a local heartbeat and together replace the 10g oclsomon/oclsvmon and oprocd processes. Starting with 11.2.0.2 there is also the rebootless restart feature: when a problem such as a short disk I/O timeout is detected, GI first attempts a graceful shutdown and restart of the GI stack on the affected node, and only reboots the node if the stack cannot be stopped cleanly.
    When troubleshooting an 11gR2 node eviction, collect and review: 1. Ocssd.log 2. Cssdagent and cssdmonitor logs: <GI_home>/log/<node_name>/agent/ohasd/oracssdagent_root/oracssdagent_root.log and <GI_home>/log/<node_name>/agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log 3. Cluster alert log: <GI_home>/log/<node_name>/alert<node_name>.log 4. OS log 5. OSW and CHM data.
    Scenario 1: the network heartbeat is lost. As in 10g, the surviving nodes evict the node they can no longer reach. Example from the GI alert log on node2: 2012-08-15 16:30:06.554 [cssd(11011) ]CRS-1612:Network communication with node node1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.510 seconds 2012-08-15 16:30:13.586 [cssd(11011) ]CRS-1611:Network communication with node node1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.470 seconds 2012-08-15 16:30:18.606 [cssd(11011) ]CRS-1610:Network communication with node node1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 2.450 seconds 2012-08-15 16:30:21.073 [cssd(11011) ]CRS-1632:Node node1 is being removed from the cluster in cluster incarnation 236379832 2012-08-15 16:30:21.086 [cssd(11011) ]CRS-1601:CSSD Reconfiguration complete. Active nodes are node2 . Here node2 stopped receiving node1's network heartbeat, waited out the timeout interval, removed node1 from the cluster and completed the reconfiguration; node1's OS logs and OSW data show what happened to its private interconnect. From 11.2.0.2 onwards, because of rebootless restart, the eviction normally results in a restart of the GI stack on node1 rather than a reboot of the server, provided ocssd.bin can still be stopped (node1's ocssd.log has the details).
    Scenario 2: the disk heartbeat against the voting files is lost. This is handled essentially as in 10g.
    Scenario 3: ocssd.bin hangs or stops sending its local heartbeat, and cssdagent/cssdmonitor reboot the node. In that case oracssdagent_root.log or oracssdmonitor_root.log contains entries such as: 2012-07-23 14:09:58.506: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 75% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 21091 milliseconds ago based on invariant clock 269251595; now polling at 100 ms …… 2012-07-23 14:10:02.704: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 90% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 25291 milliseconds ago based on invariant clock 269251595; now polling at 100 ms …… The limit of 28030 ms is roughly 28 seconds (misscount - reboot time).
    Scenario 4: an OS or resource problem (for example CPU starvation) prevents ocssd.bin from being scheduled, so its heartbeats stop. The OS logs and the OSW data are the place to look. Example OSW vmstat output with the CPUs pegged at 100%: Linux OSWbb v5.0 node1 SNAP_INTERVAL 30 CPU_COUNT 8 OSWBB_ARCHIVE_DEST /osw/archive procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------ r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa …… zzz ***Mon Aug 30 17:55:21 CST 2012 158  6 4200956 923940   7664 19088464    0    0  1296  3574 11153 231579  0 100  0  0  0 zzz ***Mon Aug 30 17:55:53 CST 2012 135  4 4200956 923760   7812 19089344    0    0     4    45  570 14563  0 100  0  0  0 zzz ***Mon Aug 30 17:56:53 CST 2012 126  2 4200956 923784   8396 19083620    0    0   196  1121  651 15941  2 98  0  0  0
    In short, the overall eviction behaviour is similar to 10g, but 11gR2 adds the new agent daemons and rebootless restart, and provides richer diagnostics. For more detail see Note 1050693.1: Troubleshooting 11.2 Clusterware Node Evictions (Reboots).

    Read the article

  • Building a jQuery Plug-in to make an HTML Table scrollable

    - by Rick Strahl
    Today I got a call from a customer and we were looking over an older application that uses a lot of tables to display financial and other assorted data. The application is mostly meta-data driven with lots of layout formatting automatically driven through meta data rather than through explicit hand coded HTML layouts. One of the problems in this app is tables that display a non-fixed amount of data. The users of this app don't want to use paging to see more data, but instead want to display overflow data using a scrollbar. Many of the forms are very densely populated, often with multiple data tables that display a few rows of data in the UI at the most. This sort of layout does not lend itself well to paging, but works much better with scrollable data. Unfortunately scrollable tables are not easily created. HTML Tables are mangy beasts as anybody who's done any sort of Web development knows. Tables are finicky when it comes to styling and layout, and they have many funky quirks, especially when it comes to scrolling either the table rows themselves or the child columns. There's no built-in way to make tables scroll and to lock headers while you do, and while you can embed a table (or anything really) into a scrolling div with something like this: <div style="position:relative; overflow: hidden; overflow-y: scroll; height: 200px; width: 400px;"> <table id="table" style="width: 100%" class="blackborder" > <thead> <tr class="gridheader"> <th>Column 1</th> <th>Column 2</th> <th>Column 3</th> <th >Column 4</th> </tr> </thead> <tbody> <tr> <td>Column 1 Content</td> <td>Column 2 Content</td> <td>Column 3 Content</td> <td>Column 4 Content</td> </tr> <tr> <td>Column 1 Content</td> <td>Column 2 Content</td> <td>Column 3 Content</td> <td>Column 4 Content</td> </tr> … </tbody> </table> </div> that won't give a very satisfying visual experience: both the header and body scroll, which looks odd. You lose context as soon as the header scrolls off the top, and when you reach the bottom of the list the bottom outline of the table shows, which also looks off. The side bar shows all the way down the length of the table - yet another visual miscue. In a pinch this will work, but it's ugly.
    What's out there? Before we go further here you should know that there are a few capable grid plug-ins out there already. Among them: Flexigrid (can work off any table as well as with AJAX data) jQuery Scrollable Table Plug-in (features similar to what I need, but not quite) jqGrid (mostly an Ajax Grid which is very powerful and works very well) But in the end none of them fit the bill of what I needed in this situation. All of these require custom CSS and some of them are fairly complex to restyle. Others are AJAX only or work better with AJAX loaded data. However, I need to actually try (as much as possible) to maintain the original styling of the tables without requiring extensive re-styling.
    Building the makeTableScrollable() Plug-in To make a table scrollable requires rearranging the table a bit. In the plug-in I built I create two <div> tags and split the table into two: one for the table header and one for the table body. The bottom <div> tag then contains only the table's row data and can be scrolled while the header stays fixed. Using jQuery the basic idea is pretty simple: You create the divs, copy the original table into the bottom one, then clone the table, clear all its content, append the <thead> section into the new table, and then copy that table into the second header <div>. Easy as pie, right?
Unfortunately it's a bit more complicated than that as it's tricky to get the width of the table right to account for the scrollbar (by adding a small column) and making sure the borders properly line up for the two tables. A lot of style settings have to be made to ensure the table is a fixed size, to remove and reattach borders, to add extra space to allow for the scrollbar and so forth. The end result of my plug-in is a table with a scrollbar. Using the same table I used earlier the result looks like this: To create it, I use the following jQuery plug-in logic to select my table and run the makeTableScrollable() plug-in against the selector: $("#table").makeTableScrollable( { cssClass:"blackborder"} ); Without much further ado, here's the short code for the plug-in: (function ($) { $.fn.makeTableScrollable = function (options) { return this.each(function () { var $table = $(this); var opt = { // height of the table height: "250px", // right padding added to support the scrollbar rightPadding: "10px", // cssclass used for the wrapper div cssClass: "" } $.extend(opt, options); var $thead = $table.find("thead"); var $ths = $thead.find("th"); var id = $table.attr("id"); var cssClass = $table.attr("class"); if (!id) id = "_table_" + new Date().getMilliseconds().ToString(); $table.width("+=" + opt.rightPadding); $table.css("border-width", 0); // add a column to all rows of the table var first = true; $table.find("tr").each(function () { var row = $(this); if (first) { row.append($("<th>").width(opt.rightPadding)); first = false; } else row.append($("<td>").width(opt.rightPadding)); }); // force full sizing on each of the th elemnts $ths.each(function () { var $th = $(this); $th.css("width", $th.width()); }); // Create the table wrapper div var $tblDiv = $("<div>").css({ position: "relative", overflow: "hidden", overflowY: "scroll" }) .addClass(opt.cssClass); var width = $table.width(); $tblDiv.width(width).height(opt.height) .attr("id", id + "_wrapper") .css("border-top", "none"); // Insert before $tblDiv $tblDiv.insertBefore($table); // then move the table into it $table.appendTo($tblDiv); // Clone the div for header var $hdDiv = $tblDiv.clone(); $hdDiv.empty(); var width = $table.width(); $hdDiv.attr("style", "") .css("border-bottom", "none") .width(width) .attr("id", id + "_wrapper_header"); // create a copy of the table and remove all children var $newTable = $($table).clone(); $newTable.empty() .attr("id", $table.attr("id") + "_header"); $thead.appendTo($newTable); $hdDiv.insertBefore($tblDiv); $newTable.appendTo($hdDiv); $table.css("border-width", 0); }); } })(jQuery); Oh sweet spaghetti code :-) The code starts out by dealing the parameters that can be passed in the options object map: height The height of the full table/structure. The height of the outside wrapper container. Defaults to 200px. rightPadding The padding that is added to the right of the table to account for the scrollbar. Creates a column of this width and injects it into the table. If too small the rightmost column might get truncated. if too large the empty column might show. cssClass The CSS class of the wrapping container that appears to wrap the table. If you want a border around your table this class should probably provide it since the plug-in removes the table border. The rest of the code is obtuse, but pretty straight forward. It starts by creating a new column in the table to accommodate the width of the scrollbar and avoid clipping of text in the rightmost column. 
The width of the columns is explicitly set in the header elements to force the size of the table to be fixed and to provide the same sizing when the THEAD section is moved to a new copied table later. The table wrapper div is created, formatted and the table is moved into it. The new wrapper div is cloned for the header wrapper and configured. Finally the actual table is cloned and cleared of all elements. The original table's THEAD section is then moved into the new table. At last the new table is added to the header <div>, and the header <div> is inserted before the table wrapper <div>. I'm always amazed how easy jQuery makes it to do this sort of re-arranging, and given of what's happening the amount of code is rather small. Disclaimer: Your mileage may vary A word of warning: I make no guarantees about the code above. It's a first cut and I provided this here mainly to demonstrate the concepts of decomposing and reassembling an HTML layout :-) which jQuery makes so nice and easy. I tested this component against the typical scenarios we plan on using it for which are tables that use a few well known styles (or no styling at all). I suspect if you have complex styling on your <table> tag that things might not go so well. If you plan on using this plug-in you might want to minimize your styling of the table tag and defer any border formatting using the class passed in via the cssClass parameter, which ends up on the two wrapper div's that wrap the header and body rows. There's also no explicit support for footers. I rarely if ever use footers (when not using paging that is), so I didn't feel the need to add footer support. However, if you need that it's not difficult to add - the logic is the same as adding the header. The plug-in relies on a well-formatted table that has THEAD and TBODY sections along with TH tags in the header. Note that ASP.NET WebForm DataGrids and GridViews by default do not generate well-formatted table HTML. You can look at my Adding proper THEAD sections to a GridView post for more info on how to get a GridView to render properly. The plug-in has no dependencies other than jQuery. Even with the limitations in mind I hope this might be useful to some of you. I know I've already identified a number of places in my own existing applications where I will be plugging this in almost immediately. Resources Download Sample and Plug-in code Latest version in the West Wind Web & AJAX Toolkit Repository © Rick Strahl, West Wind Technologies, 2005-2011Posted in jQuery  HTML  ASP.NET  

    Read the article

  • Using Image Source with big images in WPF

    - by xyzzer
    I am working on an application that allows users to manipulate multiple images by using ItemsControl. I started running some tests and found that the app has problems displaying some big images - ie. it did not work with the high resolution (21600x10800), 20MB images from http://earthobservatory.nasa.gov/Features/BlueMarble/BlueMarble_monthlies.php, though it displays the 6200x6200, 60MB Hubble telescope image from http://zebu.uoregon.edu/hudf/hudf.jpg just fine. The original solution just specified an Image control with a Source property pointing at a file on a disk (through a binding). With the Blue Marble file - the image would just not show up. Now this could be just a bug hidden somewhere deep in the funky MVVM + XAML implementation - the visual tree displayed by Snoop goes like: Window/Border/AdornerDecorator/ContentPresenter/Grid/Canvas/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Border/Grid/ContentPresenter/UserControl/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Viewbox/ContainerVisual/UserControl/Border/ContentPresenter/Grid/Grid/ItemsControl/Border/ItemsPresenter/Canvas/ContentPresenter/Grid/Grid/ContentPresenter/Image... Now debug this! WPF can be crazy like that... Anyway, it turned out that if I create a simple WPF application - the images load just fine. I tried finding out the root cause, but I don't want to spend weeks on it. I figured the right thing to do might be to use a converter to scale the images down - this is what I have done: ImagePath = @"F:\Astronomical\world.200402.3x21600x10800.jpg"; TargetWidth = 2800; TargetHeight = 1866; and <Image> <Image.Source> <MultiBinding Converter="{StaticResource imageResizingConverter}"> <MultiBinding.Bindings> <Binding Path="ImagePath"/> <Binding RelativeSource="{RelativeSource Self}" /> <Binding Path="TargetWidth"/> <Binding Path="TargetHeight"/> </MultiBinding.Bindings> </MultiBinding> </Image.Source> </Image> and public class ImageResizingConverter : MarkupExtension, IMultiValueConverter { public Image TargetImage { get; set; } public string SourcePath { get; set; } public int DecodeWidth { get; set; } public int DecodeHeight { get; set; } public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture) { this.SourcePath = values[0].ToString(); this.TargetImage = (Image)values[1]; this.DecodeWidth = (int)values[2]; this.DecodeHeight = (int)values[3]; return DecodeImage(); } private BitmapImage DecodeImage() { BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.DecodePixelWidth = (int)DecodeWidth; bi.DecodePixelHeight = (int)DecodeHeight; bi.UriSource = new Uri(SourcePath); bi.EndInit(); return bi; } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture) { throw new Exception("The method or operation is not implemented."); } public override object ProvideValue(IServiceProvider serviceProvider) { return this; } } Now this works fine, except for one "little" problem. When you just specify a file path in Image.Source - the application actually uses less memory and works faster than if you use BitmapImage.DecodePixelWidth. Plus with Image.Source if you have multiple Image controls that point to the same image - they only use as much memory as if only one image was loaded. With the BitmapImage.DecodePixelWidth solution - each additional Image control uses more memory and each of them uses more than when just specifying Image.Source. 
Perhaps WPF somehow caches these images in compressed form while if you specify the decoded dimensions - it feels like you get an uncompressed image in memory, plus it takes 6 times the time (perhaps without it the scaling is done on the GPU?), plus it feels like the original high resolution image also gets loaded and takes up space. If I just scale the image down, save it to a temporary file and then use Image.Source to point at the file - it will probably work, but it will be pretty slow and it will require handling cleanup of the temporary file. If I could detect an image that does not get loaded properly - maybe I could only scale it down if I need to, but Image.ImageFailed never gets triggered. Maybe it has something to do with the video memory and this app just using more of it with the deep visual tree, opacity masks etc. Actual question: How can I load big images as quickly as Image.Source option does it, without using more memory for additional copies and additional memory for the scaled down image if I only need them at a certain resolution lower than original? Also, I don't want to keep them in memory if no Image control is using them anymore.
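    One direction that may be worth trying (a sketch under assumptions, not a verified fix for this particular visual tree): decode with BitmapCacheOption.OnLoad so the file is read and decoded once and then released, set only DecodePixelWidth so the aspect ratio is preserved, and Freeze() the result so WPF can treat it as immutable and shareable. The method below is a variant of the converter's DecodeImage shown above:

        // Sketch of an alternative DecodeImage for the ImageResizingConverter above.
        private BitmapImage DecodeImage()
        {
            BitmapImage bi = new BitmapImage();
            bi.BeginInit();
            bi.CacheOption = BitmapCacheOption.OnLoad;   // decode immediately, then release the file
            bi.DecodePixelWidth = DecodeWidth;           // omit DecodePixelHeight to keep the aspect ratio
            bi.UriSource = new Uri(SourcePath);
            bi.EndInit();
            bi.Freeze();                                 // immutable: shareable across controls and threads
            return bi;
        }

    To get back the "many Image controls share one bitmap" behaviour of plain Image.Source, the frozen BitmapImage could also be kept in a small dictionary keyed by path plus decode width, so repeated bindings to the same file return the same decoded instance instead of decoding again.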

    Read the article

  • Android relative layout problem

    - by DixieFlatline
    Hello! I just can't display the button at the bottom of the screen. I have 2 textviews and 2 edittexts set as gone, and they become visible when user clicks checkbox. It's only then that my button gets positioned properly Otherwise it sits on top of the app at launch of my app.(see here: http://www.shrani.si/f/3V/10G/4nnH4DtV/layoutproblem.png) I also tried android:layout_alignParentBottom="true" but that doesnt help either. <ScrollView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/scrollview" android:layout_width="fill_parent" android:layout_height="wrap_content"> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content"> <TextView android:id="@+id/txt1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Krajevna skupnost: " android:layout_alignParentTop="true" /> <Spinner android:id="@+id/spinner1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:prompt="@string/ks_prompt" android:layout_below="@id/txt1" /> <TextView android:id="@+id/txt2" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Zadeva: " android:layout_below="@id/spinner1" /> <Spinner android:id="@+id/spinner2" android:layout_width="fill_parent" android:layout_height="wrap_content" android:prompt="@string/pod_prompt" android:layout_below="@id/txt2" /> <TextView android:id="@+id/tv1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Zadeva: " android:layout_below="@id/spinner2" /> <EditText android:id="@+id/prvi" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/tv1" /> <TextView android:id="@+id/tv2" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/prvi" android:text="Vsebina: " /> <EditText android:id="@+id/drugi" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/tv2" /> <CheckBox android:id="@+id/cek" android:text="Obvešcaj o odgovoru" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/drugi" /> <TextView android:id="@+id/tv3" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/cek" android:text="Email: " android:visibility="gone"/> <EditText android:id="@+id/tretji" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/tv3" android:visibility="gone" /> <TextView android:id="@+id/tv4" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/tretji" android:visibility="gone" android:text="Telefon: " /> <EditText android:id="@+id/cetrti" android:layout_width="fill_parent" android:layout_height="wrap_content" android:visibility="gone" android:layout_below="@id/tv4" /> <Button android:id="@+id/gumb" android:text="Klikni" android:typeface="serif" android:textStyle="bold" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="center" android:layout_below="@id/cetrti" /> </RelativeLayout> </ScrollView>

    Read the article

  • Java Builder pattern with Generic type bounds

    - by I82Much
    Hi all, I'm attempting to create a class with many parameters, using a Builder pattern rather than telescoping constructors. I'm doing this in the way described by Joshua Bloch's Effective Java, having private constructor on the enclosing class, and a public static Builder class. The Builder class ensures the object is in a consistent state before calling build(), at which point it delegates the construction of the enclosing object to the private constructor. Thus public class Foo { // Many variables private Foo(Builder b) { // Use all of b's variables to initialize self } public static final class Builder { public Builder(/* required variables */) { } public Builder var1(Var var) { // set it return this; } public Foo build() { return new Foo(this); } } } I then want to add type bounds to some of the variables, and thus need to parametrize the class definition. I want the bounds of the Foo class to be the same as that of the Builder class. public class Foo<Q extends Quantity> { private final Unit<Q> units; // Many variables private Foo(Builder<Q> b) { // Use all of b's variables to initialize self } public static final class Builder<Q extends Quantity> { private Unit<Q> units; public Builder(/* required variables */) { } public Builder units(Unit<Q> units) { this.units = units; return this; } public Foo build() { return new Foo<Q>(this); } } } This compiles fine, but the compiler is allowing me to do things I feel should be compiler errors. E.g. public static final Foo.Builder<Acceleration> x_Body_AccelField = new Foo.Builder<Acceleration>() .units(SI.METER) .build(); Here the units argument is not Unit<Acceleration> but Unit<Length>, but it is still accepted by the compiler. What am I doing wrong here? I want to ensure at compile time that the unit types match up correctly.

    Read the article

  • Dojo extending dojo.dnd.Source, move not happening. Ideas?

    - by Soulhuntre
    Hey all, Ok... I have a simple dojo page with the bare essentials. Three UL's with some LI's in them. The idea is to allow drag-n-drop among them, but if any UL goes empty due to the last item being dragged out, I will put up a message to the user to give them some instructions. In order to do that, I wanted to extend the dojo.dnd.Source dijit and add some intelligence. It seemed easy enough. To keep things simple (I am loading Dojo from a CDN) I am simply declaring my extension as opposed to doing a full-on module load. The declaration function is here... function declare_mockupSmartDndUl(){ dojo.require("dojo.dnd.Source"); dojo.provide("mockup.SmartDndUl"); dojo.declare("mockup.SmartDndUl", dojo.dnd.Source, { markupFactory: function(params, node){ //params._skipStartup = true; return new mockup.SmartDndUl(node, params); }, onDndDrop: function(source, nodes, copy){ console.debug('onDndDrop!'); if(this == source){ // reordering items console.debug('moving items from us'); // DO SOMETHING HERE }else{ // moving items to us console.debug('moving items to us'); // DO SOMETHING HERE } console.debug('this = ' + this ); console.debug('source = ' + source ); console.debug('nodes = ' + nodes); console.debug('copy = ' + copy); return dojo.dnd.Source.prototype.onDndDrop.call(this, source, nodes, copy); } }); } I have an init function to use this to decorate the lists... dojo.addOnLoad(function(){ declare_mockupSmartDndUl(); if(dojo.byId('list1')){ //new mockup.SmartDndUl(dojo.byId('list1')); new dojo.dnd.Source(dojo.byId('list1')); } if(dojo.byId('list2')){ new mockup.SmartDndUl(dojo.byId('list2')); //new dojo.dnd.Source(dojo.byId('list2')); } if(dojo.byId('list3')){ new mockup.SmartDndUl(dojo.byId('list3')); //new dojo.dnd.Source(dojo.byId('list3')); } }); It is fine as far as it goes; you will notice I left "list1" as a standard dojo dnd source for testing. The problem is this - list1 will happily accept items from lists 2 & 3, which will move or copy as appropriate. However, lists 2 & 3 refuse to accept items from list1. It is as if the DND operation is being cancelled, but the debugger does show the dojo.dnd.Source.prototype.onDndDrop.call happening, and the parameters do look ok to me. Now, the documentation here is really weak, so the example I took some of this from may be way out of date (I am using 1.4). Can anyone fill me in on what might be the issue with my extension dijit? Thanks!

    Read the article

  • OpenCV 2.4.2 on Matlab 2012b (Windows 7)

    - by Maik Xhani
    Hello, I am trying to use OpenCV 2.4.2 in Matlab 2012b. I have tried these actions: downloaded OpenCV 2.4.2; used CMake on the opencv folder using the Visual Studio 10 and Visual Studio 10 Win64 compilers; built Debug and Release versions with Visual Studio, first without any other option and then with D_SCL_SECURE=1 specified for every project; changed Matlab's mexopts.bat, adding new lines referring to the library and include paths (see bottom for mexopts.bat content). With the Visual Studio 10 compiler I tried to compile a simple file that only includes the OpenCV library and all goes well; when I try to compile something that actually uses OpenCV commands I get errors. I used the openmexopencv library and when I tried to compile something I get this error: cv.make mex -largeArrayDims -D_SECURE_SCL=1 -Iinclude -I"C:\OpenCV\build\include" -L"C:\OpenCV\build\x64\vc10\lib" -lopencv_calib3d242 -lopencv_contrib242 -lopencv_core242 -lopencv_features2d242 -lopencv_flann242 -lopencv_gpu242 -lopencv_haartraining_engine -lopencv_highgui242 -lopencv_imgproc242 -lopencv_legacy242 -lopencv_ml242 -lopencv_nonfree242 -lopencv_objdetect242 -lopencv_photo242 -lopencv_stitching242 -lopencv_ts242 -lopencv_video242 -lopencv_videostab242 src+cv\CamShift.cpp lib\MxArray.obj -output +cv\CamShift CamShift.cpp C:\Program Files\MATLAB\R2012b\extern\include\tmwtypes.h(821) : warning C4091: 'typedef ': ignored on left of 'wchar_t' when no variable is declared c:\program files\matlab\r2012b\extern\include\matrix.h(319) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int The content of my mexopts.bat is
rem rem StorageVersion: 1.0 rem C++keyFileName: MSVC100OPTS.BAT rem C++keyName: Microsoft Visual C++ 2010 rem C++keyManufacturer: Microsoft rem C++keyVersion: 10.0 rem C++keyLanguage: C++ rem C++keyLinkerName: Microsoft Visual C++ 2010 rem C++keyLinkerVersion: 10.0 rem rem ******************************************************************** rem General parameters rem ******************************************************************** set MATLAB=%MATLAB% set VSINSTALLDIR=c:\Program Files (x86)\Microsoft Visual Studio 10.0 set VCINSTALLDIR=%VSINSTALLDIR%\VC set OPENCVDIR=C:\OpenCV rem In this case, LINKERDIR is being used to specify the location of the SDK set LINKERDIR=c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\ set PATH=%VCINSTALLDIR%\bin\amd64;%VCINSTALLDIR%\bin;%VCINSTALLDIR%\VCPackages;%VSINSTALLDIR%\Common7\IDE;%VSINSTALLDIR%\Common7\Tools;%LINKERDIR%\bin\x64;%LINKERDIR%\bin;%MATLAB_BIN%;%PATH% set INCLUDE=%OPENCVDIR%\build\include;%VCINSTALLDIR%\INCLUDE;%VCINSTALLDIR%\ATLMFC\INCLUDE;%LINKERDIR%\include;%INCLUDE% set LIB=%OPENCVDIR%\build\x64\vc10\lib;%VCINSTALLDIR%\LIB\amd64;%VCINSTALLDIR%\ATLMFC\LIB\amd64;%LINKERDIR%\lib\x64;%MATLAB%\extern\lib\win64;%LIB% set MW_TARGET_ARCH=win64 rem ******************************************************************** rem Compiler parameters rem ******************************************************************** set COMPILER=cl set COMPFLAGS=/c /GR /W3 /EHs /D_CRT_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_DEPRECATE /D_SECURE_SCL=0 /DMATLAB_MEX_FILE /nologo /MD set OPTIMFLAGS=/O2 /Oy- /DNDEBUG set DEBUGFLAGS=/Z7 set NAME_OBJECT=/Fo rem ******************************************************************** rem Linker parameters rem ******************************************************************** set LIBLOC=%MATLAB%\extern\lib\win64\microsoft set LINKER=link set LINKFLAGS=/dll /export:%ENTRYPOINT% /LIBPATH:"%LIBLOC%" libmx.lib libmex.lib libmat.lib /MACHINE:X64 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib opencv_calib3d242.lib opencv_contrib242.lib opencv_core242.lib opencv_features2d242.lib opencv_flann242.lib opencv_gpu242.lib opencv_haartraining_engine.lib opencv_imgproc242.lib opencv_highgui242.lib opencv_legacy242.lib opencv_ml242.lib opencv_nonfree242.lib opencv_objdetect242.lib opencv_photo242.lib opencv_stitching242.lib opencv_ts242.lib opencv_video242.lib opencv_videostab242.lib /nologo /manifest /incremental:NO /implib:"%LIB_NAME%.x" /MAP:"%OUTDIR%%MEX_NAME%%MEX_EXT%.map" set LINKOPTIMFLAGS= set LINKDEBUGFLAGS=/debug /PDB:"%OUTDIR%%MEX_NAME%%MEX_EXT%.pdb" set LINK_FILE= set LINK_LIB= set NAME_OUTPUT=/out:"%OUTDIR%%MEX_NAME%%MEX_EXT%" set RSP_FILE_INDICATOR=@ rem ******************************************************************** rem Resource compiler parameters rem ******************************************************************** set RC_COMPILER=rc /fo "%OUTDIR%mexversion.res" set RC_LINKER= set POSTLINK_CMDS=del "%LIB_NAME%.x" "%LIB_NAME%.exp" set POSTLINK_CMDS1=mt -outputresource:"%OUTDIR%%MEX_NAME%%MEX_EXT%;2" -manifest "%OUTDIR%%MEX_NAME%%MEX_EXT%.manifest" set POSTLINK_CMDS2=del "%OUTDIR%%MEX_NAME%%MEX_EXT%.manifest" set POSTLINK_CMDS3=del "%OUTDIR%%MEX_NAME%%MEX_EXT%.map"

    Read the article

  • Problem with bootstrap loader and kernel

    - by dboarman-FissureStudios
    We are working on a project to learn how to write a kernel and learn the ins and outs. We have a bootstrap loader written and it appears to work. However we are having a problem with the kernel loading. I'll start with the first part: bootloader.asm: [BITS 16] [ORG 0x0000] ; ; all the stuff in between ; ; the bottom of the bootstrap loader datasector dw 0x0000 cluster dw 0x0000 ImageName db "KERNEL SYS" msgLoading db 0x0D, 0x0A, "Loading Kernel Shell", 0x0D, 0x0A, 0x00 msgCRLF db 0x0D, 0x0A, 0x00 msgProgress db ".", 0x00 msgFailure db 0x0D, 0x0A, "ERROR : Press key to reboot", 0x00 TIMES 510-($-$$) DB 0 DW 0xAA55 ;************************************************************************* The bootloader.asm is too long for the editor without causing it to chug and choke. In addition, the bootloader and kernel do work within bochs as we do get the message "Welcome to our OS". Anyway, the following is what we have for a kernel at this point. kernel.asm: [BITS 16] [ORG 0x0000] [SEGMENT .text] ; code segment mov ax, 0x0100 ; location where kernel is loaded mov ds, ax mov es, ax cli mov ss, ax ; stack segment mov sp, 0xFFFF ; stack pointer at 64k limit sti mov si, strWelcomeMsg ; load message call _disp_str mov ah, 0x00 int 0x16 ; interrupt: await keypress int 0x19 ; interrupt: reboot _disp_str: lodsb ; load next character or al, al ; test for NUL character jz .DONE mov ah, 0x0E ; BIOS teletype mov bh, 0x00 ; display page 0 mov bl, 0x07 ; text attribute int 0x10 ; interrupt: invoke BIOS jmp _disp_str .DONE: ret [SEGMENT .data] ; initialized data segment strWelcomeMsg db "Welcome to our OS", 0x00 [SEGMENT .bss] ; uninitialized data segment Using nasm 2.06rc2 I compile as such: nasm bootloader.asm -o bootloader.bin -f bin nasm kernel.asm -o kernel.sys -f bin We write bootloader.bin to the floppy as such: dd if=bootloader.bin bs=512 count=1 of/dev/fd0 We write kernel.sys to the floppy as such: cp kernel.sys /dev/fd0 As I stated, this works in bochs. But booting from the floppy we get output like so: Loading Kernel Shell ........... ERROR : Press key to reboot Other specifics: OpenSUSE 11.2, GNOME desktop, AMD x64 Any other information I may have missed, feel free to ask. I tried to get everything in here that would be needed. If I need to, I can find a way to get the entire bootloader.asm posted somewhere. We are not really interested in using GRUB either for several reasons. This could change, but we want to see this boot successful before we really consider GRUB.

    Read the article

  • How to change image through click - javascript

    - by Elmir Kouliev
    I have a toolbar that has 5 table cells. The first cell looks clear, and the other 4 have a shade over them. I want to make it so that clicking on the table cell will also change the image so that the shade will also change in respect to the current table cell that is selected. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <title>X?B?RL?R V? HADIS?L?R</title> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <link rel="stylesheet" href="N&SAz.css" /> <link rel="shortcut icon" href="../../Images/favicon.ico" /> <script type="text/javascript"> var switchTo5x = true; </script> <script type="text/javascript" src="http://w.sharethis.com/button/buttons.js"></script> <script type="text/javascript"> stLight.options({ publisher: "581d0c30-ee9d-4c94-9b6f-a55e8ae3f4ae" }); </script> <script src="../../jquery-1.7.2.min.js" type="text/javascript"> </script> <script type="text/javascript"> $(document).ready(function () { $(".fade").css("display", "none"); $(".fade").fadeIn(20); $("a.transition").click(function (event) { event.preventDefault(); linkLocation = this.href; $("body").fadeOut(500, redirectPage); }); function redirectPage() { window.location = linkLocation; } }); $(document).ready(function () { $('.preview').hide(); $('#link_1').click(function () { $('#latest_story_preview1').hide(); $('#latest_story_preview2').hide(); $('#latest_story_preview3').hide(); $('#latest_story_preview4').hide(); $('#latest_story_main').fadeIn(800); }); $('#link_2').click(function () { $('#latest_story_main').hide(); $('#latest_story_preview2').hide(); $('#latest_story_preview3').hide(); $('#latest_story_preview4').hide(); $('#latest_story_preview1').fadeIn(800); }); $('#link_3').click(function () { $('#latest_story_main').hide(); $('#latest_story_preview1').hide(); $('#latest_story_preview3').hide(); $('#latest_story_preview4').hide(); $('#latest_story_preview2').fadeIn(800); }); $('#link_4').click(function () { $('#latest_story_main').hide(); $('#latest_story_preview1').hide(); $('#latest_story_preview2').hide(); $('#latest_story_preview4').hide(); $('#latest_story_preview3').fadeIn(800); }); $('#link_5').click(function () { $('#latest_story_main').hide(); $('#latest_story_preview1').hide(); $('#latest_story_preview2').hide(); $('#latest_story_preview3').hide(); $('#latest_story_preview4').fadeIn(800); }); $(".fade").css("display", "none"); $(".fade").fadeIn(1200); $("a.transition").click(function (event) { event.preventDefault(); linkLocation = this.href; $("body").fadeOut(500, redirectPage); }); }); </script> </head> <body id="body" style="background-color:#FFF;" onload="document"> <div style="margin:0px auto;width:1000px;" id="all_content"> <div id="top_content" style="background-color:transparent;"> <ul id="translation_list"> <li> <a href=""> AZ </a> </li> <li> <a href="#"> RUS </a> </li> <li> <a href="#"> ENG </a> </li> </ul> <div id="share_buttons"> <span class='st_facebook' displayText='' title="Facebook"></span> <span class='st_twitter' displayText='' title="Twitter"></span> <span class='st_linkedin' displayText='' title="Linkedin"></span> <span class='st_googleplus' displayText='' title="Google +"></span> <span class='st_email' displayText='' title="Email"></span> </div> <img src="../../Images/RasulGuliyev.png" width="330" height="80" id="top_logo"> <br /> <br /> <div class="fade" id="navigation"> <ul> <font face="Verdana, Geneva, sans-serif"> <li> <a 
href="../../index.html"> ANA S?HIF? </a> </li> <li> <a href="../biographyAZ.html"> BIOQRAFIYA </a> </li> <li style="background-color:#9C1A35;"> <a href="#"> X?B?RL?R V? HADIS?L?R </a> </li> <li> <a> PROQRAM </a> </li> <li> <a> SEÇICIL?R </a> </li> <li> <a> ?LAQ?L?R</a> </li> </font> </ul> </div> <font face="Tahoma, Geneva, sans-serif"> <br /> <div id="navigation2"> <ul> <a> <li><i> HADIS?L?R </i></li> </a> <a> <li><i>VIDEOLAR</i> </li> </a> </ul> </div> <div id="news_section" style="background-color:#FFF;"> <h3 style="font-weight:100; font-size:22px; font-style:normal; color:#7C7C7C;">Son X?b?rl?r</h3> <div class="fade" id="Latest-Stories"> <table id="stories-preview" width="330" height="598" border="0" cellpadding="0" cellspacing="0"> <tr> <td> <a id="link_1" href="#"><img src="../../Images/N&EImages/images/Article-Nav-Bar1_01.gif" width="330" height="114" alt=""></a> </td> </tr> <tr> <td> <a id="link_2" href="#"> <img src="../../Images/N&EImages/images/Article-Nav-Bar1_02.gif" width="330" height="109" alt=""> </a> </td> </tr> <tr> <td> <a id="link_3" href="#"> <img src="../../Images/N&EImages/images/Article-Nav-Bar1_03.gif" width="330" height="132" alt=""></a> </td> </tr> <tr> <td> <a id="link_4" href="#"><img src="../../Images/N&EImages/images/Article-Nav-Bar1_04.gif" width="330" height="124" alt=""></a> </td> </tr> <tr> <td> <a id="link_5" href="#"><img src="../../Images/N&EImages/images/Article-Nav-Bar1_05.gif" width="330" height="119" alt=""></a> </td> </tr> </table> <div class="fade" id="latest_story_main"> <!--START--> <img src="../../Images/N&EImages/GuliyevFace.jpeg" style="padding:4px; margin-top:6px; border-style:groove; border-width:thin; margin-left:90px;" /> <a href="#"> <h2 style="font-weight:100; font-style:normal;"> "Bizim V?zif?miz Az?rbaycan Xalqinin T?zyiq? M?ruz Qalmamasini T?min Etm?kdir" </h2> </a> <h5 style="font-weight:100; font-size:12px; color:#888; opacity:.9;"> <img src="../../Images/ClockImage.png" />IYUN 19, 2012 BY RASUL GULIYEV - R?SUL QULIYEV</h5> <p style="font-size:14px; font-style:normal;">ACP-nin v? Müqavim?t H?r?katinin lideri, eks-spiker R?sul Quliyev "Yeni Müsavat"a müsahib? verib. O, son vaxtlar ACP-d? bas ver?n kadr d?yisiklikl?ri, bar?sind? dolasan söz-söhb?tl?r v? dig?r m?s?l?l?r? aydinliq g?tirib. Müsahib?ni t?qdim edirik. – Az?rbaycanda siyasi günd?mi ?hat? ed?n m?s?l?l?rd?n biri d? Sülh?ddin ?kb?rin ACP-y? s?dr g?tirilm?sidir. Ideya v? t?s?bbüs kimin idi? – ?vv?ll?r d? qeyd <a href="#"> [...]</a> </p> <!--FIRST STORY END --> </div> <div class="preview" id="latest_story_preview1"> <!--START--> <img src="../../Images/N&EImages/GuliyevFace2.jpeg" style="padding:4px; margin-top:6px; border-style:groove; border-width:thin; margin-left:90px;" /> <a href="#"> <h2 style="font-weight:100; font-style:normal;"> "S?xsiyy?ti Alçaldilan Insanlarin Qisasi Amansiz Olur" </h2></a> <h5 style="font-weight:100; font-size:12px; color:#888; opacity:.9;"> <img src="../../Images/ClockImage.png" />IYUN 12, 2012 BY RASUL GULIYEV - R?SUL QULIYEV</h5> <p style="font-size:14px; font-style:normal;">R?sul Quliyev: "Az?rbaycanda müxalif?tin görün?n f?aliyy?ti ?halinin hökum?td?n naraziliq potensialini ifad? etmir" Eks-spiker Avropa görüsl?rinin yekunlarini s?rh etdi ACP lideri R?sul Quliyevin Avropa görüsl?ri basa çatib. S?f?rin yekunlari bar?d? R?sul Quliyev eksklüziv olaraq "Yeni Müsavat"a açiqlama verib. Norveçd? keçiril?n görüsl?rd? Açiq C?miyy?t v? Liberal Demokrat partiyalarinin s?drl?ri Sülh?ddin ?kb?r, Fuad ?liyev v? 
Müqavim?t H?r?kati Avropa <a href="#"> [...]</a> </p> <!--SECOND STORY END --> </div> <div class="preview" id="latest_story_preview2"> <!--START--> <img src="../../Images/N&EImages/GuliyevFace3.jpeg" style="padding:4px; margin-top:6px; border-style:groove; border-width:thin; margin-left:90px;" /> <a href="#"> <h2 style="font-weight:100; font-style:normal;"> R?sul Quliyevin Iyunun 4, 2012-ci ild? Bryusseld?ki Görüsl?rl? ?laq?dar Çixisi </h2></a> <h5 style="font-weight:100; font-size:12px; color:#888; opacity:.9;"> <img src="../../Images/ClockImage.png" />IYUN 4, 2012 BY RASUL GULIYEV - R?SUL QULIYEV</h5> <p style="font-size:14px; font-style:normal;">Brüssel görüsl?ri – Az?rbaycanda xalqin malini ogurlayan korrupsioner Höküm?t liderl?rinin xarici banklarda olan qara pullari v? ?mlaklarinin dondurulmasina çox qalmayib. Camaatin hüquqlarini pozan polis, prokuratura v? m?hk?m? isçil?rin? v? onlarin r?hb?rl?rin? viza m?hdudiyy?tl?ri qoymaqda reallasacaq. R?sul Quliyevin Iyunun 4, 2012-ci ild? Bryusseld?ki Görüsl?rl? ?laq?dar Çixisi Rasul Guliyev's Speech on June 4, 2012 about Brussels Meetings <a href="#">[...]</a> </p> <!--THIRD STORY END --> </div> <div class="preview" id="latest_story_preview3"> <!--START--> <img src="../../Images/N&EImages/GuliyevGroup1.jpeg" style="padding:4px; margin-top:6px; border-style:groove; border-width:thin; margin-left:90px;" /> <a href="#"> <h2 style="font-weight:100; font-style:normal;"> R?sul Quliyevin Avropa Parlamentind? v? Hakimiyy?t Qurumlarinda Görüsl?ri Baslamisdir </h2></a> <h5 style="font-weight:100; font-size:12px; color:#888; opacity:.9;"> <img src="../../Images/ClockImage.png" />MAY 31, 2012 BY RASUL GULIYEV - R?SUL QULIYEV</h5> <p style="font-size:14px; font-style:normal;">Aciq C?miyy?t Partiyasinin lideri, eks-spiker R?sul Quliyev Avropa Parlamentind? görüsl?rini davam etdirir. Bu haqda "Yeni Müsavat"a R.Quliyev özü m?lumat verib. O bildirib ki, görüsl?rd? Liberal Demokrat Partiyasinin s?dri Fuad ?liyev v? R.Quliyevin Skandinaviya ölk?l?ri üzr? müsaviri Rauf K?rimov da istirak edirl?r. Eks-spiker deyib ki, bu görüsl?r 2013-cü ild? keçiril?c?k prezident seçkil?rind? saxtalasdirmanin qarsisini almaq planinin [...]</p> <!--FOURTH STORY END --> </div> <div class="preview" id="latest_story_preview4"> <!--START--> <img src="../../Images/N&EImages/GuliyevGroup2.jpeg" style="padding:4px; margin-top:6px; border-style:groove; border-width:thin; margin-left:90px;" /> <a href="#"> <h2 style="font-weight:100; font-style:normal;"> Norveçin Oslo S?h?rind? Parlament Üzvl?ri il? v? Xarici Isl?r Nazirliyind? Görüsl?r </h2></a> <h5 style="font-weight:100; font-size:12px; color:#888; opacity:.9;"> <img src="../../Images/ClockImage.png" />MAY 30, 2012 BY RASUL GULIYEV - R?SUL QULIYEV</h5> <p style="font-size:14px; font-style:normal;">R?sul Quliyev Norveçin Oslo s?h?rind? Parlament üzvl?ri v? Xarici isl?r nazirliyind? görüsl?r keçirmisdir. Bu görüsl?rd? Az?rbaycandan Liberal Demokrat Partiyasinin s?dri Fuad ?liyev, Avro-Atlantik Surasinin s?dri Sülh?ddin ?kb?r v? Milli Müqavim?t H?r?katinin Skandinaviya ölk?l?ri üzr? nümay?nd?si Rauf K?rimov istirak etmisdir. Siyasil?r ilk ?vv?l mayin 22-d? Norveç Parlamentinin Avropa Surasinda t?msil ed?n nümay?nd? hey?tinin üzvül?ri Karin S. 
[...]</a> </p> <!--FIFTH STORY END --> </div> <hr /> </div> <!--LATEST STORIES --> <div class="fade" id="article-section"> <h3 style="font-weight:100; font-size:22px; font-style:normal; color:#7C7C7C;">Çecin X?b?rl?r</h3> <div class="older-article"> <img src="../../Images/N&EImages/GuliyevGroup2.jpeg" style="padding:4px; margin-top:6px; border-style:groove; border-width:thin; margin-left:90px;" /> <a href="#"> <h2 style="font-weight:100; font-style:normal;"> Norveçin Oslo S?h?rind? Parlament Üzvl?ri il? v? Xarici Isl?r Nazirliyind? Görüsl?r </h2></a> <h5 style="font-weight:100; font-size:12px; color:#888; opacity:.9;"> <img src="../../Images/ClockImage.png" />MAY 30, 2012 BY RASUL GULIYEV - R?SUL QULIYEV</h5> <p style="font-size:14px; font-style:normal;">R?sul Quliyev Norveçin Oslo s?h?rind? Parlament üzvl?ri v? Xarici isl?r nazirliyind? görüsl?r keçirmisdir. Bu görüsl?rd? Az?rbaycandan Liberal Demokrat Partiyasinin s?dri Fuad ?liyev, Avro-Atlantik Surasinin s?dri Sülh?ddin ?kb?r v? Milli Müqavim?t H?r?katinin Skandinaviya ölk?l?ri üzr? nümay?nd?si Rauf K?rimov istirak etmisdir. Siyasil?r ilk ?vv?l mayin 22-d? Norveç Parlamentinin Avropa Surasinda t?msil ed?n nümay?nd? hey?tinin üzvül?ri Karin S. [...]</a> </p> </div> <hr /> </div> <!--NEWS SECTION--> </font> <h3 class="fade" id="footer">Rasul Guliyev 2012</h3> </div> </body> </head> </html>

    Read the article

  • No value given for one or more required parameters in connection initialisation

    - by DarkJaff
    Hi everyone, I have a C# Windows Forms application that uses an Access database. The application works perfectly in both debug and release builds and on every version of Windows, but it crashes on one computer running Windows 7. The message I get is: System.Data.OleDb.OleDbException: No value given for one or more required parameters. The function that supposedly fails is this:

        public void InitConnection(string strFile)
        {
            string strConnection = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};User Id=admin;Password=;", strFile);
            m_conn = new OleDbConnection(strConnection);
            try
            {
                // Check that the connection is not already open
                if (m_conn.State != ConnectionState.Open)
                {
                    m_conn.Open();
                    m_VCoeffModele = GetModeleCoeff();
                }
            }
            catch (Exception err)
            {
                throw err;
            }
        }

    I think it is something related to the connection string, but why does it happen only on that computer? Thanks for your help! DarkJaff

    EDIT: Here is the complete error message: See the end of this message for details on invoking just-in-time (JIT) debugging instead of this dialog box.

        ***** Exception Text *******
        System.Data.OleDb.OleDbException: No value given for one or more required parameters.
           at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr)
           at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult)
           at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult)
           at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult)
           at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method)
           at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.OleDb.OleDbCommand.ExecuteReader()
           at DatabaseLayer.DatabaseFacade.GetModeleCoeff()
           at DatabaseLayer.DatabaseFacade.InitConnection(String strFile)
           at CalculatriceCHW.ListeMesure.OuvrirFichier(String strFichier)
           at CalculatriceCHW.ListeMesure.nouveauFichierMenu_Click(Object sender, EventArgs e)
           at System.Windows.Forms.ToolStripItem.RaiseEvent(Object key, EventArgs e)
           at System.Windows.Forms.ToolStripMenuItem.OnClick(EventArgs e)
           at System.Windows.Forms.ToolStripItem.HandleClick(EventArgs e)
           at System.Windows.Forms.ToolStripItem.HandleMouseUp(MouseEventArgs e)
           at System.Windows.Forms.ToolStripItem.FireEventInteractive(EventArgs e, ToolStripItemEventType met)
           at System.Windows.Forms.ToolStripItem.FireEvent(EventArgs e, ToolStripItemEventType met)
           at System.Windows.Forms.ToolStrip.OnMouseUp(MouseEventArgs mea)
           at System.Windows.Forms.ToolStripDropDown.OnMouseUp(MouseEventArgs mea)
           at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
           at System.Windows.Forms.ToolStrip.WndProc(Message& m)
           at System.Windows.Forms.ToolStripDropDown.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
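    The stack trace points at GetModeleCoeff() rather than at the connection itself; with the Jet OLE DB provider this exception is usually raised when a name in the SQL is neither a column of the .mdb on that machine nor a supplied parameter, which is why an older or differently structured copy of the database on one computer can trigger it. A minimal sketch of how such a query can bind its placeholder explicitly; the real GetModeleCoeff() is not shown in the question, so the return type and the table/column names (Modele, Coefficient, Actif) here are assumptions for illustration only:

        private double GetModeleCoeff()
        {
            // Sketch only: table and column names are assumed, not taken from the question.
            using (var cmd = new OleDbCommand("SELECT Coefficient FROM Modele WHERE Actif = ?", m_conn))
            {
                // Jet reports "No value given for one or more required parameters" when a
                // name in the SQL resolves to neither a column nor a supplied parameter,
                // so every placeholder (and every column name) must exist on this machine's file.
                cmd.Parameters.AddWithValue("Actif", true);
                object result = cmd.ExecuteScalar();
                return result == null || result == DBNull.Value ? 0.0 : Convert.ToDouble(result);
            }
        }

    If the query already binds its parameters, comparing the schema of the .mdb file on the failing machine against a working one is the next thing to check.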

    Read the article

  • How can I make my Perl Jabber bot an event-driven program?

    - by TheGNUGuy
    I'm trying to make a Jabber bot and I am having trouble keeping it running while waiting for messages. How do I get my script to run continuously? I have tried calling a subroutine with a while loop that, in theory, checks for any messages and reacts accordingly, but my script isn't behaving that way. Here is my source: http://pastebin.com/03Habbvh

        # set jabber bot callbacks
        $jabberBot->SetMessageCallBacks(chat => \&chat);
        $jabberBot->SetPresenceCallBacks(available => \&welcome, unavailable => \&killBot);
        $jabberBot->SetCallBacks(receive => \&prnt, iq => \&gotIQ);
        $jabberBot->PresenceSend(type => "available");
        $jabberBot->Process(1);

        sub welcome {
            print "Welcome!\n";
            $jabberBot->MessageSend(to => $jbrBoss->GetJID(), subject => "", body => "Hello There!", type => "chat", priority => 10);
            &keepItGoing;
        }

        sub prnt { print $_[1]."\n"; }

        #$jabberBot->MessageSend(to => $jbrBoss->GetJID(), subject => "", body => "Hello There! Global...", type => "chat", priority => 10);
        #$jabberBot->Process(5);
        #&keepItGoing;

        sub chat {
            my ($sessionID, $msg) = @_;
            $dump->pl2xml($msg);
            if ($msg->GetType() ne 'get' && $msg->GetType() ne 'set' && $msg->GetType() ne '') {
                my $jbrCmd = &trimSpaces($msg->GetBody());
                my $dbQry = $dbh->prepare("SELECT command,acknowledgement FROM commands WHERE message = '".lc($jbrCmd)."'");
                $dbQry->execute();
                if ($dbQry->rows() > 0 && $jbrCmd !~ /^insert/si) {
                    my $ref = $dbQry->fetchrow_hashref();
                    $dbQry->finish();
                    $jabberBot->MessageSend(to => $msg->GetFrom(), subject => "", body => $ref->{'acknowledgement'}, type => "chat", priority => 10);
                    eval $ref->{'command'};
                    &keepItGoing;
                } else {
                    $jabberBot->MessageSend(to => $msg->GetFrom(), subject => "", body => "I didn't understand you!", type => "chat", priority => 10);
                    $dbQry->finish();
                    &keepItGoing;
                }
            }
        }

        sub gotIQ { print "iq\n"; }

        sub trimSpaces {
            my $string = $_[0];
            $string =~ s/^\s+//;  # remove leading spaces
            $string =~ s/\s+$//;  # remove trailing spaces
            return $string;
        }

        sub keepItGoing {
            print "keepItGoing!\n";
            my $proc = $jabberBot->Process(1);
            while (defined($proc) && $proc != 1) {
                $proc = $jabberBot->Process(1);
            }
        }

        sub killBot {
            print "killing\n";
            $jabberBot->MessageSend(to => $_[0]->GetFrom(), subject => "", body => "Logging Out!", type => "chat", priority => 10);
            $jabberBot->Process(1);
            $jabberBot->Disconnect();
            exit;
        }
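    With Net::Jabber-style clients the usual event-driven shape is to register the callbacks once and drive everything from a single top-level loop around Process(), letting each callback do its work and return, rather than re-entering Process() from inside the callbacks as keepItGoing does here. A minimal sketch of that shape, with the connection and authentication steps omitted and the variable name taken from the question (the exact setup calls are assumed):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # ... create, connect and authenticate $jabberBot here, as in the original script ...

        # Register callbacks once; they should do their work and simply return.
        $jabberBot->SetMessageCallBacks(chat => \&chat);
        $jabberBot->SetPresenceCallBacks(available => \&welcome, unavailable => \&killBot);
        $jabberBot->PresenceSend(type => "available");

        # Single event loop: Process(1) waits up to one second for traffic and
        # dispatches it to the callbacks; it returns undef once the connection dies.
        while (defined($jabberBot->Process(1))) {
            # nothing to do here -- all reactions happen in the callbacks
        }

        print "Connection closed, exiting.\n";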

    Read the article

  • How to make multiple queries with PHP prepared statements (Commands out of sync error)

    - by Tirithen
    I'm trying to run three MySQL queries from a PHP script. The first one works fine, but on the second one I get the "Commands out of sync; you can't run this command now" error. I have managed to understand that I need to "empty" the result set before preparing a new query, but I can't seem to understand how. I thought that $statement->close; would do that for me. Here is the relevant part of the code:

        <?php
        $statement = $db_conn->prepare("CALL getSketches(?,?)"); // Prepare SQL routine to check if user is accepted
        $statement->bind_param("is", $user_id, $loaded_sketches); // Bind variables to send
        $statement->execute(); // Execute the query
        $statement->bind_result( // Set return variables
            $id, $name, $description, $visibility, $createdby_id, $createdby_name, $createdon, $permission
        );

        $new_sketches_id = array();
        while ($statement->fetch()) {
            $result['newSketches'][$id] = array(
                "name" => $name,
                "description" => $description,
                "visibility" => $visibility,
                "createdById" => $createdby_id,
                "createdByName" => $createdby_name,
                "createdOn" => $createdon,
                "permission" => $permission
            );
            $new_sketches_id[] = $id;
        }
        $statement->close; // Close statement

        $new_sketches_ids = implode(",", $new_sketches_id);

        // Get the new sketches elements
        $statement = $db_conn->prepare("CALL getElements(?,'',?,'00000000000000')"); // Prepare SQL routine to check if user is accepted
        // The script crashes here with $db_conn->error
        // "Commands out of sync; you can't run this command now"
        $statement->bind_param("si", $new_sketches_ids, $user_id); // Bind variables to send
        $statement->execute(); // Execute the query
        $statement->bind_result( // Set return variables
            $id, $user_id, $type, $attribute_d, $attribute_stroke, $attribute_strokeWidth, $sketch_id, $createdon
        );
        while ($statement->fetch()) {
            $result['newSketches'][$sketch_id]['newElements']["u".$user_id."e".$id] = array(
                "type" => $type,
                "d" => $attribute_d,
                "stroke" => $attribute_stroke,
                "strokeWidth" => $attribute_strokeWidth,
            );
        }
        $statement->close; // Close statement
        ?>

    How can I make the second query without closing and reopening the entire database connection?
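    Two details are worth checking in the code above. First, $statement->close; without parentheses only reads a non-existent property, so the statement is never actually closed; it needs to be $statement->close(). Second, a stored procedure called through mysqli returns one extra result set beyond its rows, and that extra result has to be consumed with next_result() before another statement can be prepared on the same connection. A minimal sketch of the pattern between the two CALLs, reusing the variable names from the question:

        <?php
        // ... first CALL getSketches(?,?) prepared, executed and fetched as above ...

        $statement->close(); // note the parentheses: actually close the statement

        // A stored procedure leaves an extra result set behind; flush every
        // pending result so the connection is ready for the next prepare().
        while ($db_conn->more_results() && $db_conn->next_result()) {
            if ($extra = $db_conn->store_result()) {
                $extra->free(); // discard any leftover result set from the CALL
            }
        }

        // Now the second procedure can be prepared without "Commands out of sync".
        $statement = $db_conn->prepare("CALL getElements(?,'',?,'00000000000000')");
        ?>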

    Read the article

  • Magento: add product twice to cart, with different attributes!

    - by Peter
    Hello all, I have been working on this for a whole day but I cannot find any solution. I have a product (lenses) whose attributes are identical, but the user can choose one attribute set for one eye and another attribute set for the other. On the frontend it looks OK, see it here: http://connecta.si/clarus/index.php/featured/acuvue-oasys-for-astigmatism.html The user can select attributes for the left or right eye, but it is the same product. I built a function which should take the product in the cart (before save) and add another set of attributes, so that there are two products in the cart. What actually happens is that there are two products, but with the same set of attributes??? Here is the snippet of the function:

        $req = Mage::app()->getRequest();
        $request['qty'] = 1;
        $request['product'] = 15;
        $request['uenc'] = $req->get('uenc');
        $request['options'][1] = 1;
        $request['options'][3] = 5;
        $request['options'][2] = 3;
        $reqo = new Varien_Object($request);
        $newitem = $quote->addProduct($founditem->getProduct(), $reqo);

        // add another one ------------------------------------------
        $request['qty'] = 1;
        $request['product'] = 15;
        $request['uenc'] = $req->get('uenc');
        $request['options'][1] = 2;
        $request['options'][3] = 6;
        $request['options'][2] = 4;
        $reqo = new Varien_Object($request);
        $newitem = $quote->addProduct($founditem->getProduct(), $reqo);

    Here is another test with some other functions (again, the product is added with a quantity of 2, but with the same attributes...):

        $req = Mage::app()->getRequest();
        $request['qty'] = 1;
        $request['product'] = 15;
        $request['uenc'] = $req->get('uenc');
        $request['options'][1] = 2;
        $request['options'][3] = 6;
        $request['options'][2] = 4;
        $product = $founditem->getProduct();
        $cart = Mage::getSingleton('checkout/cart');

        // delete all first...
        $cart->getItems()->clear()->save();

        $reqo = new Varien_Object($request);
        $cart->addProduct($founditem->getProduct(), $reqo);
        $cart->getItems()->save();

        $request['options'][1] = 1;
        $request['options'][3] = 5;
        $request['options'][2] = 3;
        $reqo = new Varien_Object($request);
        $cart->addProduct($founditem->getProduct(), $reqo);
        $cart->getItems()->save();

    I really don't know what more to do; any advice would be appreciated. This is my first module in Magento... Thank you, Peter
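    In Magento 1 the quote decides whether a newly added product should merge into an existing line item by comparing the custom options attached to the two items (Mage_Sales_Model_Quote_Item::representProduct()), and a frequently reported cause of the symptom described here is reusing the same loaded product object for both addProduct() calls, so the options applied during the first add are still attached when the second add is processed. A hedged sketch of the workaround under that assumption, loading a fresh product instance per configuration; the product ID and option IDs are simply the values from the question:

        // First configuration: use a freshly loaded product instance for this add.
        $buyRequestLeft = new Varien_Object(array(
            'qty'     => 1,
            'product' => 15,
            'options' => array(1 => 1, 3 => 5, 2 => 3), // left eye (assumed option IDs)
        ));
        $productLeft = Mage::getModel('catalog/product')->load(15);
        $quote->addProduct($productLeft, $buyRequestLeft);

        // Second configuration: load the product again so that no options from the
        // first add are still attached to the object being added.
        $buyRequestRight = new Varien_Object(array(
            'qty'     => 1,
            'product' => 15,
            'options' => array(1 => 2, 3 => 6, 2 => 4), // right eye (assumed option IDs)
        ));
        $productRight = Mage::getModel('catalog/product')->load(15);
        $quote->addProduct($productRight, $buyRequestRight);

        $quote->collectTotals()->save();

    If the two items still merge, comparing the serialized option values stored on each quote item is a good way to see what representProduct() is actually comparing.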

    Read the article
