Search Results

Search found 7722 results on 309 pages for 'pitfalls to avoid'.

Page 152/309

  • What are the common techniques to handle user-generated HTML modified differently by different browsers?

    - by Jakie
    I am developing a website updater. The front end uses HTML, CSS and JavaScript, and the back end uses Python. The way it works is that <p/>, <b/> and some other HTML elements can be updated by the user. To enable this, I load the web page and, with jQuery, convert all those elements to <textarea/> elements. Once the content of a textarea is changed, I apply the change to the original elements and send it to a Python script to store the new content. The problem is that different browsers change the original HTML. How do you get around this issue? What Python libraries do you use? What techniques or application designs do you use to avoid or overcome it? The problems I found are: IE removes the quotes around class and id attributes, so <img class='abc'/> becomes <img class=abc/>. Firefox removes the backslash from line breaks: <br \> becomes <br>. Some websites have very specific display technicalities, so the insertion of a simple "\n" (which IE does) can affect the display of a website. Example: changing <img class='headingpic' /><div id="maincontent"> to <img class='headingpic'/>\n <div id="maincontent"> inserts a vertical gap in IE. The things I have tried, unsuccessfully, to overcome these issues: using either jQuery or Python to remove all >\n< occurrences, <br> tags and so on, which fails because I get different patterns in IE, sometimes ·\n, sometimes \n···; and, in Python, parsing the new HTML, extracting the new text/content and inserting it into the old HTML so the elements and formatting never change, just the content, which is very difficult and seems to be overkill.
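    One way to sidestep browser serialization quirks entirely is to treat whatever the browser sends as untrusted markup: re-parse it on the Python side and splice only the edited element's content back into the canonically stored HTML. A minimal sketch, assuming the beautifulsoup4 package is available (the element id and function names are illustrative, not from the original post):

        from bs4 import BeautifulSoup

        def apply_update(stored_html, element_id, new_fragment):
            """Replace only the inner content of the edited element inside the
            stored canonical HTML, so the stored markup never passes through a
            browser's serializer."""
            page = BeautifulSoup(stored_html, "html.parser")
            fragment = BeautifulSoup(new_fragment, "html.parser")
            target = page.find(id=element_id)
            if target is None:
                return stored_html          # unknown element: store nothing new
            target.clear()                  # drop the old children
            for node in list(fragment.contents):
                target.append(node)         # splice in the re-parsed content
            return str(page)

    Because the server re-serializes everything itself, missing attribute quotes or stray \n characters introduced by IE or Firefox never reach the stored copy.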

    Read the article

  • Is there any kind of established architecture for browser based MMO games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of the gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken here. The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source, the starting of a new round by a game master and such. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot - again: I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide. I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototyping different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that?

    Read the article

  • Quoted on MVA Voices

    A couple of weeks ago, I received an email from the Dean of Microsoft Virtual Academy (MVA) asking for permission to quote a statement I made during a jump start. Following is an excerpt from that request: "Dear Jochen, I would like to thank you for providing insight as to how the Advanced HTML5 Jump Start helped you improve your skills. I mentioned this to the leadership team at MVA, and they were pleased to hear this so much that they would like your permission to use a quote from your email to me on the MVA website." Of course! I really enjoy those free MVA jump starts - live and, later, the recordings. Actually, I prefer the live ones because you really have a chance to communicate with the MVA studio team and the experts in the chat. Luckily, the live stream is provided in two quality levels, and with the remote situation of Mauritius I always have to switch to 'Standard Quality' to avoid too much buffering and to enjoy a smooth experience. Later on, the recordings are great for rehearsal and repetition of the material. You can download and watch them offline while commuting, or - what I'm going to do in the future - use them as material for a study group within the Mauritius Software Craftsmanship Community (MSCC). For sure, this is going to be a lot of fun, and I'm looking forward to working with other Windows-oriented software craftsmen in order to 'push' them towards Microsoft certifications. By chance, I discovered today that my quote has been published in the MVA Voices section (screenshot of the Microsoft Virtual Academy website taken on 04.07.2013). Thank you very much, MVA - this made my day and I'm very happy to be quoted.

    Read the article

  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2. The latter direction is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one. This is what I want to avoid. Here are the two very simple methods I have written so far:

        // this method initiates the direction change and sets the parameters
        public void LookAt(Vector3 target)
        {
            _desiredDirection = target - _cameraPosition;
            _desiredDirection.Normalize();
            _rotation = new Matrix();
            _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
            _isLooking = true;
        }

        // this method gets executed by the Update() method if the _isLooking flag is up
        private void _lookingAt()
        {
            dist = Vector3.Distance(Direction, _desiredDirection);

            // check whether the current direction has reached the desired one
            if (dist >= 0.00001f)
            {
                _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
                _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
                Direction = Vector3.TransformNormal(Direction, _rotation);
            }
            else
            {
                _onDirectionReached();
                _isLooking = false;
            }
        }

    Again, the rotation works fine; the camera reaches its desired direction. But the speed is not constant over the course of the movement - it slows down. How can I achieve a rotation with constant speed?
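    A likely culprit is the un-normalized rotation axis: Vector3.Cross(Direction, _desiredDirection) shrinks toward zero length as the two directions align, and Matrix.CreateFromAxisAngle expects a unit-length axis, so the effective rotation gets smaller every frame. A hedged sketch of a constant-speed variant, assuming the update receives the elapsed frame time in seconds and a hypothetical _turnSpeed field holds the angular speed in radians per second (neither is in the original code):

        private void _lookingAt(float dt)
        {
            // angle still remaining between the current and the desired direction
            float remaining = (float)Math.Acos(
                MathHelper.Clamp(Vector3.Dot(Direction, _desiredDirection), -1f, 1f));

            if (remaining < 0.0001f)
            {
                _onDirectionReached();
                _isLooking = false;
                return;
            }

            Vector3 axis = Vector3.Cross(Direction, _desiredDirection);
            axis.Normalize();   // CreateFromAxisAngle needs a unit-length axis

            // advance at a constant angular speed, but never rotate past the target
            float step = Math.Min(_turnSpeed * dt, remaining);
            Vector3 newDirection = Vector3.TransformNormal(
                Direction, Matrix.CreateFromAxisAngle(axis, step));
            newDirection.Normalize();
            Direction = newDirection;
        }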

    Read the article

  • Nullable types and ?? operator C# [en-US]

    - by ruimachado
    Nullable types vs non-nullable types. While developing our C# projects it is frequent to compare a variable against null in order to avoid null exceptions, typically with a check such as "x == null" inside an if clause. However, not all types are nullable: setting a variable to null is not allowed in every case, it depends on what kind of type you are defining. But what if there were an extension to your non-nullable type that converted it to a nullable one? This extension really exists. As I said before, in C# you have nullable types, which represent all the values of an underlying type plus an additional null value, and they can be declared easily using "T?", where T is the type of the variable. For example, the normal int type cannot be null, so it is a non-nullable type, but if you declare an "int?" your variable can be null; what you do is convert a non-nullable type to a nullable type. Example: "int x = null;" is not allowed, while "int? x = null;" is allowed. While using nullable types you can check whether a variable is null the same way you would for any reference type. But what about setting a default value when a certain variable is null? In these cases the C# .NET Framework lets you set a default value when you try to assign a nullable type to a non-nullable type, using the ?? operator. If you don't use this operator you can still catch the InvalidOperationException which is thrown in these cases. Using the ?? operator your code becomes cleaner and easier to read, and as a bonus you can set a default value from multiple variables by chaining ?? operators. That's it. Thanks, Rui Machado rpmachado.wordpress.com
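    A minimal sketch of the same ideas in code (the variable names are illustrative, not from the original post):

        using System;

        class NullableDemo
        {
            static void Main()
            {
                int? maybeCount = null;                 // nullable int, null is allowed

                // Without ??: check HasValue yourself; reading .Value while the
                // variable is null throws InvalidOperationException
                int a = maybeCount.HasValue ? maybeCount.Value : 0;

                // With ??: the same intent, cleaner to read
                int b = maybeCount ?? 0;

                // Chained defaults across several nullable variables
                int? fromCache = null;
                int? fromConfig = 7;
                int c = fromCache ?? fromConfig ?? 0;   // picks 7

                Console.WriteLine("{0} {1} {2}", a, b, c);
            }
        }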

    Read the article

  • How can one-handed work in Ubuntu be eased?

    - by N.N.
    My right hand is temporarily immobilized and I would like to do some minor general work on my computer: mostly web browsing, mailing, and file and directory browsing and editing. For this I currently use Firefox, Thunderbird, Nautilus and the GNOME terminal (I have already asked a specific question about Emacs). Are there ways to ease such, or any other general, one-handed work in Ubuntu? I have found http://stackoverflow.com/questions/2391805/how-can-i-remain-productive-with-one-hand-completely-immobilized but that is not exactly what I am asking for. I want to ease whatever little time is spent one-handed in Ubuntu, and this is also interesting for situations where there is no injury involved, such as when one hand is occupied. I do realize I should avoid unnecessary strain. The main thing that is much slower one-handed is writing. Since I am only temporarily immobilized, it seems to make no sense to learn a new keyboard layout: I would be surprised if I managed to learn and become more effective with a new layout (than with one-handed QWERTY) before I can use my other hand again. What I have already found: sticky keys make it easier to enter keyboard commands; when writing one-handed there are more cases where it is useful to paste in phrases rather than re-enter them; and it is easier to use Super+S than Ctrl+Alt+arrow keys to switch workspaces.

    Read the article

  • What is the most efficient way to add and remove Slick2D sprites?

    - by kirchhoff
    I'm making a game in Java with Slick2D and I want to create planes which shoot:

        int maxBullets = 40;
        static int bullet = 0;
        Missile missile[] = new Missile[maxBullets];

    I want to create/move my missiles in the most efficient way, so I would appreciate your advice:

        public void shoot() throws SlickException {
            if (bullet < maxBullets) {
                if (missile[bullet] != null) {
                    missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(),
                            plane.image.getRotation());
                } else {
                    missile[bullet] = new Missile("resources/missile.png", plane.getCenterX(),
                            plane.getCenterY(), plane.image.getRotation());
                }
            } else {
                bullet = 0;
                missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(),
                        plane.image.getRotation());
            }
            bullet++;
        }

    I created the resetLocation method in my Missile class in order to avoid loading the resource again. Is that correct? In the update method I have this to move all the missiles:

        if (bullet > 0 && bullet < maxBullets) {
            float hyp = 0.4f * delta;
            if (bullet == 1) {
                missile[0].move(hyp);
            } else {
                for (int x = 0; x < bullet; x++) {
                    missile[x].move(hyp);
                }
            }
        }
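    For comparison, a common pattern is to keep the live missiles in a list and recycle spent ones through a small pool, so the image resource is loaded once, nothing is reset by index, and removal is just an iterator call. A hedged sketch reusing the names from the question (the isDead() check is a hypothetical flag the Missile class would need to expose, and java.util imports are assumed):

        private final List<Missile> active = new ArrayList<Missile>();
        private final Deque<Missile> pool = new ArrayDeque<Missile>();

        public void shoot() throws SlickException {
            Missile m = pool.isEmpty()
                    ? new Missile("resources/missile.png", plane.getCenterX(),
                                  plane.getCenterY(), plane.image.getRotation())
                    : pool.pop();
            m.resetLocation(plane.getCenterX(), plane.getCenterY(), plane.image.getRotation());
            active.add(m);
        }

        public void updateMissiles(int delta) {
            float hyp = 0.4f * delta;
            Iterator<Missile> it = active.iterator();
            while (it.hasNext()) {
                Missile m = it.next();
                m.move(hyp);
                if (m.isDead()) {       // hypothetical: off-screen or has hit something
                    it.remove();
                    pool.push(m);       // recycle instead of reallocating
                }
            }
        }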

    Read the article

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra that history is sacred; however, now that Mercurial comes bundled with the rebase extension and rebasing is a popular practice in git, I'm wondering whether it can really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware that rebasing is dangerous after pushing. On the other hand, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch). However, personally I think it would be better to be able to see the partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would IMHO have good value to those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase because it hides mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or, conversely, do you prefer to run into nifty, well-packed commits and disregard the programmers' experimentation process? Whichever one you choose, why does that work for you? (Having other team members keep history, or alternatively rebase it.)

    Read the article

  • C# XNA 2D Multiple boxes collision detection and movement

    - by zini
    Hi, I've been making a simple game where you shoot boxes that are coming towards you. All game objects are simple rectangles. Now I have a problem with collision detection: how do I check where the collision comes from so I can adjust the coordinates correctly? I have this kind of situation: http://imgur.com/8yjfW. Imagine that all of those blocks are moving towards you (the green box). If those orange boxes collide with each other, they should "avoid" each other and not pass through one another. I have a class Enemy which has properties x, y and such. Now I'm handling the collision like this:

        // os.Count is the number of other enemies colliding with this enemy
        if (os.Count == 0) {
            // If the enemy doesn't collide with another enemy
            lasty = y;
            lastx = x;
            slope = (x - player.x) / (y - player.y);
            x += slope * l;   // l is the "movement speed" of the enemy (float)
            if (y > player.y) {
                y = lasty;
            } else if (y < player.y) {
                y += l;
            }
        } else {
            foreach (Enemy b in os) {
                if (b.y > this.y) {
                    // If a colliding enemy is closer to the player than this enemy,
                    // that closer one will be moved towards the player
                    b.lasty = b.y;
                    if (!BiggestY(os)) {   // BiggestY returns true if this enemy has the biggest Y
                        b.y += b.l;
                    }
                    b.x = b.lastx;
                }
            }
        }

    But this is a very, very bad way to do it. I know it, but I just can't figure out another way. And as a matter of fact, this method doesn't even work very well; if multiple enemies are colliding with the same enemy, they pass through each other. I explained this pretty badly, but I hope you understand. To sum up, as I said: how do I check where the collision comes from so I can adjust the coordinates correctly?
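    One common way to answer "where did the collision come from" for axis-aligned rectangles is to measure the overlap on each axis and push the box out along the axis with the smaller overlap. A hedged sketch, assuming each Enemy exposes x, y, width and height (the last two are not shown in the question):

        // Returns true and pushes 'a' out of 'b' if their rectangles overlap.
        private static bool Separate(Enemy a, Enemy b)
        {
            float overlapX = Math.Min(a.x + a.width, b.x + b.width) - Math.Max(a.x, b.x);
            float overlapY = Math.Min(a.y + a.height, b.y + b.height) - Math.Max(a.y, b.y);
            if (overlapX <= 0 || overlapY <= 0)
                return false;                        // no collision

            if (overlapX < overlapY)
            {
                // shallower on X: push out horizontally, away from b's center
                a.x += (a.x + a.width / 2 < b.x + b.width / 2) ? -overlapX : overlapX;
            }
            else
            {
                // shallower on Y: push out vertically, away from b's center
                a.y += (a.y + a.height / 2 < b.y + b.height / 2) ? -overlapY : overlapY;
            }
            return true;
        }

    Running this for every colliding pair after the movement step keeps the boxes from passing through each other, and the axis and sign of the chosen push tell you which side the collision came from.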

    Read the article

  • Good sysadmin practice?

    - by Randomthrowaway
    Throwaway account here. Recently our sysadmin sent us the following email (I removed the names): Hi, I had a situation yesterday (not mentioning names) when I had to perform a three-way md5 checksum verification over the phone, more than once. If we can stick to the same standards then this will save any confusion if you are ever asked to repeat something over the phone or in the office for clarification. This is of particular importance when trying to speak or say this over the phone … m4f7s29gsd32156ffsdf … that’s really difficult to get right on a bad line. The rule is very simple: 1) Speak in blocks of 4 characters and continue until the end. The recipient can read back or ask for verification on one of the blocks. 2) Use the same language! http://en.wikipedia.org/wiki/Phoenetic_alphabet#NATO Myself, xxx and a few others I know all speak the NATO phonetic alphabet (aka police speak) and this makes it so much easier and saves so much time. If you want to learn quickly then all you really need is A to F and 0 to 9. 0 to 9 is really easy, A to F is only 6 characters to learn. Could you tell me whether forcing the developers to learn the NATO alphabet is good practice, or whether there are ways (and which ways) to avoid being in such a situation in the first place?

    Read the article

  • Is it inconsiderate to place editor settings inside code files?

    - by Carlos Campderrós
    I know this is kind of a subjective question, but I'm curious whether there is any good reason to place (or not to place) editor settings inside code files. I'm thinking of vi modelines, but it is possible that this applies to other editors. In short, a vi modeline is a line inside a file that tells vi how to behave (indent with spaces or tabs, set the tab width to X, autoindent by default or not, ...). It is placed inside a comment, so it won't affect the program/compiler when running. In a .c file it could look like: // vim: noai:ts=4:sw=4 On one hand, I think this shouldn't be inside the file, as it is an editor setting and so belongs in an editor configuration file or property. On the other hand, for projects involving developers outside one company (who are not bound to a particular editor/settings) or collaborators on github/bitbucket/..., it is an easy way to avoid breaking the code style (tabs vs spaces, for example) - but only for the ones that use that editor, though. I cannot see any reason powerful enough to decide for or against this practice, so I am unsure what to do.

    Read the article

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-shaped development process. We then have requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), the test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't it too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have made in several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront, this is not a duplicate of "Should programmers help testers in designing tests?", which talks about test preparation and not test execution, where we avoid involving developers.)

    Read the article

  • If I want to dual-boot Ubuntu with another OS, what partitioning method should I use?

    - by Jay
    I have Ubuntu running as a VM in VirtualBox at the moment, but in the future, if I want to dual-boot it with Windows or another OS installed on my hard drive, what partitioning method should I use to make room for it? 1) Manually partition my hard drive via Disk Management in Windows (or the equivalent in another OS), making appropriate room for the main partition upon which Ubuntu will be installed plus swap space; 2) partition via the Ubuntu installer options; or 3) use GParted or another free tool like it. I am uncertain as to why I would want to use one over the other. Lastly, am I correct to think that it would be the acme of foolishness to try to partition drives within a virtual machine (since that partitioning would be inherently limited by the limitations set upon it by the virtualization software, e.g. VirtualBox)? Thanks! P.S. Oh, and I am also planning on not modifying the Windows MBR if I ever do dual-boot with Ubuntu, using instead a piece of free software (like EasyBCD or something) to avoid the headaches of GRUB being overwritten by a Windows update.

    Read the article

  • Quadcopters Play Catch [Video]

    - by Jason Fitzpatrick
    Working like a group of hive-minded bees, these quadcopters come off as almost playful with their ball-throwing antics. Courtesy of the folks at the Swiss Federal Institute of Technology in Zurich’s Institute for Dynamic Systems and Control, we’re treated to a video of three quadcopters playing catch in the research facility’s Flying Machine Arena. They explain the processes demonstrated in the video: This video shows three quadrocopters cooperatively tossing and catching a ball with the aid of an elastic net. To toss the ball, the quadrocopters accelerate rapidly outward to stretch the net tight between them and launch the ball up. Notice in the video that the quadrocopters are then pulled forcefully inward by the tension in the elastic net, and must rapidly stabilize in order to avoid a collision. Once recovered, the quadrotors cooperatively position the net below the ball in order to catch it. Because they are coupled to each other by the net, the quadrocopters experience complex forces that push the vehicles to the limits of their dynamic capabilities. To exploit the full potential of the vehicles under these circumstances requires several novel algorithms…

    Read the article

  • Problems to boot, Ubuntu entry does not work anymore

    - by user104108
    A few months ago I decided to install Ubuntu 12.04 on my PC alongside my Windows 7 partition. In order to do that and avoid any mistakes, I followed these steps: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/2/ Everything was going well until I decided to update to the 12.10 release. I don't know what happened, but after I updated my Ubuntu it stopped working; it didn't even launch. When I turned on my PC and chose to run "Ubuntu 12.04" on the GRUB screen, a weird message appeared. So I decided to install Ubuntu 12.10 and forget about the 12.04 partition, no problem. I erased the partitions used for Ubuntu 12.04 with the EaseUS partition manager. However, when I start my PC, there is still the option "Ubuntu 12.04" to choose - is that bad? And what about now: can I use the Windows installer for Ubuntu ( http://www.ubuntu.com/download/help/install-ubuntu-with-windows ) to install Ubuntu 12.10? What should I do to have Ubuntu 12.10 and Windows 7 in dual boot again? Thanks; Thales.

    Read the article

  • Uralelektrostroy Improves Turnaround Times for Engineering and Construction Projects by Approximately 50% with Better Project Data Management

    - by Melissa Centurio Lopes
    LLC Uralelektrostroy was established in 1998, to meet the growing demand for reliable energy supply, which included the deployment and operation of a modern power grid system for Russia’s booming economy and industrial sector. To rise to the challenge, the country required a company with a strong reputation and the ability to strategically operate energy production and distribution facilities. As a renowned energy expert, Uralelektrostroy successfully embarked on the mission—focusing on the design, construction, and operation of power grids, transmission lines, and generation facilities. Today, Uralelektrostroy leads the Russian utilities industry with operations across the country, particularly in the Ural, Western Siberia, and Moscow regions.
    Challenges:
    - Track work progress through all engineering project development stages with ease—from planning and start-up operations, to onsite construction and quality assurance—to enhance visibility into complex projects, such as power grid and power-transmission-line construction
    - Implement and execute engineering projects faster—for example, designing and building power generation and distribution facilities—by better monitoring numerous local subcontractors
    - Improve alignment of project schedules with project owners’ requirements—awarding federal and regional authorities—to avoid incurring fines for missing deadlines
    Solutions:
    - Used Oracle’s Primavera P6 Enterprise Project Portfolio Management 8.1 to streamline communication with customers and subcontractors through better data management and harmonized reporting, reducing construction project implementation and turnaround times by approximately 50%, on average
    - Enabled fast generation of work-in-progress reports that track project schedules, budgets, materials, and staffing—from approval and material procurement, to construction and delivery
    - Reduced the number of construction sites by nearly 30% (from 35 to 25) by identifying unprofitable sites—streamlining operations at the company’s construction site network and increasing profitability
    - Improved project visibility by enabling managers to efficiently track project status, ensuring on-time reporting and punctual project deliveries to federal customers to reduce delay penalties to zero
    “Oracle’s Primavera P6 Enterprise Project Portfolio Management 8.1 drastically changed the way we run our business. We’ve reduced the number of redundant assets, streamlined project implementation and execution, and improved collaboration with our customers and contractors. Overall, the Oracle deployment helped to increase our profitability.” – Roman Aleksandrovich Naumenko, Head of Information Technology, LLC Uralelektrostroy
    Read the complete customer snapshot here.

    Read the article

  • How do I interpolate air drag with a variable time step?

    - by Valentin Krummenacher
    So I have a little game which works in small steps; however, those steps vary in time, so for example I sometimes have 10 steps/second and then I have 20 steps/second. This changes automatically depending on how many steps the user's computer can take. To avoid inaccurate positioning of the game's player object I use y = v0*dt + g*dt^2/2 to determine my object's y-position, where dt is the time since the last step, v0 is the velocity of my object at the beginning of the step and g is the gravity. To calculate the velocity at the end of a step I use v = v0 + g*dt, which also gives me correct results, independent of whether I use 2 steps with a dt of, for example, 20ms or one step with a dt of 40ms. Now I would like to introduce air drag. For simplicity's sake I use a = k*v^2, where a is the air drag's acceleration (I am aware that it would usually result in a force, but since I assume 1kg for my object's mass the force is the same as the resulting acceleration), k is a constant (in this case I'm using 0.001) and v is the speed. Now, in an infinitely small time interval, a is k multiplied by the velocity in that small interval squared. The problem is that v in the next time interval would depend on the drag of the last, which again depends on the v of the last interval, and so on... In other words: if I use a = k*v^2 I get different results for my position/velocity when I use 2 steps of 20ms than when I use one step of 40ms. I used to have this problem for my position too, but adding +g*dt^2/2 to the formula for my position fixed it, since it takes into account that the position depends on the velocity, which changes slightly in every infinitely small time interval. Does something like that exist for air drag too? And no, I don't mean anything like "Adding air drag to a golf ball trajectory equation" or similar, for that kind of method only gives correct results when all my steps are the same. (I hope you can understand my intermediate English; it's not my main language, so I apologize for any silly mistakes I might have made in my question.)
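    For this particular model there is such a formula: integrate the drag ODE exactly over each step instead of sampling the acceleration once at the start of the step. A sketch for the simple falling case, assuming downward is positive, the drag always opposes the motion, and the object starts below terminal speed (0 <= v0 < v_inf):

        dv/dt = g - k*v^2,   v_inf = sqrt(g/k)          (terminal speed)
        c     = artanh(v0 / v_inf)                      (constant fixed by the step's starting velocity)
        v(dt) = v_inf * tanh( g*dt / v_inf + c )
        y(dt) = y0 + (v_inf^2 / g) * ( ln(cosh( g*dt / v_inf + c )) - ln(cosh(c)) )

    Because these are exact solutions of the same equation, one 40ms step and two 20ms steps land on exactly the same v and y, which is the step-size independence that y = v0*dt + g*dt^2/2 already gives you in the drag-free case. Rising motion (drag and gravity pointing the same way) needs the analogous tan/atan form instead of tanh/artanh.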

    Read the article

  • GRUB2 stuck at rescue console, showing "unknown filesystem" for all partitions

    - by AndiDog
    I installed Ubuntu 12.04 on my external USB drive, where I have a 700GB NTFS partition followed by the new 6GB ext4 partition and a swap partition (all primary). The GRUB MBR is also installed to the external hard disk. Since my BIOS puts the external drive first when booting, I removed my internal hard disk before installation in order to avoid ordering problems. Now, when I boot from the external drive, GRUB is stuck at the rescue console with the error "unknown filesystem":

        grub rescue> ls
        (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1)
        grub rescue> ls (hd0,<any of them>)/

    ls (hd0,...)/ gives me "unknown filesystem" for every partition, and thus so does "insmod normal". GRUB doesn't seem to be able to read my Linux partition, as you can see above. How can I solve this? Additional info: bootinfoscript says (this is with the internal drive back in, but that does not make a difference):

        Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the
        same hard drive for core.img. core.img is at this location and looks for
        (,msdos2)/boot/grub on this drive.

        sdb1: File system: ntfs
              Boot sector type: Windows Vista/7: NTFS
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files:

        sdb2: File system: ext4
              Boot sector type: -
              Boot sector info:
              Operating System: Ubuntu 12.04 LTS
              Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

        sdb3: File system: swap
              Boot sector type: -
              Boot sector info:

    Read the article

  • MVVM Light V4 preview (BL0014) release notes

    - by Laurent Bugnion
    I just pushed to Codeplex an update to the MVVM Light source code. This is an early preview containing some of the features that I want to release later under version 4. If you find these features useful for your project, please download the source code and build the assemblies. I will greatly appreciate any issue report. This version is labeled “V4.0.0.0/BL0014”. The “BL” string is an old habit that we used in my days at Siemens Building Technologies, called a “base level”. Somehow I like this way of incrementing the “base level” independently of any other consideration (such as alpha, beta, CTP, RTM etc) and continue to use it to tag my software versions. In Microsoft parlance, you could say that this is an early CTP of MVVM Light V4.
    Caveat: The code is unit tested, but as we all know this does not mean that there are no bugs. This code has not yet been used in production. Again, your help in testing this is greatly appreciated, so please report all bugs to me!
    What’s new? The following features have been implemented:
    Misc: Various “maintenance work”. All WPF assemblies (that is .NET35 and .NET4) now allow partially trusted callers. It means that you can use them in an XBAP in partial trust mode.
    Testing: Various test updates. Added Windows Phone 7 unit tests. Note: for Windows Phone 7, due to an issue in the unit test framework, not all tests can be executed. I had to isolate those tests for the moment. The error was reported to Microsoft.
    ViewModelBase: The constructor is now public to allow serialization (especially useful on the phone to tombstone the state). ViewModelBase.MessengerInstance now returns Messenger.Default unless it is set explicitly. Previously, MessengerInstance was returning null, which was complicating the code. Two new ways to raise the PropertyChanged event have been added. See below for details.
    Messenger: Updated the IMessenger interface with all public members from the Messenger class. Previously some members were missing. A new Unregister method is now available, allowing a recipient to be unregistered for a given token.
    RelayCommand: RaiseCanExecuteChanged now acts the same in Windows Presentation Foundation as in Silverlight. In previous versions, I was relying on the CommandManager to raise the CanExecuteChanged event in WPF. However, it was found to be too unreliable, and a more direct way of raising the event was found preferable. See below for details.
    Raising the PropertyChanged event: A much requested update is now included: the ability to raise the PropertyChanged event in a viewmodel without using “magic strings”. Personally, I don’t see strings as a major issue, thanks to two features of the MVVM Light Toolkit. First, in the DEBUG configuration, every time the RaisePropertyChanged method is called, the name of the property is checked against all existing properties of the viewmodel. Should the property name be misspelled (because of a typo or refactoring), an exception is thrown, notifying the developer that something is wrong. To avoid impacting performance, this check is only made in the DEBUG configuration, but that should be enough to warn developers in case they miss a rename. Second, the property name is defined as a public constant in the “mvvminpc” code snippet. This allows checking the property name from another class (for example if the PropertyChanged event is handled in the view). It also allows changing the property name in one place only.
However, these two safeguards didn’t satisfy some of the users, who requested another way to raise the PropertyChanged event. In V4, you can now do the following:

Using lambdas:

    private int _myProperty;
    public int MyProperty
    {
        get { return _myProperty; }
        set
        {
            if (_myProperty == value) { return; }
            _myProperty = value;
            RaisePropertyChanged(() => MyProperty);
        }
    }

This raises the PropertyChanged event using a lambda expression instead of the property name. Light reflection is used to get the name. This supports IntelliSense and can easily be refactored. You can also broadcast a PropertyChangedMessage using the Messenger.Default instance with:

    private int _myProperty;
    public int MyProperty
    {
        get { return _myProperty; }
        set
        {
            if (_myProperty == value) { return; }
            var oldValue = _myProperty;
            _myProperty = value;
            RaisePropertyChanged(() => MyProperty, oldValue, value, true);
        }
    }

Using no arguments: When the RaisePropertyChanged method is called within a setter, you can also omit the property name altogether. This will fail if executed outside of the setter, however. Also, to avoid confusion, there is no way to broadcast the PropertyChangedMessage using this syntax.

    private int _myProperty;
    public int MyProperty
    {
        get { return _myProperty; }
        set
        {
            if (_myProperty == value) { return; }
            _myProperty = value;
            RaisePropertyChanged();
        }
    }

The old way: Of course the “old” way is still supported, without broadcast:

    public const string MyPropertyName = "MyProperty";
    private int _myProperty;
    public int MyProperty
    {
        get { return _myProperty; }
        set
        {
            if (_myProperty == value) { return; }
            _myProperty = value;
            RaisePropertyChanged(MyPropertyName);
        }
    }

And with broadcast:

    public const string MyPropertyName = "MyProperty";
    private int _myProperty;
    public int MyProperty
    {
        get { return _myProperty; }
        set
        {
            if (_myProperty == value) { return; }
            var oldValue = _myProperty;
            _myProperty = value;
            RaisePropertyChanged(MyPropertyName, oldValue, value, true);
        }
    }

Performance considerations: It is notorious that using reflection takes more time than using a string constant to get the property name. However, after measuring for all platforms, I found the differences to be very small. I will measure more and submit the results to the community for evaluation, because some of the results are actually surprising (for example, using the Messenger to broadcast a PropertyChangedMessage does not significantly increase the time taken to raise the PropertyChanged event and update the bindings). For now, I submit this code to you, and would be delighted to hear about your own results.

Raising the CanExecuteChanged event manually: In WPF, until now, the CanExecuteChanged event for a RelayCommand was raised automatically. Or rather, an attempt was made to raise it, using a feature that is only available in WPF called the CommandManager. This class monitors the UI and, when something occurs, it queries the state of the CanExecute delegate for all the commands. However, this proved unreliable for the purpose of MVVM: since very often the value of the CanExecute delegate changes according to non-UI events (for example something changing in the viewmodel or in the model), raising the CanExecuteChanged event manually is necessary. In Silverlight, the CommandManager does not exist, so we had to raise the event manually from the start. This proved more reliable, and I have now changed the WPF implementation of the RaiseCanExecuteChanged method to be exactly the same in WPF as in Silverlight.
For instance, if a command must be enabled when a string property is set to a value other than null or an empty string, you can do:

    public MainViewModel()
    {
        MyTestCommand = new RelayCommand(
            () => DoSomething(),
            () => !string.IsNullOrEmpty(MyProperty));
    }

    public const string MyPropertyName = "MyProperty";
    private string _myProperty = string.Empty;
    public string MyProperty
    {
        get { return _myProperty; }
        set
        {
            if (_myProperty == value) { return; }
            _myProperty = value;
            RaisePropertyChanged(MyPropertyName);
            MyTestCommand.RaiseCanExecuteChanged();
        }
    }

Logo update: I made a minor change to the logo. Some people found the lack of the word “light” (as in MVVM Light Toolkit) confusing. I thought it was cool, because the feather suggests the idea of lightness, but I can see the point. So I added the word “light” to the logo. Things should be quite clear now.

What’s next? This is only the first of a series of releases that will bring MVVM Light to V4. In the next weeks, I will continue to add some very requested features and correct some issues in the code. I will probably continue this fashion of releasing the changes to the public as source code through Codeplex. I would be very interested to hear what you think of that, and to get feedback about the changes.

Cheers, Laurent
Laurent Bugnion (GalaSoft)

    Read the article

  • Finite Numbers and ExplorerCanvas

    - by PhubarBaz
    I was working on my online mathematical graphing application, CloudGraph, trying to make it work in IE. The app uses the HTML5 canvas element to draw graphs. Since IE doesn't support canvas yet, I use ExplorerCanvas to provide that support for IE. However, it seems that when using excanvas, if you try to moveTo or lineTo a point that is not finite, it loses its mind and stops drawing anything else after that. I had no such problems in Firefox or Chrome, so it took me a while to figure out what was going on. Next I discovered that I needed a way to check whether a variable was NaN or Infinity or any other non-finite value, so I could avoid calling moveTo() in that case. I started writing a long if statement, then I thought there had to be a better way. Sure enough there was: there just happens to be an isFinite() function built into JavaScript for exactly this purpose. Who knew! It works great. Another difference I discovered with excanvas is that you must specify a starting point using a moveTo() when beginning a drawing path. Again, Chrome and Firefox are a lot more forgiving in this area, so it took me a while to figure out why my lines weren't drawing. But all is happy now, and I'm a little wiser to the ways of the canvas.
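    A hedged sketch of the resulting guard (the ctx, x, y and needMoveTo names are illustrative, not from the original post):

        // Skip points that would poison excanvas' path state in IE.
        if (isFinite(x) && isFinite(y)) {
            if (needMoveTo) {
                ctx.moveTo(x, y);   // excanvas requires an explicit starting point
                needMoveTo = false;
            } else {
                ctx.lineTo(x, y);
            }
        } else {
            needMoveTo = true;      // break the path and restart at the next finite point
        }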

    Read the article

  • Call DB Stored Procedure using @NamedStoredProcedureQuery Injection

    - by anwilson
    An Oracle Database stored procedure can be called from the EJB business layer to perform complex DB-specific operations. This approach avoids the overhead of frequent network round trips, which could impact the end-user experience. A DB stored procedure can be invoked from EJB Session Bean business logic using the org.eclipse.persistence.queries.StoredProcedureCall API, but that approach requires more code to handle the session and the arguments of the stored procedure, thereby increasing the maintenance effort. EJB 3.0 introduces the @NamedStoredProcedureQuery annotation to call a database stored procedure as a named query. This blog will take you through the steps to call an Oracle Database stored procedure using @NamedStoredProcedureQuery. The EMP_SAL_INCREMENT procedure available in the HR schema is used in this sample.
    Create an Entity from the EMPLOYEES table. Add @NamedStoredProcedureQuery above @NamedQueries in Employees.java with the definition given below:

        @NamedStoredProcedureQuery(
            name = "Employees.increaseEmpSal",
            procedureName = "EMP_SAL_INCREMENT",
            resultClass = void.class,
            resultSetMapping = "",
            returnsResultSet = false,
            parameters = {
                @StoredProcedureParameter(name = "EMP_ID", queryParameter = "EMPID"),
                @StoredProcedureParameter(name = "SAL_INCR", queryParameter = "SALINCR")
            }
        )

    Observe how the stored procedure's arguments are handled easily in @NamedStoredProcedureQuery using @StoredProcedureParameter.
    Expose the Entity Bean by creating a Session Facade. A business method needs to be added to the Session Bean to access the stored procedure exposed as a named query:

        public void salaryRaise(Long empId, Long salIncrease) throws Exception {
            try {
                Query query = em.createNamedQuery("Employees.increaseEmpSal");
                query.setParameter("EMPID", empId);
                query.setParameter("SALINCR", salIncrease);
                query.executeUpdate();
            } catch (Exception ex) {
                throw ex;
            }
        }

    Expose the business method through the Session Bean remote interface:

        void salaryRaise(Long empId, Long salIncrease) throws Exception;

    A Session Bean client is required to invoke the method exposed through the remote interface. Call the exposed method in the client's main method:

        final Context context = getInitialContext();
        SessionEJB sessionEJB = (SessionEJB) context.lookup("Your-JNDI-lookup");
        sessionEJB.salaryRaise(new Long(200), new Long(1000));

    Deploy the Session Bean and run the client. The salary of the employee with ID 200 will be increased by 1000.

    Read the article

  • Are scheduled job servers the right choice for a time sensitive game engine?

    - by maple_shaft
    I am currently architecting and designing an exciting new web application that will be entering some areas in which I have very little experience: game development. The application is not necessarily a game, but there are some very time-sensitive tasks and scheduled jobs that a server will need to run to perform game-related activities (e.g. a new match-up starts at noon every day for a 12-day tournament, scoreboards are updated at 5pm every day, etc.). In the past I have typically used cron jobs with the Quartz Scheduler running within a web application server, but I know that this isn't likely to be a scalable solution for the truly massive user base that management is telling me to expect (granted, they are management and are probably highly optimistic about this), and also given how important the role of these tasks is in this web application. The other important thing I want to consider is avoiding a SPOF (Single Point Of Failure): if the primary job server goes down, another job server should be able to run the job in its place. I suppose this can be done with appropriate record locking and database transactions. My question is whether cron-style scheduled jobs running on a web application server are a wise design choice given the time-sensitive game tasks of this application, or is there something more appropriate for running a scalable game engine in parallel with the web application servers?
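    For what it's worth, Quartz itself can address the SPOF concern when it is backed by a JDBC job store in clustered mode: every node runs the same scheduler configuration, the job store arbitrates through row locks, and if the node that would have fired a job dies, another node picks the trigger up. A hedged sketch of the relevant quartz.properties (the instance name, data source name and check-in interval are illustrative):

        org.quartz.scheduler.instanceName = GameScheduler
        org.quartz.scheduler.instanceId = AUTO

        # Persistent, shared job store so any node can fire a misfired or orphaned trigger
        org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
        org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
        org.quartz.jobStore.dataSource = gameDS

        # Clustering: nodes heartbeat into the database and take over for dead peers
        org.quartz.jobStore.isClustered = true
        org.quartz.jobStore.clusterCheckinInterval = 20000

    Whether this scales to the expected load is a separate question, but it removes the single scheduler node as a hard dependency.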

    Read the article

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup. First, the main domain/root (xyz.com) currently contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at browser-language-based redirection, but how would I know whether to direct a user to the MX or the DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? Second, the 3 websites are practically identical, except that they have 3 unique color schemes and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate/spam. Is there a way to avoid this?
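    On the duplicate-content half of the question, one widely documented approach is to mark the sub-directories as regional variants of each other with rel="alternate" hreflang annotations in the <head> of every page (the URLs below simply reuse the ones from the question); a sketch:

        <link rel="alternate" hreflang="en-us" href="http://www.xyz.com/en-US/" />
        <link rel="alternate" hreflang="es-mx" href="http://www.xyz.com/es-MX/" />
        <link rel="alternate" hreflang="es-do" href="http://www.xyz.com/es-DO/" />
        <link rel="alternate" hreflang="x-default" href="http://www.xyz.com/" />

    This tells the crawler that the near-identical Spanish pages are intentional regional versions rather than spam, and it pairs naturally with a geolocation-based redirect on the root, since the x-default entry marks the page that does the redirecting.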

    Read the article

  • Booting off a ZFS root in 14.04

    - by RJVB
    I've been running a Debian derivative (LMDE) on a ZFS root for half a year now. It was created by cloning a regular ext4-based install, with all the necessary packages, onto a ZFS pool, chrooting into that pool and recreating a GRUB menu and boot loader. The system uses a dedicated ext3 /boot partition. I would like to do the same with Ubuntu 14.04, but have encountered several obstacles. There is no Trusty zfs-grub package, and the default grub package doesn't have ZFS support built in. I found a small bug in the build system responsible for that (report with patch created) and built my own grub packages. The built-in ZFS support is dysfunctional, though: it does not add the proper arguments to the kernel command line. I thus installed the ZoL grub package I also use on my LMDE system, which does give me a correct grub.cfg. However, even with that correct grub.cfg, the boot process apparently doesn't retrieve the bootfs parameter from the ZFS pool; instead, the variable that's supposed to receive the value remains empty. As a result, initrd tries to load the default pool ("rpool"), which fails of course. I can however import the pool by hand and complete the process manually. If memory serves me well, I also had to disable AppArmor to keep the boot process from blocking after importing the pool. Am I overlooking something? Just for comparison, I installed the Ubuntu 3.13 kernel on my LMDE system, and that works just fine (i.e. the identical kernel and grub binaries allow successful booting without glitches on LMDE but not on Ubuntu).

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked with writing an application in VB.NET that will combine document and inventory management. It will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing and printing, with possible OCR so they are searchable as well, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be stored). My concern is finding a database solution that will not become unstable due to size restrictions, record limitations, or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now, I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it is able to withstand heavy, large-volume usage as well. Another consideration is the interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.
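    For reference, a sketch of how the table described above might look if MS SQL were the pick (the identity column and sizes are illustrative, and the elided columns from the question are deliberately not filled in):

        CREATE TABLE Documents (
            DocId      INT IDENTITY(1,1) PRIMARY KEY,
            DocName    NVARCHAR(255)  NOT NULL,
            DocDate    DATETIME       NOT NULL,
            Notes      NVARCHAR(MAX)  NULL,
            DocBinary  VARBINARY(MAX) NOT NULL   -- the document image itself
        );

    The same shape maps to a BLOB column in SQLite or a LONGBLOB in MySQL if the single-file or cross-platform goals win out.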

    Read the article
