Search Results

Search found 1915 results on 77 pages for 'identical'.

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question: Will I get penalised for "sneaky Javascript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)? Goal: I want to implement Hijax to enable AJAX content to be accessible to non-JavaScript users and search engine crawlers. Background: I'm working on a static file server (GitHub Pages). No server side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY. I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html. The Hijax setup: the index.html page contains all OUTER html/css/js... the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html. index.html has a navigation menu, with a Hijax link to an "about" page. With JavaScript disabled (e.g. crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");. AJAX content: about.html does not contain anything extra to try and cloak content for crawlers. The about.html file contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data... e.g. <html><head><title>About</title>...</head><body></body></html>. about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc). The about.html <body> contains a <div id="inner-container"> which holds the content that is injected into index.html. about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page "inner content" - with a link to navigate to the index.html page to get the full page layout with menu. The (Sneaky?) Redirect: Google indexes the /about.html page. However when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deeplinking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical. Known issue with inbound links, e.g. Share / Bookmarking: It seems that if a user shares the URL /#about on their blog, when allocating inbound links to my site Google ignores everything after the # ... it allocates value to the / page - see: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate urls, i.e. /about.html. Duplicate: Sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site... Not sure if I should delete the Stack Overflow question? Or just leave it on both sites? Please leave a comment.
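    For concreteness, a minimal sketch of the redirect described above, as it might sit in about.html. The target hash and the 10-second delay come from the description; everything else is an assumption, not the poster's actual code:

        // Hypothetical redirect snippet for about.html. Crawlers and non-JS
        // visitors never execute this; JS-enabled visitors get moved to the
        // enhanced deep link after 10 seconds (or immediately on click).
        var redirect = function () { window.location.replace("/#about"); };
        document.addEventListener("click", redirect);
        setTimeout(redirect, 10000);

    Using replace() rather than assigning location.href keeps the standalone /about.html page out of the browser history, so the back button still behaves sensibly after the hop.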

    Read the article

  • Best triple head display setup

    - by dgel
    I'm currently running Ubuntu 12.04 with a darn good triple head display setup. I've got a VisionTek 900530 Radeon HD 5450 512MB DDR3 PCI Express video card that has two DVI outputs and one Mini DisplayPort that I have connected to an HDMI adapter. I'm running three identical Asus 1920x1080 monitors that each have a DVI, VGA, and HDMI input. I'm using the xorg-edgers ppa, so I'm using the open source radeon driver version 6.99.99. I tried using the ATI binary fglrx driver, but I wasn't able to get the three monitors working properly- the monitor connected via HDMI / DisplayPort wouldn't run at full resolution. The setup is almost perfect: Compiz runs fine and is quite snappy. I'm not able to use that great compiz feature where you can drag a window to the side of a display and it will half maximize. I occasionally experience display corruption weirdness with Unity and need to restart it. When I use a dropdown menu in LibreOffice it often pops the menu down in another window. For example, if I'm using the center monitor and click the Insert menu, the menu pulls down on the monitor to my right, forcing me to chase it. If I chase down the menu and choose Manual Break, the dialog appears over on my left monitor. This absurdity is mildly entertaining but has lost its novelty. I've decided to build a new system and have spared no expense: latest i7 processor, SSD, etc. I really like the performance of the Nvidia binary drivers, so I put two ZOTAC ZT-40707-10L GeForce GT 440 cards in the system, figuring I'd have four DVI outputs and an awesome triple (or even eventually quad) head setup. Unfortunately it appears that I didn't do sufficient research before my purchase. It seems that Nvidia TwinView only supports two monitors on one card (I guess that's why they call it TwinView...). I messed around with running two X servers, but I really don't want that- being able to drag windows to any monitor is critical. It doesn't sound like Xinerama is an option because from what I understand it simply doesn't support Compiz. I've seen a BaseMosaic option that can be used with the Nvidia drivers that appears to support an almost unlimited number of heads- unfortunately my cheap little cards don't support it. I'm also not sure whether you'll still have all the nice maximizing and snapping that TwinView provides, or whether Ubuntu will only see it as one massive display. I put my old trusty ATI card into my new system and installed 12.10. I'm using the opensource radeon drivers again because even in 12.10 I can't get the fglrx binary drivers to do triple head. Unfortunately, even with an unbelievably powerful system the experience is extremely sluggish (much more so than my experience in 12.04). The menu scattering problem appears to be fixed, but I get a lot of nasty Unity display corruption. So finally, my question is this: What hardware / drivers should I use? I'm willing to buy (almost) any video card(s). I have two PCI-Express 3.0 slots on my motherboard (which has an integrated Intel HD card). I'm willing to use ATI or Nvidia cards and willing to run Ubuntu 12.04.1 or 12.10. I'm not a gamer, but do want beautiful and snappy Compiz effects. Does anyone out there have the perfect triple head setup in 12.04 or 12.10? What hardware / drivers are you using? I have those two Nvidia cards but will probably be returning them unless someone knows a way to use them together for a triple head setup.
Since I'm having pretty good luck with a single ATI card providing three displays, should I just buy a beefier one with the hopes that it will fix the horrible sluggishness I'm experiencing in 12.10?

    Read the article

  • Don't Throw Duplicate Exceptions

    In your code, you'll sometimes have to write code that validates input using a variety of checks. Assuming you haven't embraced AOP and done everything with attributes, it's likely that your defensive coding is going to look something like this: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new InvalidArgumentException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } Do you see a problem here? Here's the deal: exceptions should be meaningful. They have value at a number of levels: in the code, throwing an exception lets the developer know that there is an unsupported condition here; in calling code, different types of exceptions may be handled differently; at runtime, logging of exceptions provides a valuable diagnostic tool. It's this last reason I want to focus on. If you find yourself literally throwing the exact same exception in more than one location within a given method, stop. The stack trace for such an exception is likely going to be identical regardless of which path of execution led to the exception being thrown. When that happens, you or whoever is debugging the problem will have to guess which exception was thrown. Guessing is a great way to introduce additional problems and/or greatly increase the amount of time required to properly diagnose and correct any bugs related to this behavior. Don't Guess; Be Specific. When throwing an exception from multiple code paths within the code, be specific. Virtually every exception allows a custom message; use it, and ensure each case is unique. If the exception might be handled differently by the caller, then consider implementing a new custom exception type. Also, don't automatically think that you can improve the code by collapsing the if-then logic into a single check with short-circuiting (e.g. if(x == null || !x.IsValid())); that guarantees that you can't put different information into the message the way you can by constructing the exception separately in each case. The code above might be refactored like so:   public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new ArgumentNullException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } In this case it's taking advantage of the fact that there is already an ArgumentNullException in the framework, but if you didn't have an IsValid() method and were doing validation on your own, it might look like this: public void Foo(SomeClass someArgument) { if(someArgument.Quantity < 0) { throw new InvalidArgumentException("someArgument", "Quantity cannot be less than 0. Quantity: " + someArgument.Quantity); } if(someArgument.Quantity > 100) { throw new InvalidArgumentException("someArgument", "SomeArgument.Quantity cannot exceed 100. Quantity: " + someArgument.Quantity); }   // Do Real Work }   Note that in this last example, I'm throwing the same exception type in each case, but with different Message values. I'm also making sure to include the value that resulted in the exception, as this can be extremely useful for debugging. (How many times have you wished NullReferenceException would tell you the name of the variable it was trying to reference?) Don't add work to those who will follow after you to maintain your application (especially since it's likely to be you).
    Be specific with your exception messages and follow DRY when throwing exceptions within a given method, by throwing unique exceptions for each interesting case of invalid state.
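    As a rough illustration of the "custom exception type" suggestion above (this class is invented for the example; it is not from the original article):

        using System;

        // A minimal custom exception that carries the offending value, so a
        // log entry shows exactly which argument failed and why.
        public class QuantityOutOfRangeException : ArgumentException
        {
            public int Quantity { get; private set; }

            public QuantityOutOfRangeException(string paramName, int quantity)
                : base("Quantity must be between 0 and 100. Quantity: " + quantity, paramName)
            {
                Quantity = quantity;
            }
        }

    A caller can then catch QuantityOutOfRangeException specifically, which is exactly the "handled differently by the caller" case the article mentions.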

    Read the article

  • With a little effort you can “SEMI”-protect your C# assemblies with obfuscation.

    - by mbcrump
    This method will not protect your assemblies from an experienced hacker. Every day we see new keygens, cracks, and serials being released that contain ways around copy protection from small companies. This is a simple process that will make a lot of hackers quit because so many others use nothing. If you were a thief, would you pick the house that has security signs and an alarm, or one that has nothing? So, to begin: Obfuscation is the concealment of meaning in communication, making it confusing and harder to interpret. Let's begin by looking at the cartoon below:     You are probably familiar with the term and probably ignored this like most programmers ignore user security. Today, I’m going to show you reflection and a way to obfuscate it. Please understand that I am aware of ways around this, but I believe some security is better than no security.  In the sample program below, the code appears exactly as it does in Visual Studio. When the program runs, you get either a true or false in a console window. Sample Program: using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad")); //Returns a True or False depending if you have notepad running.             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any(clsProcess => clsProcess.ProcessName.Contains(name));         }     } }   Pretend that this is a commercial application. The hacker will only have the executable and maybe a few config files, etc. After reviewing the executable, he can determine if it was produced in .NET by examining the file in ILDASM or Redgate’s Reflector. We are going to examine the file using RedGate’s Reflector. Upon launch, we simply drag/drop the exe over to the application. We have the following for the Main method:   and for the IsProcessOpen method:     Without any other knowledge as to how this works, the hacker could export the exe and get a VS project build, or copy this code in, and our application would run. Using Reflector output: using System; using System.Diagnostics; using System.Linq;   namespace ObfuscateMe {     class Program     {                static void Main(string[] args)         {               Console.WriteLine(IsProcessOpen("notepad"));             Console.ReadLine();         }             public static bool IsProcessOpen(string name)         {             return Process.GetProcesses().Any<Process>(delegate(Process clsProcess)             {                 return clsProcess.ProcessName.Contains(name);             });         }       } } The code is not identical, but returns the same value. At this point, with a little bit of effort you could prevent the hacker from reverse engineering your code so quickly by using Eazfuscator.NET. Eazfuscator.NET is just one of many programs built for this. Visual Studio ships with a community version of Dotfuscator. So download and load Eazfuscator.NET and drag/drop your executable/project into the window. It will work for a few minutes, depending on whether you have a quad-core or not. After it finishes, open the executable in RedGate Reflector and you will get the following: Main after obfuscation; the IsProcessOpen method after obfuscation. As you can see with the jumbled characters, it is not as easy as the first example. 
I am aware of methods around this, but it takes more effort and unless the hacker is up for the challenge, they will just pick another program. This is also helpful if you are a consultant and make clients pay a yearly license fee. This would prevent the average software developer from jumping into your security routine after you have left. I hope this article helped someone. If you have any feedback, please leave it in the comments below.

    Read the article

  • A .NET Developer's day with the iPad.

    - by mbcrump
    The Apple iPad is currently getting a lot of buzz because of the app store, the book store and of course iTunes. I had the chance to play with one and this is what I have learned about the device. Let’s get this out of the way first: the iPad is awesome. It is the device for media consumption and casual web browsing. But how does it measure up for those of us with .NET on our brains all day? Let’s find out… Main Screen – you can customize everything on this page. I guess I should replace that image with a C# or VS logo. It's pretty standard stuff if you have an iPhone.   Programming Books: If you have a subscription to Safari Books Online, then you are in luck; it's very easy to read the books on the iPad. Just fire up the Safari web browser and go to Safari Books Online. The biggest benefit that I can see with the iPad is the ability to read books wherever, and not have to worry about purchasing books that I already have the .PDF for. Below is a sample from Code Complete 2nd Edition. Below is a PDF of the ECMA-334 C# Language Specification. As you can see it's very readable and you should have no problem reading actual code.   Example of Code shown below: It is, however, easier to read PDFs and store them with a 3rd-party PDF reader. I have seen several for 99 cents or less. You can, however, switch the screen to vertical to get more viewing space as shown below: I was disappointed with the iBooks application. I could not find a single .NET programming book anywhere. I was able to download the excellent sci-fi book “A Memory of Wind” for free though. If I just overlooked them, then please email me with the names and titles. I couldn’t even find a technology category in the categories list. Web Surfing – Technical Sites: Below is an example of my site in Safari. The code is very readable and the experience was identical to viewing it in Firefox. I tried multiple programming sites and the pages looked great, except those that used Flash, which of course did not display.   News Apps – Technical Content: The standard NY Times and USA Today looked great, but the technical content was lacking. It would probably be better to use Google Reader for online technical news.     YouTube Videos – Technical Content: Since it's YouTube, we already know that a lot of technical content exists, and it plays great on the iPad. I watched several programming videos and could clearly see the code being written. Taking Technical Notes: The iPad comes with a great notepad for taking notes. I found that it was easy to take notes regarding projects that I am currently working on.   Calendar: The calendar that ships with the iPad is great for organizing. You can set up an Exchange server or manually enter the information. Pretty standard stuff.    Random applications that I like: TweetDeck and Adobe Ideas. Adobe Ideas is kinda like SketchFlow except you use your finger to mock up the sketches.  Don’t forget that the iPad is great for any type of podcasting. That pretty much sums it up. I would definitely recommend this device as it will only get better. I believe iOS 4 comes out on the 24th and the iPad will only get more and more apps. You could save a few bucks by waiting for the 2nd generation, but that’s a call that only you can make.

    Read the article

  • GameStateManagement and inputs not being recognized

    - by Dave Voyles
    EDIT: I've removed a bit of code from the input class to make this more readable, and updated my StartScreen class, which is now at the bottom. I have the same issues though, but they are explained in my comments on the bottom of this page. It won't let me paste my additional code here (the format comes out crazy), so I've linked to the code on pastebin. I've been trying to implement the MS provided GameStateManagement sample with my game, but it has proven a bit difficult. Really, I'm using Oneksoft's Starter Kit, which uses the MS provided sample, so they are identical, except for my splash screen. I'm able to get the splash screen to launch, where it informs the player to press A to advance the screen, but this doesn't seem to accept any of my inputs. I've also added Console.WriteLine("Pressing A") under the IsMenuSelect method in Input.cs to verify that it is getting called, but for some reason it is constantly spamming my log, rather than just appearing each time I press it. Not sure why this is happening. I have a bit too much code to post it all here, so I've attached a link to my .rar with my classes, but I'll also leave a bit here which I think may be applicable. https://www.dropbox.com/sh/6ek4uru2jc2ch0k/JTeBWN_3PQ What do you guys think the issue is? namespace Pong { public class Input { public const int MaxInputs = 4; public readonly KeyboardState[] CurrentKeyboardState; public readonly GamePadState[] CurrentGamePadState; public KeyboardState[] LastKeyboardState; public GamePadState[] LastGamePadState; public readonly bool[] GamePadWasConnected; public Input() { // Get input state CurrentKeyboardState = new KeyboardState[MaxInputs]; CurrentGamePadState = new GamePadState[MaxInputs]; // Preserving last states to check for isKeyUp events LastKeyboardState = CurrentKeyboardState; LastGamePadState = CurrentGamePadState; } /// <summary> /// Checks for a "menu select" input action. /// The controllingPlayer parameter specifies which player to read input for. /// If this is null, it will accept input from any player. When the action /// is detected, the output playerIndex reports which player pressed it. /// </summary> public bool IsMenuSelect(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex) { Console.WriteLine("Pressing A"); return IsNewKeyPress(Keys.Space, controllingPlayer, out playerIndex) || IsNewKeyPress(Keys.Enter, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.A, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.Start, controllingPlayer, out playerIndex); } /// <summary> /// Checks for a "menu cancel" input action. /// The controllingPlayer parameter specifies which player to read input for. /// If this is null, it will accept input from any player. When the action /// is detected, the output playerIndex reports which player pressed it. /// </summary> public bool IsMenuCancel(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex) { return IsNewKeyPress(Keys.Escape, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.B, controllingPlayer, out playerIndex) || IsNewButtonPress(Buttons.Back, controllingPlayer, out playerIndex); }
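    For reference, a hedged sketch of the IsNewKeyPress helper those IsMenu* methods rely on (the body below is assumed from the GameStateManagement sample, not copied from the poster's archive). The detail that matters: it only returns true on the frame a key transitions from up to down, which requires LastKeyboardState to hold a real copy of the previous frame's state. Assigning the array reference, as the constructor above does, makes last and current point at the same data forever, so no press is ever seen as "new":

        // Sketch of the sample's usual transition test (assumed, not verified).
        public bool IsNewKeyPress(Keys key, PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
        {
            if (controllingPlayer.HasValue)
            {
                playerIndex = controllingPlayer.Value;
                int i = (int)playerIndex;
                // True only if the key is down now and was up last frame.
                return CurrentKeyboardState[i].IsKeyDown(key) &&
                       LastKeyboardState[i].IsKeyUp(key);
            }
            // Accept input from any player.
            return IsNewKeyPress(key, PlayerIndex.One, out playerIndex) ||
                   IsNewKeyPress(key, PlayerIndex.Two, out playerIndex) ||
                   IsNewKeyPress(key, PlayerIndex.Three, out playerIndex) ||
                   IsNewKeyPress(key, PlayerIndex.Four, out playerIndex);
        }

    An Update method called once per frame would copy each element (LastKeyboardState[i] = CurrentKeyboardState[i]; KeyboardState is a value type, so element-wise assignment copies) before re-reading Keyboard.GetState(). The element-wise copy is also consistent with the constant "Pressing A" spam: IsMenuSelect is evidently polled every frame, so a WriteLine at its top logs every frame regardless of input.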

    Read the article

  • Property overwrite behaviour

    - by jeremyj
    I thought it worth sharing a note about property overwrite behaviour, because I found it confusing at first, in the hope of preventing some learning pain for the uninitiated with MSBuild :-) The confusion for me came because of the redundancy of using a Condition statement in a _project_ level property to test that a property has not been previously set. What I mean is that the following two statements are always identical in behaviour, regardless of whether the property has been supplied on the command line -  <PropertyGroup>    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>  </PropertyGroup> has the same behaviour regardless of command line override as -  <PropertyGroup>     <PropA>PropA set at project level</PropA>   </PropertyGroup>  i.e. the two above property declarations have the same result whether the property is overridden on the command line or not. To prove this, experiment with the following .proj file - <?xml version="1.0" encoding="utf-8"?><Project ToolsVersion="4.0" >  <PropertyGroup>    <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>  </PropertyGroup>  <Target Name="Target1">    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target2">    <PropertyGroup>      <PropA>PropA set in Target2</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target3">    <PropertyGroup>      <PropA Condition=" '$(PropA)' == '' ">PropA set in Target3</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target>  <Target Name="Target4">    <PropertyGroup>      <PropA Condition=" '$(PropA)' != '' ">PropA set in Target4</PropA>    </PropertyGroup>    <Message Text="PropA: $(PropA)"/>  </Target></Project> Try invoking it using both of the following invocations and observe the output - 1) >msbuild blog.proj /t:Target1;Target2;Target3;Target4 2) >msbuild blog.proj /t:Target1;Target2;Target3;Target4 "/p:PropA=PropA set on command line" Then try those two invocations with the following three variations of specifying PropA at the project level - 1)  <PropertyGroup>     <PropA Condition=" '$(PropA)' == '' ">PropA set at project level</PropA>   </PropertyGroup> 2)   <PropertyGroup>     <PropA>PropA set at project level</PropA>   </PropertyGroup> 3)  <PropertyGroup>     <PropA Condition=" '$(PropA)' != '' ">PropA set at project level</PropA>   </PropertyGroup>

    Read the article

  • No access to Samba shares

    - by koanhead
    I have three shared folders in my local home directory - that is to say, on my Ubuntu desktop's /home/me/. All were set up using "Sharing Options" in Nautilus' right-click menu. The standard "Music" and "Videos" folders are configured identically: the "Guest Access" box is checked, but the "Allow others to create and delete" is not. The third folder, called "shared", is configured to not allow Guest access but to allow others to modify files. I have not altered /etc/samba/smb.conf by hand; I have only used Sharing Options to create and modify these so-called "shares". My roommates have two Windows 7 computers and one Ubuntu Netbook Remix netbook. I have the aforementioned desktop machine and a laptop running 10.04. None of these machines can access any of the shares. Attempts to access the Guest shares result in the message \\machine\directory is not accessible. The network name could not be found. This is the error message generated by a VM running Windows 2000. The other Windows machines generate a similar error. The Ubuntu laptop gives the error Unable to mount location: Failed to mount Windows share. Hurrah, once again, for informative error messages. That really helps a lot. When attempting to browse the folder called "shared" from the laptop, I'm confronted with a password dialog. This behavior is the same with all machines I've tried in this situation. On entering my username and password for the account to which the shares belong, the password dialog briefly disappears and is replaced with an identical dialog. No error message, useful or not, appears. When attempting to browse this folder with the VM, the outcome is the same except that the password dialog helpfully states "incorrect username or password". My assumption is that the username and password in question are those of the user who owns the shares. I have tried all other username and password combinations available in this context and the outcome is the same. I would like to be able to share files. Sharing them with Windows machines is a nice feature, or would be if it was available. Really, I consider sharing files between two machines running the same version of the same operating system kind of a minimum condition for network usability. Samba last functioned reliably for me more than ten years ago. I have attempted to use it on and off since then with only intermittent success. Oh, and "Personal File Sharing" from the Preferences menu does not result in an entry in Places → Network → my-server. In fact, the old entry "MY-SERVER" goes away and is replaced by "koanhead's public files on my-server", which when I attempt to open it from the laptop gives a "DBus.Error.NoReply: Message did not receive a reply." I know I come here and gripe about Ubuntu a lot, but on the other hand I spend literally hours every day trying to fix things in Ubuntu. It's a good system which aspires to greatness, which is why things like this either need to work, or be adequately documented. Ideally both would be the case. Anyway, rant over. Hopefully someone will have some insight on this issue. Thanks, all who bother to read this wall o'text, for your time.
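    (A debugging hint, hedged since it assumes stock 10.04 behaviour: Nautilus' "Sharing Options" does not edit smb.conf at all; it creates Samba "usershares" on the fly, and two stock commands will show what was actually configured and whether the shares are visible locally:)

        net usershare info --long      # dump the usershare definitions Nautilus created
        smbclient -L localhost -U me   # list shares as the owning user; prompts for the Samba password

    One common cause of the endlessly reappearing password dialog is that Samba keeps its own password database, separate from the Unix login password, so the account password only works after something like sudo smbpasswd -a me has been run for that user.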

    Read the article

  • Getting 2D Platformer entity collision Response Correct (side-to-side + jumping/landing on heads)

    - by jbrennan
    I've been working on a 2D (tile-based) platformer for iOS and I've got basic entity collision detection working, but there's just something not right about it and I can't quite figure out how to solve it. As far as I can tell, there are 2 forms of collision between player entities: either the two players (human controlled) are hitting each other side-to-side (i.e. pushing against one another), or one player has jumped on the head of the other player (naturally, if I wanted to expand this to player vs enemy, the effects would be different, but the types of collisions would be identical, just the reaction should be a little different). In my code I believe I've got the side-to-side code working: if two entities press against one another, then they are both moved back on either side of the intersection rectangle so that they are just pushing on each other. I also have the "landed on the other player's head" part working. The real problem is, if the two players are currently pushing up against each other, and one player jumps, then at one point as they're jumping, the height-difference threshold that counts as a "land on head" is passed, and it then registers as a landing on the other player's head. As a life-long player of 2D Mario Bros style games, this feels incorrect to me, but I can't quite figure out how to solve it. My code: (it's really Objective-C but I've put it in pseudo C-style code just to be simpler for non ObjC readers) void checkCollisions() { // For each entity in the scene, compare it with all other entities (but not with one it's already compared against) for (int i = 0; i < _allGameObjects.count(); i++) { // GameObject is an Entity GEGameObject *firstGameObject = _allGameObjects.objectAtIndex(i); // Don't check against yourself or any previous entity for (int j = i+1; j < _allGameObjects.count(); j++) { GEGameObject *secondGameObject = _allGameObjects.objectAtIndex(j); // Get the collision bounds for both entities, then see if they intersect // CGRect is a C-struct with an origin Point (x, y) and a Size (w, h) CGRect firstRect = firstGameObject.collisionBounds(); CGRect secondRect = secondGameObject.collisionBounds(); // Collision of any sort if (CGRectIntersectsRect(firstRect, secondRect)) { //////////////////////////////// // // // Check for jumping first (???) // // //////////////////////////////// if (firstRect.origin.y > (secondRect.origin.y + (secondRect.size.height * 0.7))) { // the top entity could be pretty far down/in to the bottom entity.... firstGameObject.didLandOnEntity(secondGameObject); } else if (secondRect.origin.y > (firstRect.origin.y + (firstRect.size.height * 0.7))) { // second entity was actually on top....
    secondGameObject.didLandOnEntity(firstGameObject); } else if (firstRect.origin.x > secondRect.origin.x && firstRect.origin.x < (secondRect.origin.x + secondRect.size.width)) { // Hit from the RIGHT CGRect intersection = CGRectIntersection(firstRect, secondRect); // The NUDGE just offsets either object back to the left or right // After the nudging, they are exactly pressing against each other with no intersection firstGameObject.nudgeToRightOfIntersection(intersection); secondGameObject.nudgeToLeftOfIntersection(intersection); } else if ((firstRect.origin.x + firstRect.size.width) > secondRect.origin.x) { // hit from the LEFT CGRect intersection = CGRectIntersection(firstRect, secondRect); secondGameObject.nudgeToRightOfIntersection(intersection); firstGameObject.nudgeToLeftOfIntersection(intersection); } } } } } I think my collision detection code is pretty close, but obviously I'm doing something a little wrong. I really think it's to do with the way my jumps are checked (I wanted to make sure that a jump could happen from an angle, instead of only when the falling player was at a right angle to the player below). Can someone please help me here? I haven't been able to find many resources on how to do this properly (and thinking like a game developer is new for me). Thanks in advance!
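    (One hedged direction, implied by the "feels incorrect" observation above: gate the head-landing branch on relative vertical motion rather than overlap depth alone, so a player rising past a neighbour is never counted as landing. The velocity accessor below is invented for illustration; it is not in the code above:)

        // Sketch: count a head-landing only when the upper entity is moving
        // downward relative to the lower one (i.e. actually falling onto it);
        // otherwise resolve the contact as a side push.
        bool firstIsFalling = firstGameObject.velocityY() < secondGameObject.velocityY();
        if (firstIsFalling && firstRect.origin.y > (secondRect.origin.y + (secondRect.size.height * 0.7))) {
            firstGameObject.didLandOnEntity(secondGameObject);
        }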

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the (controversial) conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied. Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program. At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!n!) possible "interleavings" of those instructions (with just two instructions per thread, that is already six). Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic, bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads. So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues, if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.

    Read the article

  • Reactive Extensions vs FileSystemWatcher

    - by Joel Mueller
    One of the things that has long bugged me about the FileSystemWatcher is the way it fires multiple events for a single logical change to a file. I know why it happens, but I don't want to have to care - I just want to reparse the file once, not 4-6 times in a row. Ideally, there would be an event that only fires when a given file is done changing, rather than every step along the way. Over the years I've come up with various solutions to this problem, of varying degrees of ugliness. I thought Reactive Extensions would be the ultimate solution, but there's something I'm not doing right, and I'm hoping someone can point out my mistake. I have an extension method: public static IObservable<IEvent<FileSystemEventArgs>> GetChanged(this FileSystemWatcher that) { return Observable.FromEvent<FileSystemEventArgs>(that, "Changed"); } Ultimately, I would like to get one event per filename, within a given time period - so that four events in a row with a single filename are reduced to one event, but I don't lose anything if multiple files are modified at the same time. BufferWithTime sounds like the ideal solution. var bufferedChange = watcher.GetChanged() .Select(e => e.EventArgs.FullPath) .BufferWithTime(TimeSpan.FromSeconds(1)) .Where(e => e.Count > 0) .Select(e => e.Distinct()); When I subscribe to this observable, a single change to a monitored file triggers my subscription method four times in a row, which rather defeats the purpose. If I remove the Distinct() call, I see that each of the four calls contains two identical events - so there is some buffering going on. Increasing the TimeSpan passed to BufferWithTime seems to have no effect - I went as high as 20 seconds without any change in behavior. This is my first foray into Rx, so I'm probably missing something obvious. Am I doing it wrong? Is there a better approach? Thanks for any suggestions...
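    (One direction worth a try, hedged: this is a sketch against the Rx operators of that era, GroupBy and Throttle, not tested code. Throttling per file path collapses a burst of Changed events for one file into a single notification once the file has been quiet for a second, while leaving other files' bursts independent:)

        var fileChanged = watcher.GetChanged()
            .Select(e => e.EventArgs.FullPath)
            .GroupBy(path => path)                                  // one inner stream per file
            .SelectMany(g => g.Throttle(TimeSpan.FromSeconds(1)));  // emit after 1s of silence

        fileChanged.Subscribe(path => Console.WriteLine("Reparse: " + path));

    The key difference from BufferWithTime is that Throttle restarts its timer on every event, so it naturally waits for the end of a burst instead of slicing time into fixed windows.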

    Read the article

  • Using Visual Studio 2008 to Assemble, Link, Debug, and Execute MASM 6.11 Assembly Code

    - by Kreychek
    I would like to use Visual Studio 2008 to the greatest extent possible while effectively compiling/linking/building/etc. code as if all these build processes were being done by the tools provided with MASM 6.11. The exact version of MASM does not matter, so long as it's within the 6.x range, as that is what my college is using to teach 16-bit assembly. I have done some research on the subject and have come to the conclusion that there are several options: Reconfigure VS to call the MASM 6.11 executables with the same flags, etc. as MASM 6.11 would natively do. Create intermediary batch file(s) to be called by VS to then invoke the proper commands for MASM's linker, etc. Reconfigure VS's built-in build tools/rules (assembler, linker, etc.) to provide an environment identical to the one used by MASM 6.11. Option (2) was brought up when I realized that the options available in VS's "External Tools" interface may be insufficient to correctly invoke MASM's build tools, thus a batch file to interpret VS's strict method of passing arguments might be helpful, as a lot of my learning about how to get this working involved my manually calling ML.exe, LINK.exe, etc. from the command prompt. Below are several links that may prove useful in answering my question. Please keep in mind that I have read them all and none are the actual solution. I can only hope my specifying MASM 6.11 doesn't prevent anyone from contributing a perhaps more generalized answer. A similar method to Option (2), but the users on the thread are not contactable: http://www.codeguru.com/forum/archive/index.php/t-284051.html (also, I have my doubts about the necessity of an intermediary batch file) An out-of-date explanation of my question: http://www.cs.fiu.edu/~downeyt/cop3402/masmaul.html Probably the closest thing I've come to a definitive solution, but it refers to a suite of tools from something besides MASM, and also uses a batch file: http://www.kipirvine.com/asm/gettingStarted/index.htm#16-bit I apologize if my terminology for the tools used in each step of the code-to-exe process is off, but since I'm trying to reproduce the entirety of the steps between completion of writing the code and generating an executable, I don't think it matters much.
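    (For what it's worth, a sketch of what the intermediary batch file in option (2) might look like. The flags are illustrative MASM 6.x switches, not a tested configuration, and the %1 convention assumes VS's External Tools passes the item's file name as the first argument:)

        @echo off
        REM build.bat - assemble and link one 16-bit module; %1 = name without extension
        ML.EXE /c /Zi /Fl %1.asm
        LINK.EXE /CODEVIEW %1.obj, %1.exe, %1.map, , ,

    Here /c assembles without linking, /Zi and /CODEVIEW keep CodeView debug information, /Fl emits a listing file, and the comma-separated LINK fields are the old 16-bit linker's positional obj/exe/map/libraries/def arguments.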

    Read the article

  • git pull gives error: 401 Authorization Required while accessing https://git.foo.com/bar.git

    - by spuder
    My MacBook Pro is able to clone/push/pull from the company git server. My CentOS 6.3 VM gets a 401 error: git clone https://git.acme.com/git/torque-setup error: The requested URL returned error: 401 Authorization Required while accessing https://git.acme.com/git/torque-setup/info/refs As a workaround, I've tried creating a folder with an empty repository, then setting the remote to the company server. I get the same error when trying a git pull. The remotes are identical between the machines. MacBook Pro (working): git --version git version 1.7.10.2 (Apple Git-33) git remote -v origin https://git.acme.com/git/torque-setup (fetch) origin https://git.acme.com/git/torque-setup (push) CentOS 6.3 (not working): yum install -y git git --version git version 1.7.1 git remote -v origin https://git.acme.com/git/torque-setup (fetch) origin https://git.acme.com/git/torque-setup (push) The git server only allows https, not git or ssh connections. Why is the MacBook Pro able to do a git pull, while the CentOS machine can't? Solution Update 2013-5-15: As jku mentioned, the culprit is the old version of git installed on the CentOS box. Unfortunately, 1.7.1 is what you get when you run yum install git. The workaround is to manually install a newer version of git, or simply add the username to the repo URL: git clone https://[email protected]/git/torque-setup

    Read the article

  • How to invoke the same msbuild target twice with different parameters from within an msbuild project file

    - by mark
    Dear ladies and sirs. I have the following piece of msbuild code: <PropertyGroup> <DirA>C:\DirA\</DirA> <DirB>C:\DirB\</DirB> </PropertyGroup> <Target Name="CopyToDirA" Condition="Exists('$(DirA)') AND '@(FilesToCopy)' != ''" Inputs="@(FilesToCopy)" Outputs="@(FilesToCopy -> '$(DirA)%(Filename)%(Extension)')"> <Copy SourceFiles="@(FilesToCopy)" DestinationFolder="$(DirA)" /> </Target> <Target Name="CopyToDirB" Condition="Exists('$(DirB)') AND '@(FilesToCopy)' != ''" Inputs="@(FilesToCopy)" Outputs="@(FilesToCopy -> '$(DirB)%(Filename)%(Extension)')"> <Copy SourceFiles="@(FilesToCopy)" DestinationFolder="$(DirB)" /> </Target> <Target Name="CopyFiles" DependsOnTargets="CopyToDirA;CopyToDirB"/> So invoking the target CopyFiles copies the relevant files to $(DirA) and $(DirB), provided they are not already there and up-to-date. But the targets CopyToDirA and CopyToDirB look identical except one copies to $(DirA) and the other - to $(DirB). Is it possible to unify them into one target first invoked with $(DirA) and then with $(DirB)? Thanks.
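    (A sketch of the usual answer to this, using only stock MSBuild constructs and the property/item names from the snippet above: keep one parameterized target and invoke the project recursively with the <MSBuild> task. Each distinct Properties list gets its own execution of the target, so it runs once per directory. DestDir is an invented parameter name:)

        <Target Name="CopyFiles">
          <MSBuild Projects="$(MSBuildProjectFile)" Targets="CopyToDir"
                   Properties="DestDir=$(DirA)" />
          <MSBuild Projects="$(MSBuildProjectFile)" Targets="CopyToDir"
                   Properties="DestDir=$(DirB)" />
        </Target>

        <Target Name="CopyToDir"
                Condition="Exists('$(DestDir)') AND '@(FilesToCopy)' != ''"
                Inputs="@(FilesToCopy)"
                Outputs="@(FilesToCopy -> '$(DestDir)%(Filename)%(Extension)')">
          <Copy SourceFiles="@(FilesToCopy)" DestinationFolder="$(DestDir)" />
        </Target>

    The incremental-build Inputs/Outputs carry over unchanged; the only cost is that the project is evaluated once per invocation.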

    Read the article

  • Using M2Crypto to save and load X509 certs in pem files

    - by Brock Pytlik
    I would expect that if I have an X509 cert as an object in memory, save it as a pem file, then load it back in, I would end up with the same cert I started with. This seems not to be the case, however. Let's call the original cert A, and the cert loaded from the pem file B. A.as_text() is identical to B.as_text(), but A.as_pem() differs from B.as_pem(). To say the least, I'm confused by this. As a side note, if A has been signed by another entity C, then A will verify against C's cert, but B will not. I've put together a tiny sample program to demonstrate what I'm seeing. When I run this, the second RuntimeError is raised. Thanks, Brock #!/usr/bin/python2.6 import M2Crypto as m2 import time cur_time = m2.ASN1.ASN1_UTCTIME() cur_time.set_time(int(time.time()) - 60*60*24) expire_time = m2.ASN1.ASN1_UTCTIME() # Expire certs in 1 hour. expire_time.set_time(int(time.time()) + 60 * 60 * 24) cs_rsa = m2.RSA.gen_key(1024, 65537, lambda: None) cs_pk = m2.EVP.PKey() cs_pk.assign_rsa(cs_rsa) cs_cert = m2.X509.X509() # These two seem the minimum necessary to make the as_text function call work # at all cs_cert.set_not_before(cur_time) cs_cert.set_not_after(expire_time) # This seems necessary to fill out the complete cert without errors. cs_cert.set_pubkey(cs_pk) # I've tried with the following set lines commented out and not commented. cs_name = m2.X509.X509_Name() cs_name.C = "US" cs_name.ST = "CA" cs_name.OU = "Fake Org CA 1" cs_name.CN = "www.fakeorg.dex" cs_name.Email = "[email protected]" cs_cert.set_subject(cs_name) cs_cert.set_issuer_name(cs_name) cs_cert.sign(cs_pk, md="sha256") orig_text = cs_cert.as_text() orig_pem = cs_cert.as_pem() print "orig_text:\n%s" % orig_text cs_cert.save_pem("/tmp/foo") tcs = m2.X509.load_cert("/tmp/foo") tcs_text = tcs.as_text() tcs_pem = tcs.as_pem() if orig_text != tcs_text: raise RuntimeError( "Texts were different.\nOrig:\n%s\nAfter load:\n%s" % (orig_text, tcs_text)) if orig_pem != tcs_pem: raise RuntimeError( "Pems were different.\nOrig:\n%s\nAfter load:\n%s" % (orig_pem, tcs_pem))

    Read the article

  • Using jquery validate with multiple fields of the same name

    - by Matt H
    I am trying to get jquery validate to work on multiple fields. Reason being I have dynamically generated fields added and they are simply a list of numbers. So I thought I'd put together a basic example and followed the concept from the accepted answer in the following link: http://stackoverflow.com/questions/931687/using-jquery-validate-plugin-to-validate-multiple-form-fields-with-identical-name However, it's not doing anything useful. Why is it not working? <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <script src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/validate/lib/jquery.delegate.js"></script> <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/validate/jquery.validate.js"></script> <script> $("#submit").click(function(){ $("field").each(function(){ $(this).rules("add", { required: true, email: true, messages: { required: "Specify a valid email" } }); }) }); $(document).ready(function(){ $("#myform").validate(); }); </script> </head> <body> <form id="myform"> <label for="field">Required, email: </label> <input class="left" id="field" name="field" /> <input class="left" id="field" name="field" /> <input class="left" id="field" name="field" /> <input class="left" id="field" name="field" /> <br/> <input type="submit" value="Validate!" id="submit" name="submit" /> </form> </body> </html>
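    (For comparison, the approach in the linked answer keys the rules off a shared class rather than a duplicated id, and only adds them once the validator exists. A hedged sketch of the page above rearranged that way; the class name is invented:)

        $(document).ready(function () {
            $("#myform").validate();
            // ids must be unique, so select the inputs by class instead,
            // and add rules only after validate() has initialized the form
            $("input.email-field").each(function () {
                $(this).rules("add", {
                    required: true,
                    email: true,
                    messages: { required: "Specify a valid email" }
                });
            });
        });

    In the markup, each input would then carry class="email-field" and a distinct name, since the plugin tracks fields by their name attribute.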

    Read the article

  • Does Subversion have an analogue to VSS's links?

    - by bta
    I am migrating a Visual SourceSafe code repository to Subversion and I am running into a problem. Here is a simplified layout of our current source code tree (in VSS): project_root\ |-libs\ |-tools\ |-arch_1\ | |-include | |-source |-arch_2\ |-include |-source My problem is in our two arch_ folders. Each arch_ folder will be built for a different hardware architecture, but the contents of the two folders are practically identical. The files in arch_2 are merely VSS links to the files in arch_1, with only a small handful of exceptions. Work is generally checked into and out of the arch_1 folder, and the VSS links make sure that any code checked in here is updated in the arch_2 folder as well. Moving to Subversion, is there anything that will behave like VSS's links? That is, is there a way to have two files in separate folders magically associated with one another such that they will always be in sync with each other (changes to one will affect the other as well)? Note: I know the correct answer here is to fix the build system. The build system on this project was pieced together roughly a decade ago, back when our compiler/build system wasn't intelligent enough to compile the same folder full of source code for two different architectures. Thanks to make and updated compilers, we can re-write the build system to eliminate this dependency on two parallel source folders. However, this will take time that we don't have at the moment (we are losing our license to our VSS server and are being forced to migrate on rather short notice). I am hoping to find a Subversion solution to this problem because at the moment, our time would be much better spent making the migration run smoothly than re-writing the build system (which is next on my to-do list!). Thank you for your help!
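    (The closest Subversion analogue is probably svn:externals, which from Subversion 1.6 onward can point at individual files as well as directories. A hedged sketch, with paths invented to match the layout above; note that an external is a checkout-time link, so whether edits made via arch_2 can be committed back through it depends on the client version:)

        # make arch_2/source/foo.c track arch_1's copy (^/ = repository root)
        svn propset svn:externals "^/project_root/arch_1/source/foo.c foo.c" arch_2/source
        svn commit -m "link foo.c from arch_1 into arch_2" arch_2/source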

    Read the article

  • How to add dynamic profile fields in Invision Power Board?

    - by user361908
    I run a game server and want to link a person's in-game character name and stats to Invision Power Board. I've set up IPB so players currently log in with their in-game login. That means their username on the forum is the same as their username for the game. They can have multiple characters on 1 account, so ideally I'd like to allow them to choose a main character and display an actual image of that character, and allow them to display other characters if they are online. Currently I'm doing something like this by hacking profileFields.php, but it's messy and not very efficient on the user or server end. My code currently uses 2 custom fields which the player can enter their character names in. To display only their main character, they enter the name in the first field. To also display other characters if they are online, they enter the same name into the second field. To resolve the IDs I have to run a lot of queries. I know PHP but I am not familiar with IPB's code at all. I just need pointing in a direction where I can combine the 2 fields into 1 field. tl;dr: Here is my setup: Invision Power Board 3. Data is stored in MySQL on the same server the forum is hosted on. Usernames on the forum are identical to usernames in the game. Here is a breakdown of what I'd like to do: In the edit profile section I need to resolve the forum username to the game's account id, then: Display a list of characters and allow them to choose which characters they want to display if they are online, as well as a default character that will be displayed if none are online. In the posts user info pane: Display the online character, or the default if none are online. Here is what I need to know: How to generate a list of characters in the profile edit form and allow selection (checkbox) of each character to display, as well as the selection of a default character (radio or dropdown?). How to fetch the data and place it in the posts user info pane.

    Read the article

  • how to resolve this .Net 3.5 warning/error?

    - by 5YrsLaterDBA
    I have three machines: one with VS2008 installed, and another two with SDK 6 and Framework 3.5 installed (one of these two is a build machine). When I use MSBuild to build our application, all of them get this warning: C:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets : warning MSB3245: Could not resolve this reference. Could not locate the assembly "WPFToolkit, Version=3.5.40128.1, Culture=neutral, PublicKeyToken=31bf3856ad364e35". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. And the build machine comes back with some errors: scsm\SCSM.cs(234,13): error CS1501: No overload for method 'Invoke' takes '1' arguments scsm\SCSM.cs(235,13): error CS1501: No overload for method 'Invoke' takes '1' arguments scsm\SCSM.cs(304,13): error CS1501: No overload for method 'Invoke' takes '1' arguments scsm\SCSM.cs(314,13): error CS1501: No overload for method 'Invoke' takes '1' arguments scsm\SCSM.cs(317,13): error CS1501: No overload for method 'Invoke' takes '1' arguments scsm\SCSM.cs(323,17): error CS1501: No overload for method 'Invoke' takes '1' arguments scsm\SCSM.cs(324,17): error CS1501: No overload for method 'Invoke' takes '1' arguments scsm\SCSM.cs(325,17): error CS1501: No overload for method 'Invoke' takes '1' arguments but the other machines pass without errors. Resources are identical on those three machines. I searched online but cannot find an answer. Can anybody here help me resolve this? Thanks.

    Read the article

  • Schliemann's method of programming language learning

    - by DVK
    Background: the 19th-century German archeologist Heinrich Schliemann was of course famous for his successful quest to find and excavate the city of Troy (an actual archeological site for the Troy of Homer's Iliad). However, he is just as famous for being an astonishing learner of languages - within the space of two years, he taught himself fluent Dutch, English, French, Spanish, Italian and Portuguese, and later went on to learn seven more, including both modern and ancient Greek. One of the methods he famously used was comparison of a known text, e.g. take a book in a language one is fluent in, take a good translation of a book in a language you wish to learn, and go over them in parallel (various sources cited the book used by Schliemann to be the Bible, or, as the link above states, a novel). Now, for the actual question. Has anyone used (or heard of) an equivalent of Schliemann's method for learning a new programming language? E.g. instead of basing the learning on references and tutorials, take a somewhat comprehensive set of programs known to have high-quality code in both languages implementing similar/identical algorithms and learn by comparing them? I'm curious about either personal experiences of applying such an approach, or references to something published, or the existence of codebases which could be used for such an approach. What got me thinking about the idea was Project Euler and some code snippets I saw on SO, in C++, Perl and Lisp.

    Read the article

  • jQuery DataTables: Problems with POST Server Side JSON output

    - by Tim
    Hello Everyone, I am trying to get my datatable to take POST JSON output from my server. This is my client side code: <script> $(document).ready(function() { $('#example').dataTable( { "bProcessing": true, "bServerSide": true, "sAjaxSource": "http://localhost/staff/jobs/my_jobs", "fnServerData": function ( sSource, aoData, fnCallback ) { $.ajax( { "dataType": 'json', "type": "POST", "url": sSource, "data": aoData, "success": fnCallback } ); } } ); } ); </script> Now I have copied and pasted the server side code found in the DataTables examples found here. When I change my sAjaxSource to view this page, the table doesn't move beyond 'processing'. When I view the JSON directly I see this output: {"sEcho": 1, "iTotalRecords": 1, "iTotalDisplayRecords": 1, "aaData": [ ["Trident","First Ever Job"]] } Just for fun I went to the POST server-side example, copied some of the JSON they are using for their example, and just PHP echoed it straight out of another page. This is the output of that page: {"sEcho": 1, "iTotalRecords": 1, "iTotalDisplayRecords": 1, "aaData": [ ["Trident","Internet Explorer 4.0"]] } Here is where it gets interesting. The JSON that has been processed by the server fails to work, yet the JSON simply echo'd by the same server on a different page does work... even though both outputs are almost identical. I hope someone can shed some light on this, because as the tree said to the lumberjack... I'm stumped. Thanks, Tim

    Read the article

  • cURL gets response with utf-8 BOM

    - by Reshat Belyalov
    In my script I send data with cURL, with CURLOPT_RETURNTRANSFER enabled. The response is json encoded data. When I try to json_decode it, it returns null. Then I found that the response contains UTF-8 BOM symbols at the beginning of the string. Here are some experiments: $data = curl_exec($ch); echo $data; the result is {"field_1":"text_1","field_2":"text_2","field_3":"text_3"} $data = curl_exec($ch); echo mb_detect_encoding($data); result - UTF-8 $data = curl_exec($ch); echo mb_convert_encoding($data, 'UTF-8', mb_detect_encoding($data)); // identical to echo mb_convert_encoding($data, 'UTF-8', 'UTF-8'); result - {"field_1":"text_1","field_2":"text_2","field_3":"text_3"} The one thing that helps is removing the first 3 bytes: if (substr($data, 0, 3) == pack('CCC', 239, 187, 191)) { $data = substr($data, 3); } But what if there will be another BOM? So the question is: How to detect the right encoding of a cURL response? OR how to detect what BOM has arrived? Or maybe how to convert a response with a BOM? Thanks.
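    (One hedged generalization of the pack() check above: since the JSON here is UTF-8 on the wire, stripping a leading UTF-8 BOM with a regex anchored to the start of the string covers the common case without touching BOM-like bytes later in the payload:)

        // PCRE interprets \xEF etc. itself, so this works in a single-quoted pattern
        $data = preg_replace('/^\xEF\xBB\xBF/', '', $data);
        $decoded = json_decode($data, true);

    Detecting other BOMs (UTF-16/32) would mean checking the first 2-4 bytes against their signatures, but a response that mb_detect_encoding reports as UTF-8 will almost certainly only ever carry the 3-byte one.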

    Read the article

  • Is there a problem with scrollTop in Chrome?

    - by Shaun
    I am setting scrollTop and scrollLeft for a div that I am working with. The code looks like this: div.scrollLeft = content.cx*scalar - parseInt(div.style.width)/2; div.scrollTop = content.cy*scalar - parseInt(div.style.height)/2; This works just fine in FF, but only scrollLeft works in chrome. As you can see, the two use almost identical equations and as it works in FF I am just wondering if this is a problem with Chrome? Update: If I switch the order of the assignments then scrollTop will work and scrollLeft won't. <div id="container" style = "height:600px; width:600px; overflow:auto;" onscroll = "updateCenter()"> <script> var div = document.getElementById('container'); function updateCenter() { svfdim.cx = (div.scrollLeft + parseFloat(div.style.width)/2)/scalar; svfdim.cy = (div.scrollTop + parseFloat(div.style.height)/2)/scalar; } function updateScroll(svfdim, scalar, div) { div.scrollTop = svgdim.cy*scalar - parseFloat(div.style.height)/2; div.scrollLeft = svgdim.cx*scalar - parseFloat(div.style.width)/2; } function resizeSVG(Root) { Root.setAttribute("height", svfdim.height*scalar); Root.setAttribute("width", svfdim.width*scalar); updateScroll(svgdim, scalar, div); } </script>

    Read the article

  • Rotate a Swing JLabel

    - by Johannes Rössel
    I am currently trying to implement a Swing component, inheriting from JLabel, which should simply represent a label that can be oriented vertically. Beginning with this: public class RotatedLabel extends JLabel { public enum Direction { HORIZONTAL, VERTICAL_UP, VERTICAL_DOWN } private Direction direction; I thought it'd be a nice idea to just alter the results from getPreferredSize(): @Override public Dimension getPreferredSize() { // swap size for vertical alignments switch (getDirection()) { case VERTICAL_UP: case VERTICAL_DOWN: return new Dimension(super.getPreferredSize().height, super .getPreferredSize().width); default: return super.getPreferredSize(); } } and then simply transform the Graphics object before I offload painting to the original JLabel: @Override protected void paintComponent(Graphics g) { Graphics2D gr = (Graphics2D) g.create(); switch (getDirection()) { case VERTICAL_UP: gr.translate(0, getPreferredSize().getHeight()); gr.transform(AffineTransform.getQuadrantRotateInstance(-1)); break; case VERTICAL_DOWN: // TODO break; default: } super.paintComponent(gr); } } It seems to work—somehow—in that the text is now displayed vertically. However, placement and size are off: actually, the width of the background (orange in this case) is identical to the height of the surrounding JFrame, which is ... not quite what I had in mind. Any ideas how to solve that in a proper way? Is delegating rendering to superclasses even encouraged?
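    (A hedged guess at the symptom: paintComponent above rotates about the preferred size, but Swing paints the component at whatever size the layout manager actually granted it; a JFrame's default content pane uses BorderLayout, whose center ignores preferred sizes entirely, which would explain the label stretching to the frame's full width. A sketch using the component's actual size instead, with the VERTICAL_DOWN case filled in by symmetry:)

        @Override
        protected void paintComponent(Graphics g) {
            Graphics2D gr = (Graphics2D) g.create();
            switch (getDirection()) {
            case VERTICAL_UP:
                // rotate about the real, current size, not the preferred one
                gr.translate(0, getHeight());
                gr.transform(AffineTransform.getQuadrantRotateInstance(-1));
                break;
            case VERTICAL_DOWN:
                gr.translate(getWidth(), 0);
                gr.transform(AffineTransform.getQuadrantRotateInstance(1));
                break;
            default:
            }
            super.paintComponent(gr);
            gr.dispose();
        }

    Testing inside a panel with a FlowLayout, which does honor getPreferredSize(), is a quick way to separate the rotation logic from the layout behaviour.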

    Read the article
