Search Results

Search found 7346 results on 294 pages for 'touch flo 3d'.


  • Separate collision mesh model?

    - by Menno Gouw
    I want to have another go at 3D in XNA. From some other games I have seen that they just have a separate, very low-poly "cage" model around the environment model, but I cannot find any reference to this technique. I don't have much experience with 3D in XNA either. Is it possible to have this cage within each of my environment models already? Let's say I name the meshes within the .FBX wall and col_wall. How would I reference these different meshes within XNA? The player would just have a tight collision cube around him. To make it a bit more efficient I will divide the map up into cubes and only calculate collision if the player is inside one. Question two: I can't find anything on cube-vs-mesh collision. Is there a method for this? Or perhaps it is possible to build my collision cage out of cubes in the 3D app and, on loading the models in XNA, replace them directly with cubes? Then I could just do box-to-box collision, which should be very cheap and still give the player the ability to move over ledges on the static models.
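    A minimal sketch of how the named meshes can be pulled apart once the model is loaded (assuming XNA 4.0 and that the FBX exporter preserves mesh names; the "col_" prefix here is just the naming convention from the question):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class CollisionCage
        {
            // Split a loaded Model into drawable meshes and collision volumes
            // by name; meshes prefixed "col_" are treated as the low-poly cage.
            public static List<BoundingSphere> ExtractCage(Model model, Matrix world)
            {
                var volumes = new List<BoundingSphere>();
                foreach (ModelMesh mesh in model.Meshes)
                {
                    if (mesh.Name.StartsWith("col_"))           // e.g. "col_wall"
                        volumes.Add(mesh.BoundingSphere.Transform(world));
                }
                return volumes;
            }
        }

    For the cube-for-cube idea, XNA's BoundingBox.Intersects(BoundingBox) already gives the cheap box-to-box test, so replacing the cage meshes with BoundingBox instances at load time is a workable route.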

    Read the article

  • How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics?

    - by PeterDC
    Background: I'm a 3D artist (as a hobby) and have recently started using Ubuntu 12.04 LTS as a dual-boot with Windows 7. It's running on a fairly new 64-bit Toshiba laptop with an nVidia GeForce GT 540M GPU (graphics card). It also, however, has Intel integrated graphics (which I suspect Ubuntu has been using). So, when I render my 3D scenes to images on Windows, I am able to choose between using my CPU or my nVidia GPU (faster). From the 3D application, I can set the GPU to use either CUDA or OpenCL. In Ubuntu, there's no GPU option. After doing (too much?) research on the issues with Linux and the nVidia Optimus technology, I am slightly more enlightened, but a lot more confused. I don't care one bit about the Optimus technology, as battery life is not by any means an issue for me. Here's my question: What can I do to be able to use CUDA-utilizing programs (such as Blender) on my nVidia GPU in Ubuntu? Will I need nVidia drivers? (I have heard they don't play nicely with Optimus setups on Linux.) Is there at least a way to use OpenCL on my GPU in Ubuntu?
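    One route that was commonly suggested for Optimus laptops on 12.04, offered here as a sketch rather than a guaranteed fix: install the Bumblebee project (which targets exactly this dual-GPU setup) alongside the proprietary driver and launch GPU programs through optirun:

        # Bumblebee packages for Ubuntu 12.04 lived in a PPA at the time:
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        # Run a CUDA/OpenCL-capable app on the discrete nVidia GPU:
        optirun blender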

    Read the article

  • CGAffineTransformMakeRotation goes the other way after 180 degrees (-3.14)

    - by TheKillerDev
    So, I am trying to do a very simple 2D disc rotation that follows the user's touch on it, just like a DJ deck or something. It is working, but there is a problem: after a certain amount of rotation, it starts going backwards. This happens after 180 degrees, or, as I saw while logging the angle, at -3.14 (pi). I was wondering how I can achieve an infinite loop, i.e. the user can keep rotating and rotating to either side just by sliding a finger. A second question: is there any way to speed up the rotation? Here is my code right now:

        #import <UIKit/UIKit.h>

        @interface Draggable : UIImageView {
            CGPoint firstLoc;
            UILabel *fred;
            double angle;
        }
        @property (assign) CGPoint firstLoc;
        @property (retain) UILabel *fred;
        @end

        @implementation Draggable
        @synthesize fred, firstLoc;

        - (id)initWithFrame:(CGRect)frame {
            self = [super initWithFrame:frame];
            angle = 0;
            if (self) {
                // Initialization code
            }
            return self;
        }

        - (void)handleObject:(NSSet *)touches withEvent:(UIEvent *)event isLast:(BOOL)lst {
            UITouch *touch = [[[event allTouches] allObjects] lastObject];
            CGPoint curLoc = [touch locationInView:self];

            float fromAngle = atan2(firstLoc.y - self.center.y, firstLoc.x - self.center.x);
            float toAngle = atan2(curLoc.y - (self.center.y + 10), curLoc.x - (self.center.x + 10));
            float newAngle = angle + (toAngle - fromAngle);
            NSLog(@"%f", newAngle);

            CGAffineTransform cgaRotate = CGAffineTransformMakeRotation(newAngle);
            self.transform = cgaRotate;
            if (lst) angle = newAngle;
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [[[event allTouches] allObjects] lastObject];
            firstLoc = [touch locationInView:self];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            [self handleObject:touches withEvent:event isLast:NO];
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            [self handleObject:touches withEvent:event isLast:YES];
        }
        @end

    And in the ViewController:

        UIImage *tmpImage = [UIImage imageNamed:@"theDisc.png"];
        CGRect cellRectangle = CGRectMake(-1, self.view.frame.size.height,
                                          tmpImage.size.width, tmpImage.size.height);
        dragger = [[Draggable alloc] initWithFrame:cellRectangle];
        [dragger setImage:tmpImage];
        [dragger setUserInteractionEnabled:YES];
        dragger.layer.anchorPoint = CGPointMake(.5, .5);
        [self.view addSubview:dragger];

    I am open to new/cleaner/more correct ways of doing this too. Thanks in advance.
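    The backwards jump happens because atan2 returns values in (-pi, pi], so the raw difference toAngle - fromAngle flips sign when the touch crosses that seam. A hedged sketch of one common fix (my suggestion, not from the post): wrap the incremental delta back into (-pi, pi] before accumulating it inside handleObject:

        // Replace the plain difference with a wrapped one:
        float delta = toAngle - fromAngle;
        while (delta >  M_PI) delta -= 2.0f * M_PI;   // crossed the seam one way
        while (delta < -M_PI) delta += 2.0f * M_PI;   // crossed it the other way
        float newAngle = angle + delta;

    For the second question, multiplying delta by a constant factor greater than 1 before accumulating would make the disc turn faster than the finger.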

    Read the article

  • iPhone SDK vs. Windows Phone 7 Series SDK Challenge, Part 2: MoveMe

    In this series, I will be taking sample applications from the iPhone SDK and implementing them on Windows Phone 7 Series. My goal is to do as much of an apples-to-apples comparison as I can. This series will be written not only to compare and contrast how easy or difficult it is to complete tasks on either platform, how many lines of code are needed, etc., but I'd also like it to be a way for iPhone developers to either get started on Windows Phone 7 Series development, or for developers in general to learn the platform. Here's my methodology:

    1. Run the iPhone SDK app in the iPhone Simulator to get a feel for what it does and how it works, without looking at the implementation.
    2. Implement the equivalent functionality on Windows Phone 7 Series using Silverlight.
    3. Compare the two implementations based on complexity, functionality, lines of code, number of files, etc.
    4. Add some functionality to the Windows Phone 7 Series app that shows off a way to make the scenario more interesting, leverages an aspect of the platform, or uses a better design pattern to implement the functionality.

    You can download Microsoft Visual Studio 2010 Express for Windows Phone CTP here, and the Expression Blend 4 Beta here. If you're seeing this series for the first time, check out Part 1: Hello World.

    A note on methodology: in the prior post there was some feedback about lines of code not being a very good metric for this exercise. I don't really disagree; there's a lot more to this than lines of code, but I believe it is a relevant metric, even if it's not the ultimate one. And there's no perfect answer here. So I am going to continue to report the number of lines of code that I, as a developer, would need to write in these apps as a data point, and I'll leave it up to the reader to determine how that fits in with overall complexity. The first example was so basic that I think it was difficult to talk about in real terms. I think that as these apps get more complex, the subjective differences in concept count will be more important.

    MoveMe

    The MoveMe app is the main end-to-end app-writing example in the iPhone SDK, called Creating an iPhone Application. This application demonstrates a few concepts, including handling touch input, how to do animations, and how to do some basic transforms. The behavior of the application is pretty simple:

    - User touches the button: the button does a throb-type animation where it scales up and then back down briefly.
    - User drags the button: after a touch begins, moving the touch point will drag the button around with the touch.
    - User lets go of the button: the button animates back to its original position, but does a few small bounces as it reaches its original point, which makes the app fun and gives it an extra bit of interactivity.

    Now, how would I write an app that meets this spec for Windows Phone 7 Series, and how hard would it be? Let's find out!

    Implementing the UI

    Okay, let's build the UI for this application. In the HelloWorld example, we did all the UI design in Visual Studio and/or by hand in XAML. In this example, we're going to use the Expression Blend 4 Beta. You might be wondering when to use Visual Studio, when to use Blend, and when to do XAML by hand. Different people will have different takes on this, but here's mine:

    - XAML by hand: simple UI that doesn't contain animations, gradients, etc., and/or UI that I want to really optimize and craft when I know exactly what I want to do.
    - Visual Studio: basic UI layout, property setting, data binding, etc.
    - Blend: any serious design work needs to be done in Blend, including animations, handling states and transitions, styling and templating, and editing resources.

    As in Part 1, go ahead and fire up Visual Studio 2010 Express for Windows Phone (yes, soon it will take longer to say the name of our products than to start them up!), and create a new Windows Phone Application. As in Part 1, clear out the XAML from the designer. An easy way to do this is to just click on the design surface, hit Ctrl+A, then hit Delete. There's a little bit left over (the Grid.RowDefinitions element); just go ahead and delete that element too, so we're starting with a clean state of only one outer Grid element.

    To use Blend, we need to save this project. See, when you create a project with Visual Studio Express, it doesn't commit it to the disk (well, in a place where you can find it, at least) until you actually save the project. This is handy if you're doing some fooling around, because it doesn't clutter your disk with WindowsPhoneApplication23-like directories. But it's also kind of dangerous, since when you close VS, if you don't save the project, it's all gone. Yes, this has bitten me, so be careful to save the project/solution via Save All at least once.

    So, save and note the location on disk. Start Expression Blend 4 Beta, choose File > Open Project/Solution, and load your project. You should see just about the same thing you saw over in VS: a blank, black designer surface.

    Now, thinking about this application, we don't really need a button, even though it looks like one. We never click it. So we're just going to create a visual and use that. This is also true in the iPhone example above, where the visual is actually not a button either, but a JPG image with a nice gradient and round edges. We'll do something simple here that looks pretty good.

    In Blend, look in the tool pane on the left for the Border icon, hold it down to get the popout menu, and choose Border. Okay, now draw out a box in the middle of the design surface of about 300x100. The Properties pane to the left should show the properties for this item.

    First, let's make it more visible by giving it a border brush. Set the BorderBrush to white by clicking BorderBrush and dragging the color selector all the way to the upper right in the palette. Then, down a bit farther, make the BorderThickness 4 all the way around, and set the CornerRadius to 6. In the Layout section, set Width, Height, HorizontalAlignment and VerticalAlignment, and all four Margin values so the element is centered; you'll see the outline now sits in the middle of the design surface.

    Now let's give it a background color. Above BorderBrush, select Background, and click the third tab over: Gradient Brush. You'll see a gradient slider at the bottom, and if you click the markers, you can edit the gradient stops individually (or add more). In this case, you can select something you like, but here's what I chose:

    - Left stop: #BFACCFE2 (I just picked a spot on the palette and set opacity to 75%; no magic here, feel free to fiddle with these or just enter these numbers into the hex area and be done with it)
    - Right stop: #FF3E738F

    Okay, looks pretty good. Finally, set the name of the element in the Name field at the top of the Properties pane to welcome. Now let's add some text.
    Just hit T and it'll select the TextBlock tool automatically. Now draw out some area inside our welcome visual and type "Welcome!", then click on the design surface (to exit text-entry mode) and hit V to go back into selection mode (or the top item in the tool pane that looks like a mouse pointer). Click on the text again to select it. Just like the border, we want to center this, so set HorizontalAlignment and VerticalAlignment to Center, and clear the Margins. That's it for the UI, and it looks pretty good on the design surface. Okay, now the fun part.

    Adding Animations

    Using Blend to build animations is a lot of fun, and it's easy. In XAML, I can not only declare elements and visuals, but also animations that will affect those visuals. These are called Storyboards. To recap, we'll be doing two animations: the throb animation when the element is touched, and the center animation when the element is released after being dragged.

    The throb animation is just a scale transform, so we'll do that first. In the Objects and Timeline pane (left side, bottom half), click the little + icon to add a new Storyboard called touchStoryboard. The timeline view will appear. In there, click a bit to the right of 0 to create a keyframe at .2 seconds. Now, click on our welcome element (the Border, not the TextBlock in it), and scroll to the bottom of the Properties pane. Open up Transform, click the third tab (Scale), and set X and Y to 1.2. All of this says that, at .2 seconds, I want the X and Y size of this element to scale to 1.2. In fact, you can see this happen: push the Play arrow in the timeline view, and you'll see the animation run!

    Let's make two tweaks. First, we want the animation to automatically reverse so it scales up, then back down nicely. Click in the dropdown that says touchStoryboard in Objects and Timeline, then in the Properties pane check AutoReverse. Now run it again, and you'll see it go both ways. Let's make it even nicer by adding an easing function. First, click on the RenderTransform item in the Objects tree; then, in the Properties pane, you'll see a bunch of easing functions to choose from. Feel free to play with these and see how each runs. I chose Circle In, but some other ones are fun; Elastic In is kind of fun, but we'll stick with Circle In. That's it for that animation.

    Now, we also want an animation to move the Border back to its original position when the user ends the touch gesture. This is exactly the same process as above, but targeting a different transform property:

    1. Create a new animation called releaseStoryboard.
    2. Select a timeline point at 1.2 seconds.
    3. Click on the welcome Border element again.
    4. Scroll to the Transforms panel at the bottom of the Properties pane.
    5. Choose the first tab (Translate), which may already be selected.
    6. Set both X and Y values to 0.0 (we do this just to make the values stick, because the value is already 0 and we need Blend to know we want to save that value).
    7. Click on RenderTransform in the Objects tree.
    8. In the Properties pane, choose Bounce Out.
    9. Set Bounces to 6 and Bounciness to 4 (feel free to play with these as well).

    Okay, we're done.
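    For readers who prefer to see what Blend is writing on their behalf, here is roughly the XAML the throb storyboard comes out as (an illustrative sketch; exact property paths and values depend on how the transforms were created in Blend, so treat the details as assumptions rather than the article's actual output):

        <Storyboard x:Name="touchStoryboard" AutoReverse="True">
            <DoubleAnimation Duration="0:0:0.2" To="1.2"
                Storyboard.TargetName="welcome"
                Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleX)">
                <DoubleAnimation.EasingFunction>
                    <CircleEase EasingMode="EaseIn"/>
                </DoubleAnimation.EasingFunction>
            </DoubleAnimation>
            <DoubleAnimation Duration="0:0:0.2" To="1.2"
                Storyboard.TargetName="welcome"
                Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleY)"/>
        </Storyboard>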
    Note: if you want to test releaseStoryboard, you have to do something a little tricky, because the final value is the same as the initial value, so playing it does nothing. If you want to play with it, do the following:

    1. Next to the selection dropdown, hit the little "x" (Close Storyboard).
    2. Go to the Translate transform values for welcome.
    3. Set X and Y to 50 and 200, respectively (or whatever).
    4. Select releaseStoryboard again from the dropdown.
    5. Hit Play and see it run.
    6. Go into the object tree and select RenderTransform to change the easing function.
    7. When you're done, hit the Close Storyboard "x" again and set the values in Transform/Translate back to 0.

    Wiring Up the Animations

    Okay, now go back to Visual Studio. You'll get a prompt due to the modification of MainPage.xaml; hit Yes. In the designer, click on the welcome Border element. In the Property Browser, hit the Events button, then double-click each of ManipulationStarted, ManipulationDelta, and ManipulationCompleted. You'll need to flip back to the designer from code after each double-click.

    It's code time. Here we go. Three event handlers have been created for us:

    - welcome_ManipulationStarted: executes when a manipulation begins. Think of it as MouseDown.
    - welcome_ManipulationDelta: executes each time a manipulation changes. Think MouseMove.
    - welcome_ManipulationCompleted: executes when the manipulation ends. Think MouseUp.

    Now, in ManipulationStarted, we want to kick off the throb animation that we called touchStoryboard. That's easy:

        private void welcome_ManipulationStarted(object sender, ManipulationStartedEventArgs e)
        {
            touchStoryboard.Begin();
        }

    Likewise, when the manipulation completes, we want to re-center the welcome visual with our bounce animation:

        private void welcome_ManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
        {
            releaseStoryboard.Begin();
        }

    Note there is actually a way to kick off these animations from Blend directly via something called Triggers, but I think it's clearer to show what's going on like this. A Trigger basically allows you to say "when this event fires, trigger this Storyboard", so it's the exact same logical process as above, but without the code.

    But how do we get the object to move? Well, for that we really don't want an animation, because we want it to respond immediately to user input.
    We do this by directly modifying the transform to match the offset for the manipulation, and then we'll let the animation bring it back to zero when the manipulation completes. The manipulation events do a great job of keeping track of all the stuff that you usually had to do yourself when doing drags: where you started from, how far you've moved, etc. So we can easily modify the position as below:

        private void welcome_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
        {
            CompositeTransform transform = (CompositeTransform)welcome.RenderTransform;

            transform.TranslateX = e.CumulativeManipulation.Translation.X;
            transform.TranslateY = e.CumulativeManipulation.Translation.Y;
        }

    That's it! Go ahead and run the app in the emulator. I suggest running without the debugger; it's a little faster (Ctrl+F5). If you've got a machine that supports DirectX 10, you'll see nice smooth GPU-accelerated graphics, which is also what it looks like on the phone, running at about 60 frames per second. If your machine does not support DX10 (like the laptop I'm writing this on!), it won't be quite as smooth, so you'll have to take my word for it!

    Comparing Against the iPhone

    This is an example where the flexibility and power of XAML meets the tooling of Visual Studio and Blend, and the whole experience really shines. Several things that are declarative and 100% toolable on Windows Phone 7 Series are done in code on the iPhone. In parentheses are the lines of code that I count to do these operations.

    PlacardView.m: 19 total LOC
    - Creating the view that hosts the button-like image and the text
    - Drawing the image that is the background of the button
    - Drawing the "Welcome" text over the image (I think you could technically do this step and/or the prior one using Interface Builder)

    MoveMeView.m: 63 total LOC
    - Constructing and running the scale (throb) animation (25)
    - Constructing the path describing the animation back to center, plus the bounce effect (38)

    Beyond the code count, my experience with doing this kind of thing in code is that it's VERY time intensive. When I was a developer back on Windows Forms, doing GDI+ drawing, we did this stuff a lot, and it took forever! You write some code, and even once you get it basically working, you see it's not quite right, so you go back, tweak the interval or the math a bit, run it again, etc. You can take a look at the iPhone code here to judge for yourself; scroll down to animatePlacardViewToCenter toward the bottom. I don't think this code is terribly complicated, but it's not what I'd call simple, and it's not at all simple to get right.

    And then there are a few other lines of code running around for setting up the ViewController and the Views, about 15 lines between MoveMeAppDelegate, PlacardView, and MoveMeView, plus the assorted declarations in the .h files.
    Adding those up, I conservatively get something like 100 lines of code (19 + 63 + 15 + declarations) on the iPhone that I have to write, by hand, to make this project work. The lines of code that I wrote in the examples above total 5 on Windows Phone 7 Series.

    In terms of incremental concept counts beyond the HelloWorld app, here's a shot at that:

    iPhone:
    - Drawing images
    - Drawing text
    - Handling touch events
    - Creating animations
    - Scaling animations
    - Building a path and animating along it

    Windows Phone 7 Series:
    - Laying out UI in Blend
    - Creating and testing basic animations in Blend
    - Handling touch events
    - Invoking animations from code

    This was actually the first example I tried converting, even before I did the HelloWorld, and I was pretty surprised. Some of this is luck, in that this app happens to match up with the Windows Phone 7 Series platform just perfectly. In terms of time, I wrote the above application, from scratch, in about 10 minutes. I don't know how long it would take a very skilled iPhone developer to write MoveMe on the iPhone from scratch, but if I were to write it in Silverlight the same way (e.g. all via code), I think it would likely take me at least an hour or two to get it all working right, maybe more if I ended up picking the wrong strategy or couldn't get the math right.

    Making Some Tweaks

    Silverlight contains a feature called Projections for doing a variety of 3D-like effects with a 2D surface, so let's play with that a bit. Go back to Blend and select the welcome Border in the object tree. In its properties, scroll down to the bottom, open Transform, and see Projection at the bottom. Set X, Y, and Z to 90. You'll see the element mostly disappear, replaced by a thin blue line. Now:

    1. Create a new animation called startupStoryboard.
    2. Set its key time to .5 seconds in the timeline view.
    3. Set the projection values above to 0 for X, Y, and Z.
    4. Save.

    Go back to Visual Studio, and add the Loaded handler shown below to the constructor:

        public MainPage()
        {
            InitializeComponent();

            SupportedOrientations = SupportedPageOrientation.Portrait;

            this.Loaded += (s, e) =>
            {
                startupStoryboard.Begin();
            };
        }

    If the code above looks funny, it's using something called a lambda in C#, which is an inline anonymous method. It's just a handy shorthand for creating a handler like the manipulation ones above. With this, you'll get a nice 3D-looking fly-in effect when the app starts up. Pretty cool!

    Read the article

  • Crop, Edit, and Print Photos in Windows 7 Media Center

    - by DigitalGeekery
    Windows Media Center is a nice application for managing and displaying your personal photos, but you may occasionally need to make some basic edits to your pictures. Today we'll take a look at how to crop, edit, and print photos right from Windows 7 Media Center.

    From within the Picture Library in Windows Media Center, choose a photo to work with, right-click, and select Picture Details. You can also access this option with a Media Center remote by clicking the "i" button. Note: you'll notice you have the option to rotate the picture from this menu. It is also available on the next screen.

    Rotate a picture

    Now you'll see more options on the Picture Details screen. From here you can Rotate, Print, Touch Up, Delete, or Burn a CD/DVD. To rotate the picture, simply select Rotate. Note: if you want your photo saved with the new orientation, you'll need to select Save from the Touch Up screen that we will look at later in the article. Each click rotates the picture 90 degrees clockwise, and you'll see the new orientation displayed on the Picture Details screen after you click Rotate.

    Print a picture

    From the Picture Details screen, select Print, then click Print again. Media Center automatically prints to your default printer, so make sure your desired target printer is set as the default.

    Crop and Edit Photos

    To edit or crop your photo, select Touch Up. Touch Up options include Crop, Contrast, and Red Eye removal. First, select the Crop button to crop the photo. You will see a cropping-area overlay appear on your photo. Select one of the buttons below it to adjust the location, size, and orientation of the area to be cropped. When you're happy with your selection, click Save. You'll be prompted to confirm the save; click Yes to permanently apply your edits.

    You can also apply Contrast or Red Eye adjustments to your photos. There aren't any advanced settings for these options; you merely toggle Contrast or Red Eye on or off by selecting the option. Be sure to click Save before exiting if you've made any changes you wish to permanently apply to the photos. This includes rotating the images.

    While this method is not likely to replace your favorite image editing software, it does give you the ability to make basic edits and print photos directly from Windows Media Center. With a Media Center remote, you can even do all your edits from the comfort of your recliner.

    Read the article

  • Silverlight Cream for March 10, 2011 -- #1058

    - by Dave Campbell
    In this issue: Ian T. Lackey, Peter Kuhn, WindowsPhoneGeek (x2), Jesse Liberty (x2), Martin Krüger, John Papa, Jeremy Likness, Karl Shifflett, and Colin Eberhardt.

    Above the fold:
    - Silverlight: "Silverlight TV 65: 3D Graphics" (John Papa)
    - WP7: "Developing a Windows Phone 7 Jump List Control" (Colin Eberhardt)

    Shoutouts: Telerik announced a special sale on their RadControls for WP7... check it out: RadControls for Windows Phone 7 - on Sale from March 16th at a Special Promo Price!

    From SilverlightCream.com:

    - Prism BootStrapper Load ModuleCatalog Async: Ian T. Lackey has a post up about reading the module catalog for Prism from an XML file asynchronously... fun stuff... this is how we kick-started our app.
    - XNA for Silverlight developers: Part 6 - Input (accelerometer): Peter Kuhn has Part 6 of his XNA for Silverlight devs series up at SilverlightShow. This post is on the use of the accelerometer... some great diagrams and explanations of its use, along with some code to play with... including a 'problems and pitfalls' section and some good external links.
    - Getting Started with Unit Testing in Silverlight for WP7: WindowsPhoneGeek has an introduction to unit testing in general, then moves into unit testing in Silverlight for WP7, providing 3 options with links to the materials and code demonstrating the concepts.
    - Using DockPanel in WP7: Responding to readers' questions, WindowsPhoneGeek's next post is on the DockPanel from the Silverlight Toolkit and using it in WP7... defined declaratively and in code.
    - Reactive Extensions - More About Chaining: Jesse Liberty has post number 10 on Rx up, a follow-on to the last one on chaining. This time he exercises the chaining aspect of SelectMany.
    - Yet Another Podcast #26 - Walt Ritscher: In his next post, Jesse Liberty has his 26th 'Yet Another Podcast' up, chatting with my friend Walt Ritscher. If you don't know who Walt is, check out the links Jesse has on the post... I'm sure you've crossed paths.
    - How to: Create a half square from a regular polygon (triangle): Martin Krüger demonstrates the exact placement of a half-square (isosceles right triangle), formed with a regular polygon in Blend... this is much more involved than I've made it sound... check out his post.
    - Silverlight TV 65: 3D Graphics: John Papa has Silverlight TV number 65 up, and it's all about the 3D graphics stuff we saw at the Firestarter. John is talking with Danny Riddel, the CEO of Archetype, the company that built the awesome 3D demo we all gushed over.
    - Jounce Part 12: Providing History-Based Back Navigation: Jeremy Likness has part 12 of his Jounce exploration up... discussing the stack of navigated pages that Jounce retains and providing a 'go back' functionality... with a good example of using it all.
    - Prism 4 Region Navigation with Silverlight Frame Navigation and Unity: Karl Shifflett has a post for all us Prism aficionados... Prism, Unity, and the Silverlight Frame Navigation framework. Some great external links for 'required reading' too.
    - Developing a Windows Phone 7 Jump List Control: Colin Eberhardt has an awesome tutorial up for creating a JumpList control for WP7... what a bunch of effort... this is a step-by-step description of designing the control he built and blogged about a while back... and it's still cool!

    Stay in the 'Light!

    Read the article

  • Converting OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that change how you must implement the code a bit. This is his code example:

        public Ray GetPickRay() {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();

            float windowWidth  = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();

            // get the mouse position in screen-space coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));

            double viewRatio = Math.tan(((float) Math.PI / (180.f / ViewAngle) / 2.00f)) * zoomFactor;
            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;

            // find the far and near camera spaces
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane),
                    (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane),
                    (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

            // unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

            // calculate the ray position and direction
            Vector3f rayPosition  = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x,
                    worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();

            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him, of course. This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY,
                XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane  = 200.0f;
            float viewAngle = 0.4 * 3.14;

            PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan(viewAngle / 2);
            PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan(viewAngle / 2);

            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar  = XMVectorSet(PRVecX * farPlane,  PRVecY * farPlane,  -farPlane,  1.0f);

            // Transform the 3D ray from view space to world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // inverse of the view matrix is the world matrix

            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar  = XMVector3TransformCoord(cameraSpaceFar, invMat);

            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes:

    - The mouse coordinates are already converted so that the top-left corner of the client window is (0,0) and the bottom-right is (800,600) (or whatever resolution you have).
    - I hadn't used any far or near plane before, so I just made up some arbitrary numbers for them. To my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers.
    - The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet.
    - I removed the variables aspectRatio and zoomFactor because I assumed they were related to some specific function of his game.

    Now I'm not sure, but I think the problem lies either within the mouse-to-view-space conversion (maybe we use different coordinate systems), or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance.

    Edit: One more note: my code is in C++.
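    Two things stand out when comparing the two ports, offered as a hedged diagnosis rather than a verified fix: the OpenGL version multiplies screenSpaceX by aspectRatio, which is needed for any non-square viewport, and an OpenGL camera looks down -Z while XMMatrixPerspectiveFovLH builds a left-handed view that looks down +Z. Under those assumptions, the view-space points would become:

        // Assumes a left-handed view matrix (XMMatrixPerspectiveFovLH).
        float aspect = ClientWidth / (float)ClientHeight;
        PRVecX = ((( 2.0f * mouseX) / ClientWidth) - 1.0f) * tan(viewAngle / 2.0f) * aspect;
        PRVecY = (1.0f - ((2.0f * mouseY) / ClientHeight)) * tan(viewAngle / 2.0f);

        // +Z instead of -Z: the left-handed camera looks down positive Z.
        XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, nearPlane, 1.0f);
        XMVECTOR cameraSpaceFar  = XMVectorSet(PRVecX * farPlane,  PRVecY * farPlane,  farPlane,  1.0f);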

    Read the article

  • "The connection has timed out" - Please help!

    - by gon
    I recently installed a fresh Ubuntu 12.04 LTS on a desktop, and the installation itself was successful (other than a 'grub rescue' issue that I encountered but fixed), but this connection problem is really giving me a headache. Symptoms:

    1. When I open the Firefox browser and try to connect to a website, it just hangs for a while saying "Connecting..." but eventually loads an error page: "The connection has timed out".
    2. It's not a browser problem (and I tried setting the ipv6 option to "true" in about:config), because running "sudo apt-get install [some-random-package]" in a terminal fails too ("E: Unable to locate package [package]"). All other operations that need internet access are not working either.
    3. I certainly see a wired network (called "eth1") in the Network Manager, and it says "Connection Established" after disconnecting and then connecting again.

    I have tried almost everything that could be found from Google search results, still no luck; the problems described slightly differ from mine, or the solutions just don't work. By the way, the machine didn't have internet access while installing Ubuntu 12.04 (I ignored the message that I need internet to install Ubuntu). Could this be a problem? I'm sorry, I don't remember if the internet worked or not on the previous version of Ubuntu. :( I would really appreciate your help... I don't even know what more to do if this fails too. Thanks!!

    Thanks for your comment. Here is the result of ifconfig:

        eth0      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b9
                  inet addr:10.10.65.185  Bcast:10.10.65.255  Mask:255.255.255.0
                  inet6 addr: fe80::7aac:c0ff:fe3d:b2b9/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:3907 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:771 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:393118 (393.1 KB)  TX bytes:73472 (73.4 KB)
                  Interrupt:16

        eth1      Link encap:Ethernet  HWaddr 78:ac:c0:3d:b2:b8
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:17

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:204 (204.0 B)  TX bytes:204 (204.0 B)

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         10.10.65.1      0.0.0.0         UG    0      0        0 eth0
        10.10.65.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0

    /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 8.8.8.8
        nameserver 8.8.4.4
        nameserver 10.81.1.8
        nameserver 10.1.2.10
        nameserver 127.0.0.1
        search yamatake.local

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback
        #auto eth0
        #iface eth0 inet dhcp
        #auto eth1
        #iface eth1 inet dhcp

    And I'll also include the result of 'sudo lshw -C network' in case it might help:

        *-network
             description: Ethernet interface
             product: NetXtreme BCM5764M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: eth0
             version: 10
             serial: 78:ac:c0:3d:b2:b9
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 ip=10.10.65.185 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:93 memory:fc000000-fc00ffff
        *-network
             description: Ethernet interface
             product: NetXtreme BCM5764M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             logical name: eth1
             version: 10
             serial: 78:ac:c0:3d:b2:b8
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5764m-v3.35 latency=0 link=no multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:94 memory:fb000000-fb00ffff
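    A detail worth noting in the output above (my observation, not part of the question): eth0 actually has link and an address, while eth1, the interface Network Manager reports, shows link=no and zero packets. A quick way to separate routing problems from DNS problems, under those assumptions:

        # If this works but name lookups fail, the problem is DNS, not routing:
        ping -c 3 8.8.8.8
        # If the gateway itself is unreachable, it is a local network problem:
        ping -c 3 10.10.65.1
        # Then test name resolution against a specific server explicitly:
        nslookup ubuntu.com 8.8.8.8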

    Read the article

  • FusionCharts vs GoogleCharts vs HighCharts suggestions required for commercial use

    - by Forte
    I find that FusionCharts v3 (evaluation) and HighCharts cannot be used for commercial purposes. Google Charts is the best option, but those are not as good-looking as either of the above. They don't even have 3D charts, although their Visualization API does support 3D somewhere, which I cannot find. Is there any open-source graphing or charting solution available that can be used in commercial products? I even looked into Open Flash Chart 2, but found that the developer left the project a long time ago and the currently available libs are way too buggy; I had to fix more than 50 bugs to get one chart working, and I tried to fix others but couldn't get pie charts etc. working. What I'm looking for is:

    1. An attractive 3D column chart.
    2. A 3D pie chart.
    3. A spline chart.
    4. A geographical chart.

    Does anyone know any open-source or free solution which can be used in commercial products? Cheers!

    Read the article

  • UIScrollView without paging but with touchesMoved

    - by BittenApple
    I have a UIScrollView that I use to display PDF pages in. I don't want to use paging (yet); I want to display content based on the touchesMoved event (i.e., after a horizontal swipe). This works (sort of), but instead of catching a single swipe and showing one page, the swipe seems to get broken into hundreds of pieces, and one swipe acts as if you're moving a slider! I have no clue what I am doing wrong. Here's the experimental "display next page" code, which works on single taps:

        - (void)nacrtajNovuStranicu:(CGContextRef)myContext {
            CGContextTranslateCTM(myContext, 0.0, self.bounds.size.height);
            CGContextScaleCTM(myContext, 1.0, -1.0);
            CGContextSetRGBFillColor(myContext, 255, 255, 255, 1);
            CGContextFillRect(myContext, CGRectMake(0, 0, 320, 412));

            size_t brojStranica = CGPDFDocumentGetNumberOfPages(pdfFajl);
            if (pageNumber < brojStranica) {
                pageNumber++;
            } else {
                // end of the PDF file, don't page any further
            }

            CGPDFPageRef page = CGPDFDocumentGetPage(pdfFajl, pageNumber);
            CGContextSaveGState(myContext);
            CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);
            CGContextConcatCTM(myContext, pdfTransform);
            CGContextDrawPDFPage(myContext, page);
            CGContextRestoreGState(myContext);

            // refresh the display
            [self setNeedsDisplay];
        }

    Here's the swiping code:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesBegan:touches withEvent:event];
            UITouch *touch = [touches anyObject];
            gestureStartPoint = [touch locationInView:self];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint currentPosition = [touch locationInView:self];

            CGFloat deltaX = fabsf(gestureStartPoint.x - currentPosition.x);
            CGFloat deltaY = fabsf(gestureStartPoint.y - currentPosition.y);

            if (deltaX >= kMinimumGestureLength && deltaY <= kMaximumVariance) {
                [self nacrtajNovuStranicu:(CGContextRef)UIGraphicsGetCurrentContext()];
            }
        }

    The code sits in the UIView that displays the PDF content. Perhaps I should place it in the UIScrollView instead, or is the "display next page" code wrong?
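    Two likely causes, offered as a reading of the code rather than a confirmed answer: touchesMoved fires dozens of times per swipe, so once deltaX crosses the threshold, the page turn runs on every subsequent move event; and UIGraphicsGetCurrentContext is only valid inside drawRect:. A hedged sketch of the usual restructuring: advance the page once, when the gesture ends, and let drawRect: do the drawing (this assumes the pageNumber increment is removed from nacrtajNovuStranicu:):

        // State change happens once per completed swipe...
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            CGPoint endPoint = [[touches anyObject] locationInView:self];
            CGFloat deltaX = fabsf(gestureStartPoint.x - endPoint.x);
            CGFloat deltaY = fabsf(gestureStartPoint.y - endPoint.y);
            if (deltaX >= kMinimumGestureLength && deltaY <= kMaximumVariance) {
                if (pageNumber < CGPDFDocumentGetNumberOfPages(pdfFajl)) {
                    pageNumber++;
                }
                [self setNeedsDisplay];   // ...and drawing happens in drawRect:
            }
        }

        - (void)drawRect:(CGRect)rect {
            [self nacrtajNovuStranicu:UIGraphicsGetCurrentContext()];
        }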

    Read the article

  • Navigation view system with webview problem with touches!

    - by Gonçalo Falcão
    Hello, I have searched everywhere and I can't figure this out. I have a tab bar controller with five navigation controllers. In one of the navigation controllers I have a view with a table view inside, and when I tap an item I push a new view. That view has this hierarchy:

        view
          - webview
            - view

    I created that second view (it is transparent) because I need to handle a single tap to hide my toolbar and navigation bar, and the webview was eating all the touches. I put that view in place and implemented this on the view controller:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if (touch.tapCount == 2) {
                [NSObject cancelPreviousPerformRequestsWithTarget:self];
            }
            [[wv.subviews objectAtIndex:0] touchesBegan:touches withEvent:event];
            [super touchesBegan:touches withEvent:event];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            [[wv.subviews objectAtIndex:0] touchesMoved:touches withEvent:event];
            [super touchesMoved:touches withEvent:event];
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if (touch.tapCount == 1) {
                [self performSelector:@selector(hideBars) withObject:nil afterDelay:0.3];
            }
            [[wv.subviews objectAtIndex:0] touchesEnded:touches withEvent:event];
            [super touchesEnded:touches withEvent:event];
        }

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
            [[wv.subviews objectAtIndex:0] touchesCancelled:touches withEvent:event];
            [super touchesCancelled:touches withEvent:event];
        }

    wv is my UIWebView IBOutlet. Now I can get the touches in my controller and send them to my webview, so I thought everything was working; I am able to scroll. But when the page has links, I am not able to click them, and the webview does detect the links (I have tested that). Is there some other way to implement this so the links receive the touches, or should I change this workaround for hiding the toolbars so I can keep the full functionality of the webview? Thanks for the help in advance.
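    One alternative worth sketching (my suggestion, assuming iOS 3.2+ for gesture recognizers and the hideBars method from the post): drop the transparent overlay entirely and attach a non-blocking tap recognizer to the web view, so link taps still reach the page:

        UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
            initWithTarget:self action:@selector(hideBars)];
        tap.numberOfTapsRequired = 1;
        tap.cancelsTouchesInView = NO;   // let the web view handle the touch too
        [wv addGestureRecognizer:tap];
        [tap release];

    Depending on the web view's internal recognizers, the controller may also need to act as the recognizer's delegate and return YES from gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer:.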

    Read the article

  • Add double tap action (presentModalViewController) to UIScrollView

    - by R.J.
    I have been wrestling with this issue for a while now and cannot seem to get the following touchesEnded method to execute within a UIScrollView. I have read on many of the forums that UIScrollView takes control of all touch events unless it is subclassed, but I cannot seem to get the code right (I'm still new to the SDK). Basically I have a scroll view made up of several UIImageViews, and I currently have scrolling with paging (much like the Photos app). I have been studying the ScrollingMadness example without success. All I want is for a double tap anywhere in the UIScrollView to presentModalViewController back to my info page, i.e.:

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            NSSet *allTouches = [event allTouches];
            switch ([allTouches count]) {
                case 1: { // one-finger touch
                    UITouch *touch = [[allTouches allObjects] objectAtIndex:0];
                    if ([touch tapCount] == 2) {
                        InfoButtonViewController *scroll =
                            [[InfoButtonViewController alloc] initWithNibName:nil bundle:nil];
                        scroll.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
                        [self presentModalViewController:scroll animated:YES];
                        [scroll release];
                    }
                }
            }
        }

    Any code assistance would be greatly appreciated. The UIScrollView is implemented as follows (let me know if I need to provide additional details). Thank you in advance...

        // MyViewController.h
        @interface MyViewController : UIViewController {
        }
        @end
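    For reference, a minimal sketch of the subclassing approach the forums describe (my own naming, modeled on the ScrollingMadness technique the asker mentions; the notification wiring is an assumption, not part of the original post):

        // TapThroughScrollView.h/.m -- a UIScrollView that reports double taps.
        @interface TapThroughScrollView : UIScrollView
        @end

        @implementation TapThroughScrollView
        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            if (touch.tapCount == 2) {
                // Hand the event to whoever presents the modal view controller.
                [[NSNotificationCenter defaultCenter]
                    postNotificationName:@"ScrollViewDoubleTapped" object:self];
            }
            [super touchesEnded:touches withEvent:event];
        }
        @end

    The view controller would then observe ScrollViewDoubleTapped and call presentModalViewController: there, since presenting is a controller responsibility rather than the scroll view's.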

    Read the article

  • Problem with Flash in a WebBrowser in a WinForm

    - by fgnt
    I have the oddest problem (but aren't all programming problems odd?). I have a WinForm that contains a WebBrowser object that opens a website with Flash on it. This WinForm is running on a touchscreen computer (I can't find the brand or model number). Here is what I know:

    - Flash objects embedded in a website that is accessed via the WebBrowser object in my WinForm do not function properly.
    - Said Flash objects only react to the first click on them. The website opens, and if I hit a button, that button works, but nothing afterward works within the Flash object. If my first click misses a button, nothing works thereafter.
    - Trying to click a Flash button gives the same response as just hovering over the button.
    - This isn't a problem with the touch part of the touchscreen, as using a mouse also gives the same not-working-right response.
    - This isn't a problem with the web page, as I can open Internet Explorer on the same computer and navigate the page just fine from there.
    - The program also works 100% right on my personal computer, so it shouldn't be the program's fault. If it's not the touchscreen's fault and not the program's fault, I can't blame anything right now.
    - The exact same program worked 100% on our old touchscreen (which was having other problems, so we had to get rid of it).

    Oh, also, surfing a 'normal' web page in the WebBrowser control in the WinForm works just fine.

    Read the article

  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitudes/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision, and for whatever reason (I had them as floats in Core Data), it's rounding them and not giving me the exact values back. I tried the "decimal" type with no luck either. Are NSStrings my only other option?

    EDIT -- the NSManagedObject:

        @interface Event : NSManagedObject {
        }
        @property (nonatomic, retain) NSDecimalNumber *dec;
        @property (nonatomic, retain) NSDate *timeStamp;
        @property (nonatomic, retain) NSNumber *flo;
        @property (nonatomic, retain) NSNumber *doub;

    Here's the code that stores a sample number into Core Data:

        NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];

    Code to access it again:

        NSNumber *n = [managedObject valueForKey:@"dec"];
        NSNumber *f = [managedObject valueForKey:@"flo"];
        NSNumber *d = [managedObject valueForKey:@"doub"];

    Printed values:

        Printing description of n: -97.1234567890124
        Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
        Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
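    Some context on what the printout shows (my note, not part of the question): a 64-bit double carries only about 15-17 significant decimal digits, so -97.12345678901234567890... cannot round-trip through a float or double attribute more precisely than -97.12345678901235..., which is exactly what the CFNumber output above shows. For map coordinates that is already far below a micrometer, but if every digit must survive, a sketch of keeping the attribute as Decimal and reading it back as NSDecimalNumber (which holds up to 38 digits of mantissa) without dropping to double:

        // Assumes the Core Data attribute "dec" is declared as type Decimal.
        NSDecimalNumber *stored =
            [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];
        [managedObject setValue:stored forKey:@"dec"];

        // Read it back without converting through double:
        NSDecimalNumber *exact = [managedObject valueForKey:@"dec"];
        NSLog(@"%@", [exact stringValue]);   // keeps far more digits than %f printing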

    Read the article

  • Developing cross platform mobile application

    - by sohilv
    More and more mobile platforms are being launched, and SDKs are available to developers. There are various mobile platforms available (Android, iOS, Moblin, Windows Phone 7, RIM, Symbian, bada, Maemo, etc.), and making a cross-platform application is a headache for developers. I am looking for the things the platforms have in common, which will help developers who want to port an application to all platforms: the different screen resolutions, input methods, OpenGL support, etc. Please share details that you know for any of the platforms. Also, is it possible to write code in HTML (a widget-type thing) and load it into a native application? I know that in Android we can add a web view to the application by calling setContentView(view); see the sketch after the list below. Please share the class details for adding an HTML view into a native application for the different platforms that you know. The purpose of this thread is to share common details across developers. Marking as community wiki.

    Cross-platform tools & libraries:

    - XMLVM and iSpectrum (cross-compile Java code from an Android app, or create one from scratch)
    - PhoneGap (cross-platform mobile apps)
    - Titanium (build native mobile and desktop apps with web technologies)
    - MonoTouch (C# for iPhone)
    - Rhomobile - http://rhomobile.com/ (samples are here: http://github.com/rhomobile/rhodes-system-api-samples)
    - Sencha Touch - an HTML5 mobile app framework for web apps that look and feel native on Apple iOS and Google Android touchscreen devices - http://www.sencha.com/products/touch/
    - Corona - iPhone/iPad/Android cross-platform library. Too awesome. http://anscamobile.com/corona/
    - A guide to porting an existing Android app to Windows Phone 7: http://windowsphone.interoperabilitybridges.com/articles/windows-phone-7-guide-for-iphone-application-developers
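    As a concrete version of the Android case mentioned above (a minimal sketch; the asset path is a placeholder):

        // Inside an Activity's onCreate: show local HTML in a native app via WebView.
        WebView view = new WebView(this);
        view.getSettings().setJavaScriptEnabled(true);  // needed for widget-style HTML apps
        view.loadUrl("file:///android_asset/index.html");
        setContentView(view);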

    Read the article

  • Why is it not possible to modify a file in a directory where I have read/write group rights?

    - by sighter
    I am currently messing around on my Linux system, and now I have the following situation. The directory /srv/http has the following permissions set:

        drwxrwxr-x 2 root httpdev  80 Jun 13 11:48 ./
        drwxr-xr-x 6 root root    152 Mar 26 13:56 ../
        -rwxrwxr-x 1 root httpdev   8 Jun 13 11:48 index.html*

    I created the group httpdev beforehand with the command:

        groupadd httpdev

    and added my user sighter with:

        gpasswd -a sighter httpdev

    Then I set the permissions as above using the chown and chmod commands. But now I am not allowed to modify the index.html file, or to create a new file with touch as user sighter, like this:

        <sighter [bassment] ~http> touch hallo.php
        touch: cannot touch `hallo.php': Permission denied

    What am I getting wrong? I was expecting to be able to do what I want there, since the group has all the rights. The following output is for your information:

        <sighter [bassment] ~http> cat /etc/group | grep sighter
        ...
        httpdev:x:1000:sighter
        ...

    The Linux distro used is Arch Linux. Thanks for all answers :) Greetz, Sascha
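    One likely explanation, offered as a guess rather than a confirmed answer: group membership is evaluated at login, so a shell that was already open when gpasswd ran still carries the old group list. Checking and refreshing it looks like this:

        id                  # groups of the *current* shell - may not list httpdev yet
        groups sighter      # groups as recorded in /etc/group - should list httpdev
        newgrp httpdev      # start a subshell with the new group active
        touch /srv/http/hallo.php   # should now succeed

    Alternatively, logging out and back in applies the new membership to every new shell.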

    Read the article

  • How to add images while moving finger across UIView?

    - by Luke Irvin
    I am having issues adding an image across a view while moving a finger across the screen. Currently it adds the image multiple times, but squeezes them too close together and doesn't really follow my touch. Here is what I am trying:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            touchPoint = [touch locationInView:imageView];

            if (touchPoint.x > 0 && touchPoint.y > 0) {
                _aImageView = [[UIImageView alloc] initWithImage:aImage];
                _aImageView.multipleTouchEnabled = YES;
                _aImageView.userInteractionEnabled = YES;
                [_aImageView setFrame:CGRectMake(touchPoint.x, touchPoint.y, 80.0, 80.0)];
                [imageView addSubview:_aImageView];
                [_aImageView release];
            }
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            [self touchesBegan:touches withEvent:event];
        }

    Any suggestions are much appreciated.

    EDIT -- what I want: after taking or choosing an image, the user can then select another image from a list. I want the user to touch and move their finger across the view, and the selected image will appear where they drag their finger, without overlapping, in each location.
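    A hedged tweak (my suggestion, using a hypothetical lastStamp CGPoint ivar): only stamp a new image once the finger has moved at least one image-width away from the previous stamp, which prevents the pile-up and makes the trail follow the touch:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            CGPoint p = [[touches anyObject] locationInView:imageView];
            CGFloat dx = p.x - lastStamp.x;
            CGFloat dy = p.y - lastStamp.y;
            if (sqrtf(dx * dx + dy * dy) < 80.0f) return;  // 80pt = image size used above
            lastStamp = p;
            [self touchesBegan:touches withEvent:event];   // reuse the stamping code
        }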

    Read the article

  • How to display buttons after the enterFrame event is over in Corona?

    - by user1463542
    I am trying to display two buttons, called countd_again and main_menu, after my enterFrame event is over, but I can't see those buttons once the enterFrame event has finished. Can you check my code, please? I also want to add a new event listener for the buttons, using director and scenes.

        module(..., package.seeall)

        function new()
            local localGroup = display.newGroup();
            display.setStatusBar(display.HiddenStatusBar)

            local background = display.newImage("background.png")
            start = os.time()
            cnt = 1

            local countd_again = display.newImage("yeniden.png")
            countd_again.x = 100
            countd_again.y = 100
            countd_again.isVisible = false;
            countd_again.alpha = 0;
            countd_again.scene = "helloWorld";

            local main_menu = display.newImage("anamenu.png")
            main_menu.x = 100
            main_menu.y = 300
            main_menu.isVisible = false;
            main_menu.alpha = 0;
            main_menu.scene = "helloWorld"

            -- listener function
            local function onEveryFrame( event )
                if (cnt ~= 0) then
                    cnt = 3 - (os.time() - start)
                    minute = math.floor(cnt / 60)
                    second = cnt % 60
                    --print(minute, second)
                    minTxt = display.newText(minute, 50, 50, nil, 100)
                    secTxt = display.newText(second, 250, 50, nil, 100)
                    transition.to(minTxt, {time=100, alpha=0})
                    transition.to(secTxt, {time=100, alpha=0})
                else
                    Runtime:removeEventListener("enterFrame", onEveryFrame)
                    countd_again.isVisible = true;
                    main_menu.isVisible = true;
                    transition.to(countd_again, {time=500, alpha=1});
                    transition.to(main_menu, {time=500, alpha=1});
                    countd_again:addEventListener("touch", changeScene)
                    main_menu:addEventListener("touch", changeScene)
                end
            end

            -- assign the above function as an "enterFrame" listener
            Runtime:addEventListener( "enterFrame", onEveryFrame )

            function changeScene(e)
                if (e.phase == "ended") then
                    director:changeScene(e.target.scene);
                end
            end

            countd_again:addEventListener("touch", changeScene)
            main_menu:addEventListener("touch", changeScene)

            localGroup:insert(countd_again)
            localGroup:insert(main_menu)
            localGroup:insert(background)

            return localGroup;
        end
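    One likely culprit, offered as a guess from reading the code rather than a tested fix: Corona stacks display objects in insertion order, so inserting background into localGroup last places it on top of both buttons, hiding them even after they become visible. Reordering the inserts would put the buttons above the background:

        -- Insert the background first so it sits *behind* the buttons.
        localGroup:insert(background)
        localGroup:insert(countd_again)
        localGroup:insert(main_menu)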

    Read the article

  • Android - Turn off display without triggering sleep/lock screen - Turn on with Touchscreen

    - by NebulaSleuth
    I have been trying to find a way to turn off the display and wake up when the user touches the touch screen. The device is in an embedded environment where the device is a tablet and the user does not have access to anything except the touch screen (no buttons at all). It is connected to power so the battery won't be a problem, but when I detect no activity I want to turn off the screen so it isn't staring them in the face all day and doesn't reduce the life of the LCD backlight. I maintain a wakelock permanently and decide when to sleep myself. The problem is that when I turn off the screen using:

    WindowManager.LayoutParams params = getWindow().getAttributes();
    params.screenBrightness = 0;
    getWindow().setAttributes(params);

    the activity gets paused and stopped, and the unit does not respond to a touch to wake it up. You need to press the power button, at which point the "slide to unlock" screen shows up. I want to turn off the display and then stay running so I can detect a touch screen event and turn the display back on. I also tried turning the display to a brightness of 0.1, which works on some devices, but the device I need it to work on only "dims" the display. I also tried this:

    // First remove my FULL wakelock,
    // then acquire a partial wake lock (which should turn off the display)
    PowerManager.WakeLock wl = manager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "Your Tag");
    wl.acquire();

    However, this method does not turn off the display.
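    One workaround sometimes used when screenBrightness = 0 hands control back to the system is to dim the window to its minimum instead and cover the activity with an opaque black view, so the activity keeps running and keeps receiving touches. A sketch, assuming a hypothetical full-screen child view named blackout defined in the layout:

    // Dim as far as possible without releasing the screen; 0.0f turns the
    // display over to the system on many devices, so use a near-zero value.
    WindowManager.LayoutParams params = getWindow().getAttributes();
    params.screenBrightness = 0.01f;
    getWindow().setAttributes(params);
    blackout.setVisibility(View.VISIBLE); // opaque black view over the whole activity

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Any touch restores normal brightness and hides the overlay.
        WindowManager.LayoutParams p = getWindow().getAttributes();
        p.screenBrightness = WindowManager.LayoutParams.BRIGHTNESS_OVERRIDE_NONE;
        getWindow().setAttributes(p);
        blackout.setVisibility(View.GONE);
        return true;
    }

    Whether the backlight actually switches off at 0.01f is device-dependent, so treat this as a sketch to test on the target tablet rather than a guaranteed fix.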

    Read the article

  • Square collision detection problem (iPhone).

    - by thyrgle
    Hi, I know I've probably posted three questions related to this and then deleted them, but that's only because I solved them before I got an answer. This one, though, I cannot solve, and I don't believe it is that hard compared to the others. So, without further ado, here is my problem: I am using Cocos2d, and one of its major shortcomings is that it doesn't have buttons. To compensate for the lack of buttons, I am trying to detect whether a touch, when it ended, collided with a square (the button). Here is my code:

    - (void)ccTouchesEnded:(NSSet*)touches withEvent:(UIEvent*)event {
        UITouch *touch = [touches anyObject];
        CGPoint location = [touch locationInView:touch.view];
        NSLog(@"%f", 240-location.y);
        if (isReady == YES) {
            if (((240-location.y) <= (240-StartButton.position.x - 100) ||
                 -(240-location.y) >= (240-StartButton.position.x) + 100) &&
                ((160-location.x) <= (160-StartButton.position.y) - 25 ||
                 (160-location.x) >= (160-StartButton.position.y) + 25)) {
                NSLog(@"Coll:%f", 240-StartButton.position.x);
                CCScene *scene = [PlayScene node];
                [[CCDirector sharedDirector] replaceScene:[CCZoomFlipAngularTransition transitionWithDuration:2.0f scene:scene orientation:kOrientationRightOver]];
            }
        }
    }

    Do you know what I am doing wrong?
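    The mixed <= / >= comparisons joined with || make this hit test pass almost everywhere. It is usually simpler to convert the touch into GL coordinates and let CGRectContainsPoint do the work; a sketch, assuming a 200x50 button centered on StartButton.position:

    - (void)ccTouchesEnded:(NSSet*)touches withEvent:(UIEvent*)event {
        UITouch *touch = [touches anyObject];
        // cocos2d's Y axis is flipped relative to UIKit's, so convert first.
        CGPoint location = [[CCDirector sharedDirector] convertToGL:[touch locationInView:touch.view]];
        // Build the button's bounding box from its center and assumed size.
        CGRect buttonRect = CGRectMake(StartButton.position.x - 100.0f,
                                       StartButton.position.y - 25.0f,
                                       200.0f, 50.0f);
        if (isReady && CGRectContainsPoint(buttonRect, location)) {
            CCScene *scene = [PlayScene node];
            [[CCDirector sharedDirector] replaceScene:[CCZoomFlipAngularTransition transitionWithDuration:2.0f scene:scene orientation:kOrientationRightOver]];
        }
    }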

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
    Let us start with humor! I think with this series on various reports we have come to a logical point: we have covered all the reports at the server level. That means the reports we saw were targeted at activities related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere: Patient: Doc, it hurts when I touch my head. Doc: Ok, go on. What else have you experienced? Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc. Doc: Hmmm … Patient: I feel it hurts when I touch anywhere in my body. Doc: Ahh … now I get it. You need a plaster on your finger, John. Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So, this is the first blog in the series where we start discussing database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four “disk” reports because they are similar: Disk Usage, Disk Usage by Top Tables, Disk Usage by Table, Disk Usage by Partition.

    Disk Usage This report shows several pieces of information about the database. Let us discuss them one by one. We have divided the output into 5 different sections. Section 1 shows a high-level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands: sys.data_spaces and DBCC SHOWFILESTATS. Sections 2 and 3 are pie charts, one for data file allocation and another for the transaction log file. The pie chart for “Data Files Space Usage (%)” shows the space consumed by data and indexes allocated to the SQL Server database, plus unallocated space, which is allocated to the SQL Server database but not yet filled with anything. “Transaction Log Space Usage (%)” uses DBCC SQLPERF(LOGSPACE) and shows how much empty space we have in the physical transaction log file. Section 4 shows the data from the Default Trace and looks at Event IDs 92, 93, 94, and 95, which are for “Data File Auto Grow”, “Log File Auto Grow”, “Data File Auto Shrink” and “Log File Auto Shrink” respectively. Here is an expanded view of that section. If the default trace is not enabled, then this section is replaced by the message “Trace Log is disabled”, as highlighted below. Section 5 of the report uses DBCC SHOWFILESTATS to get information. Here is the enhanced version of that section. It shows the physical layout of the file. In case you have in-memory objects in the database (from SQL Server 2014 onward), the report shows information about those as well. Here is a screenshot taken for a different database, which has an in-memory table. I have highlighted the new things which are only shown for in-memory databases. The new sections highlighted above use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for in-memory OLTP is ‘FX’ in sys.data_spaces. The next set of reports is targeted at getting information about a table and its storage. These reports can answer questions like: Which is the biggest table in the database? How many rows do we have in a table? Is there any table that has a lot of reserved space that is unused?
Which partition of the table holds the most data?

    Disk Usage by Top Tables This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.

    Disk Usage by Table This report is the same as the earlier report, with a few differences: the first report shows only 1000 rows, and the first report orders by the values in the DMV sys.dm_db_partition_stats, whereas the second orders by table name. Both reports have an interactive sort facility; we can click on any column header and change the sort order of the data.

    Disk Usage by Partition This report shows the distribution of the data in the table based on its partitions. It is similar to the previous output, now with the partition details. Here is the query taken from Profiler:

    SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
           (dense_rank() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
           a1.OBJECT_ID,
           a5.name AS [schema],
           a2.name,
           a1.index_id,
           a3.name AS index_name,
           a3.type_desc,
           a1.partition_number,
           a1.used_page_count * 8 AS total_used_pages,
           a1.reserved_page_count * 8 AS total_reserved_pages,
           a1.row_count
    FROM sys.dm_db_partition_stats a1
    INNER JOIN sys.all_objects a2 ON (a1.OBJECT_ID = a2.OBJECT_ID)
        AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
    INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id)
    LEFT OUTER JOIN sys.indexes a3 ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id))
    WHERE (SELECT MAX(DISTINCT partition_number)
           FROM sys.dm_db_partition_stats a4
           WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
      AND a2.TYPE <> N'S'
      AND a2.TYPE <> N'IT'
    ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

    Using all of the above reports, you should be able to get the usage of the database files and also the space used by tables. I think this is too much disk information for a single blog, and I hope you have used these reports in the past to get data. Do let me know if you found anything interesting using these reports in your environments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
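    For readers who want the "biggest table" answer without launching the report, a minimal T-SQL sketch against sys.dm_db_partition_stats, the same DMV the reports use (space reported in KB, assuming 8 KB pages):

    -- Top 10 tables by reserved space; rows counted from the heap/clustered index only.
    SELECT TOP (10)
           s.name AS [schema],
           t.name AS [table],
           SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows],
           SUM(ps.reserved_page_count) * 8 AS reserved_kb,
           SUM(ps.used_page_count) * 8 AS used_kb
    FROM sys.dm_db_partition_stats AS ps
    INNER JOIN sys.tables AS t ON t.[object_id] = ps.[object_id]
    INNER JOIN sys.schemas AS s ON s.[schema_id] = t.[schema_id]
    GROUP BY s.name, t.name
    ORDER BY reserved_kb DESC;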

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer, approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place.

    Physics/Movement The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (i.e., mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (i.e., north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity and uses very basic integration to change the entity's position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity.

    State Change and Update Packets When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first-person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily boost it towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. It seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the player's speed from its movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occurs, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client-side physics simulations (world state) of the remote clients and perform prediction and lag compensation.
Prediction and Lag Compensation As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag compensation is clearly required within the client's local simulation / physics engine.

    Problems I've been struggling with various algorithms, and have a number of questions and problems:

    Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. A state change is received by the client, the client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds with all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine.

    When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (i.e., state changes as well). Is the concept fundamentally flawed, or is it correct but needs some fixes / adjustments?

    What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine.

    What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall" which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
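    For what it's worth, one common compromise, kept in the question's own pseudocode style: on each update packet, extrapolate the server position by the estimated latency, then bleed the resulting error into the entity over the next frames instead of snapping. Names such as correctionRate and estimatedLatencySeconds are illustrative, not from any particular engine:

    // On receiving an "avatar position update" packet:
    predictedPos = packet.Position + packet.Velocity.Scale(estimatedLatencySeconds);
    entity.Error = predictedPos - entity.Position; // do not snap
    entity.Velocity = packet.Velocity;

    // In the per-frame physics update:
    entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds);
    // Remove a fraction of the error each frame; roughly 10% per frame converges
    // quickly without a visible teleport. Clamp so huge errors still snap.
    correction = entity.Error.Scale(correctionRate * ElapsedTime.Seconds);
    entity.Position += correction;
    entity.Error -= correction;

    This keeps the physics engine ignorant of packets - the network layer only ever adjusts Position, Velocity, and an Error accumulator on the entity.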

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it.  I remember the learning curve of my first unix machine back in the eighties; this ain't that. It is normal for me to do the first os install with a keyboard and low end monitor...you never know what you'll encounter out in the field.  The OS took to it like a fish to water.  I used a low end INTEL motherboard dp55w I gathered on the cheap, an 1157 i5 from the used bin, a pair of 6 gig ddr3 sticks, a Rosewill 550 watt power supply, a cheap used twenty buck sub 200g wd sata drive, a half working dvd burner and an asus fanless nvidia vid card, not a great one but sub 50.00 on newey eggey...I did have to hunt the ms forums for a key and of course to activate the thing; if dos would have needed this outmoded ritual, we would still be on cpm and osborne would be a household name. Of course little do people know that this ritual was common as far back as the seventies on att unix installs....not, but it was possible. I used to joke about when I ran a bbs, what hell would have been wrought had dos 3.2 machines been required to dial into my bbs to send fido mail to ms and wait for an acknowledgement.  All in all the thing was pushing a seven on the ms richter scale, not including the vid card; sadly that came in at just a tad over three....I wanted to evaluate it for a possible replacement on critical machines that in the past went down due to a vid card fan failure....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man.  Some production machines don't need any sort of vid; I will at least keep it on the maybe list for those. MTBF is a very important factor. Big box stores should put percentage-of-failure-within-24-months estimates on the outside of the carton for sure.  And a warning that the power supplies are already at their limit.  Let's face it, today even 550w can be iffy. A few neat eye candy improvements over the earlier windows are nice. The metro screen is nice; anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that was with no mouse or touch screen though.  Lucky me, I have been using windows since day one; I still have a copy of win 2.0 (and every other version) for no good reason.  Still, the old ix collection of disks is much larger; recompiling any kernel is another silly ritual, same machine, different day, same recompile...argh. Rh is my all time fav; mandrake was always missing something, like it rewrote the init file or something; novell is ok as long as you stay on the beaten path; and of course ubuntu normally recompiles with the same errors consistently....makes life easy that way....no errors on windows eight, just a screen that did not match the installed hardware. Naturally I alt-tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already. Keyboard cowboy funnin, and I was browsing the harddrive; nothing stunning there. I like that, means I can find stuff. Only I can't find what I want, the start button....the start menu is that first screen for touch tablets. No biggie for useruser; that is where they will want to be, I can see that. Admins won't want to be there; it is easy enough to get the control panel a bazillion other ways though, just not the start button. (see a pattern here?). 
    Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line, I love seven and I'll love eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions said "oh I get it, so every piece you put in there is basically a hundred bucks, right?")...ok, sure, pretty much, more or less, well, ya dude.  It will be WAY past october till I get a real touch screen, but I did pick up a pair of cheap tatungs so I can try the NEW main start screen.  I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on mars.  Ok, fine, they are way smallish, and I don't expect multitouch to work, but we are talking a few percent of the price of a new 21 inch viewsonic touch screen.  Will this OS be a game changer?  I don't know.  Bottom line, with all the pads and droids in the world, it is more of a catch-up move at first glance.  Not something ms is used to.  An app store?  I can see ms's motivation; the others have it.  I gather there will not be gadgets there; go ahead and see what ms did to the once populated gadget page...go ahead, google gadgets and take a gander. There used to be hundreds of gadgets; they are already gone.  They replaced gadgets?  Sort of. I'll drop that, it's a bit of a sore point for me.  More of interest was what happened when I downloaded stuff off codeplex and some other normal programs that I like, like orbitron, top o' my list!!...cardware it is...anyways, click on the exe, get a screen, normal for windows; this one indicated that I was not running a normal windows program and had a button to exit the install. Naw, I hit details; a hidden run program anyways came into view....great, my path to the normal windows has detected a program tha.....yea ok, acl is on, fine. Moving along, I got orbitron installed in record time and was tracking the iss on the newest Microsoft OS, beta of course; felt like the first time I setup bsd all those years ago...FUN!!...I suppose I gotta start to think about budgeting for the real os when it comes out in october; by then I should have a raspberry pi and be done with fedora remixed.  Of course that sounds like fun too!!  I would use this OS on a tablet or phone.  I don't like the idea of being herded to an app store, don't like that on anything; we are americans and want real choices, not marketed hype, lest you are younger with opm (other people's money).   This os would be neat on a zune, but I suspect the zune is a goner. I am rooting for microsoft; after all, their default password is not admin anymore, nor alpine, it's blank. Others force a password; my first fawn password was so long I could not even log into it with the password in front of me. Who the heck uses %$# anyways, and if I was writing a brute force attack, what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non qualified number, not a real clock speed)....AI is where it will be before too long; MS is on that path. Perhaps soon someone will sit down and write an app for the kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! sure. Blink, dammit, blink, dammit...... OPM no doubt. I like windows eight; we are moving forwards. Better keep a close eye on ubuntu.  
    The real clincher comes when open source becomes paid source......don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already.  More to come.......

    Read the article

  • external monitor for smart phones

    - by Kevin Nevin
    I use a web-based CRM and can view just enough of the information displayed once I am logged into the CRM on the HTC Touch Pro2. Ideally I would like to hook up an external portable monitor to the Touch Pro2 to get more out of this CRM.

    Read the article

  • How to use dpkg of busybox

    - by Daniel YC Lin
    I'm trying to install a package on a space-limited embedded system, using busybox's dpkg. To let dpkg work, I just created an empty status file: touch /var/lib/dpkg/status. But it still does not work:

    $ dpkg -i ntpdate_4.2.4p4+dfsg-8lenny3_sh4.deb
    dpkg: package ntpdate depends on netbase, which is not installed or flagged to be installed

    How do I flag netbase as installed?
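    busybox dpkg reads the same /var/lib/dpkg/status database as the full tool, so one way to flag netbase as installed is to append a minimal stanza to that file by hand. A sketch - the Version value is a placeholder, and which fields busybox's parser actually requires may vary, so test on the target image:

    cat >> /var/lib/dpkg/status <<'EOF'
    Package: netbase
    Status: install ok installed
    Version: 4.34
    Description: fake entry to satisfy the ntpdate dependency

    EOF

    Stanzas in the status file are separated by blank lines, which the trailing blank line above provides.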

    Read the article

< Previous Page | 134 135 136 137 138 139 140 141 142 143 144 145  | Next Page >