Search Results

Search found 4396 results on 176 pages for 'low poly'.


  • Is 1GB RAM with integrated graphics sufficient for Unity 3D on 12.04?

    - by Anwar Shah
    I have been using Ubuntu since Hardy Heron (8.04). I used Natty and Oneiric with Unity. But since I recently (more than a month ago now) upgraded my Ubuntu to Precise (12.04), the performance of my laptop has not been satisfactory. It is too unresponsive compared to older releases. For example, Unity in 12.04 is very unresponsive: sometimes it takes 2 seconds to show the Dash (which was not the case with Natty, though people always say that Natty's version of Unity was the buggiest). I am assuming that maybe my 1GB of RAM has become too low to run Unity in Precise. But I also think that, since Unity was improved in Precise, that may not be the case. So I am not sure. Do you have any ideas? Will upgrading RAM fix it? How much do I need if an upgrade is required?
    Laptop model: Lenovo 3000 Y410
    Graphics: Intel GMA X3100 on the Intel 965GM chipset
    RAM/Memory: 1 GB DDR2 (1 slot empty)
    Swap space: 1.1 GB
    Resolution: 1280x800 widescreen
    Shared RAM for graphics: 256 MB, as the output below suggests
        $ dmesg | grep AGP
        [ 0.825548] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xd0000000

    Read the article

  • Oracle ACEs in the House

    - by Justin Kestelyn
    As is customary, the Oracle ACEs have invaded the Oracle Develop Conference agenda. Why? Because Oracle ACE-dom is inherently a stamp of not only expertise, but a unique ability to make that expertise useful to others. Plus, they're a group of "fine blokes" (U.K. subjects, educate me: is that really a term?) Perhaps if you're not able to catch one of these sessions, you will be able to see the applicable ACE in action elsewhere, at a conference or user group meeting near you.
    Session ID | Session Title | Speaker, Company
    S313355 | Developing Large Oracle Application Development Framework 11g Applications | Andrejus Baranovskis, Red Samurai Consulting
    S316641 | Xenogenetics for PL/SQL: Infusing with Java Best Practices and Design Patterns | Lucas Jellema, AMIS; Alex Nuijten, AMIS
    S317171 | Building Secure Multimedia Web Applications: Tips and Techniques | Marcel Kratochvil, Piction; Melliyal Annamalai, Oracle
    S315660 | Database Applications Lifecycle Management | Marcelo Ochoa, Facultad de Ciencias Exactas
    S315689 | Building a High-Performance, Low-Bandwidth Web Architecture | Paul Dorsey, Dulcian, Inc.
    S316003 | Managing the Earthquake: Surviving Major Database Architecture Changes | Paul Dorsey, Dulcian, Inc.; Michael Rosenblum, Dulcian, Inc.
    S314869 | Introduction to Java: PL/SQL Developers Take Heart | Peter Koletzke, Quovera
    S316184 | Deploying Applications to Oracle WebLogic Server Using Oracle JDeveloper | Peter Koletzke, Quovera; Duncan Mills, Oracle
    S316597 | Using Collections in Oracle Application Express: The Definitive Intro | Raj Mattamal, Niantic Systems, LLC
    S313382 | Using Oracle Database 11g Release 2 in an Oracle Application Express Environment | Roel Hartman, Logica
    S313757 | Debugging with Oracle Application Express and Oracle SQL Developer | Dimitri Gielis, Sumneva
    S313759 | Using Oracle Application Express in Big Projects with Many Developers | Dimitri Gielis, Sumneva
    S313982 | Forms2Future: The Ongoing Journey into the Future for Oracle-Based Organizations | Lucas Jellema, AMIS; Peter Ebell, AMIS

    Read the article

  • A list of pros and cons to giving developers “Local Admin” privileges to their machines? [closed]

    - by Boden
    Possible Duplicate: Is local "User" rights enough or do developers need Local Administrator or Power User while coding? I currently work for a large utilities company which does not grant "Local Admin" access to developers. This is causing a lot of grief, as anything that requires elevated privileges needs to be done by the Desktop Support/Server teams. In some cases this can take several days and requires our developers to show why they need this access. I personally think that all developers should have local administration rights, and I am currently fighting with management to achieve this, but I would like to know what other people think. To that end, I would like to hear what people believe are the pros and cons of letting developers have local admin access to their machines. Here are some I have come up with:
    Pros:
    - Lost time is kept low, as developers can resolve issues that would normally require Local Admin
    - Evaluation of tools and software to improve productivity is possible
    - Desktop Support time is not wasted installing services and software on developers' PCs
    Cons:
    - Developers may install software on their local PC that could be malicious to others or inappropriate in a business environment
    - Desktop Support is required to support a PC that is not the norm
    - Development done with admin access may fail when promoted to another environment that does not have the same access level

    Read the article

  • Making a full-screen animation on Android? Should I use OPENGL?

    - by Roger Travis
    Say I need to make several full-screen animations, each consisting of about 500+ frames, similar to the Talking Tom app ( https://play.google.com/store/apps/details?id=com.outfit7.talkingtom2free ). The animation should play at a reasonable speed, supposedly not less than 20 fps, and the pictures should be of reasonable quality, not overly compressed. What method do you think I should use? So far I tried: storing each frame as a compressed JPEG; before the animation starts, loading each frame into a byte array; then, as the animation plays, decoding the corresponding byte array into a bitmap and drawing it on a SurfaceView. Problem: the speed is too low, usually about 5-10 fps. I have thought of two other options, as sketched below:
    - turning all animations into one movie file... but I guess there might be problems with starting, pausing and seeking to exactly the right frame... what do you think?
    - using OpenGL (while I have never worked with it before) to play the animation frame by frame. What do you think, would OpenGL be able to handle it? Thanks!
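    One way to raise the frame rate without touching OpenGL is to keep the JPEG decoding off the drawing thread. Below is a minimal, hedged sketch of that idea: a decoder thread fills a small ring buffer of bitmaps while the render loop only draws. All names here (FrameAnimator, the frame_%03d.jpg asset naming, the 8-slot buffer) are hypothetical illustrations, not taken from the question.

        // A decode-ahead render loop: a background thread decodes JPEG frames
        // into a small ring buffer while the drawing loop only draws.
        // Assumes frames are pre-sized to the screen and stored as assets
        // named "frame_000.jpg", "frame_001.jpg", ...
        import android.content.res.AssetManager;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.graphics.Canvas;
        import android.view.SurfaceHolder;
        import java.io.IOException;
        import java.util.concurrent.ArrayBlockingQueue;

        class FrameAnimator implements Runnable {
            private final SurfaceHolder holder;
            private final AssetManager assets;
            private final int frameCount;
            // Holds a few decoded frames so a slow decode does not stall drawing.
            private final ArrayBlockingQueue<Bitmap> buffer = new ArrayBlockingQueue<>(8);

            FrameAnimator(SurfaceHolder holder, AssetManager assets, int frameCount) {
                this.holder = holder;
                this.assets = assets;
                this.frameCount = frameCount;
            }

            @Override
            public void run() {
                Thread decoder = new Thread(() -> {
                    BitmapFactory.Options opts = new BitmapFactory.Options();
                    opts.inPreferredConfig = Bitmap.Config.RGB_565; // half the memory of ARGB_8888
                    for (int i = 0; i < frameCount; i++) {
                        try {
                            String name = String.format("frame_%03d.jpg", i);
                            buffer.put(BitmapFactory.decodeStream(assets.open(name), null, opts));
                        } catch (IOException | InterruptedException e) {
                            return;
                        }
                    }
                });
                decoder.start();
                for (int i = 0; i < frameCount; i++) {
                    try {
                        Bitmap frame = buffer.take();   // blocks until the next frame is ready
                        Canvas c = holder.lockCanvas();
                        if (c == null) return;          // surface destroyed
                        c.drawBitmap(frame, 0, 0, null);
                        holder.unlockCanvasAndPost(c);
                        frame.recycle();                // each frame is shown exactly once
                        Thread.sleep(50);               // crude pacing at ~20 fps
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }

    Whether this reaches 20 fps depends on the device's JPEG decode speed; if it does not, the movie-file or OpenGL-texture routes remain the fallbacks.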

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low-level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self-contained HTTP output. An ASP.NET handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline, writing the content into Response.OutputStream and separately sending the HTTP headers via the Response.Headers collection. The headers turned out to be the problem, and specifically HTTP cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: the HTTP response from the backend app returns a full set of HTTP headers plus the content. The ASP.NET handler reads the headers one at a time and then dumps them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the HTTP handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically I have a couple of headers, including a Set-Cookie header, and some output that gets written into the Response object:

        using System.Web;

        namespace wwThreads
        {
            public class Handler : IHttpHandler
            {
                /* NOTE:
                 *
                 * Run as a web.config-registered handler (see entry below).
                 *
                 * The best way to verify is to look at the HTTP headers in
                 * Fiddler or the Chrome/Firebug/IE tools and look for the
                 * WWTHREADSID cookie in the outgoing Response headers.
                 * (If the cookie is not there, you see the problem!)
                 */
                public void ProcessRequest(HttpContext context)
                {
                    HttpRequest request = context.Request;
                    HttpResponse response = context.Response;

                    // If ClearHeaders is used, the Set-Cookie header gets removed!
                    // If it is commented out, the header is sent...
                    response.ClearHeaders();
                    response.ClearContent();

                    // Demonstrate that other headers make it
                    response.AppendHeader("RequestId", "asdasdasd");

                    // This cookie gets removed when ClearHeaders above is called.
                    // When ClearHeaders is omitted above, the cookie renders.
                    response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

                    // *** This always works, even when the explicit
                    // Set-Cookie above fails and ClearHeaders is called
                    //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

                    response.Write(@"Output was created.<hr/>
                        Check output with Fiddler or an HTTP proxy to see whether the cookie was sent.");
                }

                public bool IsReusable
                {
                    get { return false; }
                }
            }
        }

    In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with:

        <add name=".ck_handler"
             path="handler.ck"
             verb="*"
             type="wwThreads.Handler"
             preCondition="integratedMode" />

    Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller.

    What's the problem exactly?

    The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader(). One of the headers in my app can be one or more Set-Cookie headers. I found that the Set-Cookie headers were not making it into the Response headers output.
    Here's the Chrome HTTP inspector trace: notice, no Set-Cookie header in the response headers! Running the very same request after removing the call to Response.ClearHeaders(), the cookie header shows up just fine. As you might expect, it took a while to track this down. At first I thought my backend was not sending the headers, but after closer checks I found that the headers were indeed set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    which in my live code is more dynamic (i.e. AppendHeader(token[0], token[1])) as it parses through the headers.

    Bizzaro Land: Response.ClearHeaders() causes the cookie to get stripped

    Now, here is where it really gets bizarre. The problem occurs only if:
    - Response.ClearHeaders() was called before headers are added
    - the handler is declared in web.config

    Clearly this is an edge of an edge case, but of course, knowing my relationship with Mr. Murphy, I ended up running smack into this problem. So in the code above, if you remove the call to ClearHeaders(), the cookie gets set! Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page or MVC controller it all works. Only in the HttpHandler configured through web.config does it fail! Cue the Twilight Zone music.

    Workarounds

    As is often the case, the fix for this, once you know the problem, is not too difficult. The difficulty lies in tracking down inconsistencies like this. Luckily there are a few simple workarounds for the cookie issue.

    Don't use AppendHeader for cookies

    The easiest and most obvious solution to this problem is simply to not use Response.AppendHeader() to set cookies. Duh! Under normal circumstances in application-level code there's rarely a reason to write out a cookie like this:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    but rather create the cookie using the Response.Cookies collection:

        response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

    Unfortunately, in my case, where I dynamically read headers from the original output and then dynamically write the header key-value pairs back programmatically into the Response.Headers collection, I don't actually look at each header specifically, so the cookie is just another header. My first thought was to simply trap for the Set-Cookie header, parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial; plus I don't really want to mess around with cookie values, which can be notoriously brittle.

    Don't use Response.ClearHeaders()

    The real mystery in all this is why calling Response.ClearHeaders() causes a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders(). There's all sorts of resetting going on, but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low-level interface directly into IIS, so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that.
    In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent calls were mainly there for safety, but after reviewing my code there really should never be a reason for headers to be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear them out:

        private void RemoveHeaders(HttpResponse response)
        {
            List<string> headers = new List<string>();
            foreach (string header in response.Headers)
            {
                headers.Add(header);
            }

            foreach (string header in headers)
            {
                response.Headers.Remove(header);
            }

            response.Cookies.Clear();
        }

    Now I can replace the call to Response.ClearHeaders() and I don't get the funky side effects from Response.ClearHeaders().

    Summary

    I realize this is a total edge case, as it occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher-level ASP.NET frameworks or even in ASHX handlers - only web.config-defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem, or even this post in a search, because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution, I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, an extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7

    Read the article

  • Guest Post: Using IronRuby and .NET to produce the 'Hello World of WPF'

    - by Eric Nelson
    [You might want to also read other guest posts on my blog – or contribute one?] On the 26th and 27th of March (2010), myself and Edd Morgan of Microsoft will be popping along to the Scottish Ruby Conference. I dabble with Ruby and I am a huge fan, whilst Edd is a "proper Ruby developer". Hence I asked Edd if he was interested in creating a guest post or two for my blog on IronRuby. This is the second of those posts. If you should stumble across this post and happen to be attending the Scottish Ruby Conference, then please do keep a look out for myself and Edd. We would both love to chat about all things Ruby and IronRuby. And… we should have (if Amazon is kind) a few books on IronRuby with us at the conference which will need to find a good home. Order on Amazon: http://bit.ly/ironrubyunleashed

    Using IronRuby and .NET to produce the 'Hello World of WPF'

    In my previous post I introduced, to a minor extent, IronRuby. I expanded a little on the basics by getting a Rails app up and running on this .NET implementation of the Ruby language — but there wasn't much to it! So now I would like to go from simply running a pre-existing project under IronRuby to developing a whole new application demonstrating the seamless interoperability between IronRuby and .NET. In particular, we'll be using WPF (Windows Presentation Foundation) — the component of the .NET Framework stack used to create rich media and graphical interfaces.

    Foundations of WPF

    To reiterate, WPF is the engine in the .NET Framework responsible for rendering rich user interfaces and other media. It's not the only collection of libraries in the framework with the power to do this — Windows Forms does the trick, too — but it is the most powerful and flexible. Put simply, WPF really excels when you need to employ eye candy. It's all about creating impact. Whether you're presenting a document, video, a data entry form, some kind of data visualisation (which I am most hopeful for, especially in terms of IronRuby - more on that later) or chaining all of the above with some flashy animations, you're likely to find that WPF gives you the most power when developing any of these for a Windows target. Let's demonstrate this with an example. I give you what I like to consider the 'hello, world' of WPF applications: the analogue clock.

        Today, over my lunch break, I created a WPF-based analogue clock using IronRuby... Any normal person would have just looked at their watch. - Twitter

    The Sample Application

    Click here to see this sample in full on GitHub. [The original post illustrates, in screenshots, using WPF from IronRuby to create a Clock class, invoking the Clock class, and the resulting clock window.] The above is by no means perfect (it was a lunch break), but I think it does the job of illustrating IronRuby's interoperability with WPF using a familiar data visualisation. I'm sure you'll want to dissect the code yourself, but allow me to step through the important bits. (By the way, feel free to run this through ir first to see what actually happens.) Now we're using IronRuby - unlike my previous post where we took pure Ruby code and ran it through ir, the IronRuby interpreter, to demonstrate compatibility. The main thing of note is the very distinct parallels between .NET namespaces and Ruby modules, and between .NET classes and Ruby classes. I guess there's not much to say about it other than at this point, you may as well be working with a purely Ruby graphics-drawing library.
    You're instantiating .NET objects, but you're doing it with the standard .new method you know from Ruby as Object#new — although the root object of all your IronRuby objects isn't actually Object, it's System.Object. You're calling methods on these objects (and classes, for example in the call to System.Windows.Controls.Canvas.SetZIndex()) using the underscored, lowercase convention established for the Ruby language. The integration is so seamless. The fact that you're using a dynamic language on top of .NET's CLR is completely abstracted from you, allowing you to just build your software.

    A Brief Note on Events

    Events are a big part of developing client applications in .NET, as well as under every other environment I can think of. In case you aren't aware, event-driven programming is essentially the practice of telling your code to call a particular method, or other chunk of code (a delegate), when something happens at an unpredictable time. You can never predict when a user is going to click a button, move their mouse or perform any other kind of input, so the advent of the GUI is what necessitated event-driven programming. This is where one of my favourite aspects of the Ruby language, blocks, can really help us. In traditional C#, for instance, you may subscribe to an event (assign a block of code to execute when an event occurs) in one of two ways: by passing a reference to a named method, or by providing an anonymous code block. You'd be right for seeing the parallel here with Ruby's concept of blocks, Procs and lambdas. As demonstrated at the very end of this rather basic script, we are using .NET's System.Timers.Timer to (attempt to) update the clock every second (I know it's probably not the best way of doing this, but for example's sake).

        Note: Diverting a little from what I said above, the ticking of a clock is very predictable, yet we still use the event our Timer throws to do this updating, as one of many ways to perform that task outside of the main thread.

    You'll see that all that's needed to assign a block of code to be triggered on an event is to provide that block to the method named after the event as it is known to the CLR. The drawback is that this only allows the delegation of one code block to each event. You may use the add method to subscribe multiple handlers to that event, pushing each onto the end of a queue. Like so:

        def tick
          puts "tick tock"
        end
        timer.elapsed.add method(:tick)

        timer.elapsed.add proc { puts "tick tock" }

        tick_handler = lambda { puts "tick tock" }
        timer.elapsed.add(tick_handler)

    The ability to just provide a block of code as an event handler helps IronRuby towards that very important term I keep throwing around: low ceremony. Anonymous methods are, of course, available in other more conventional .NET languages such as C# and VB but, as usual, they feel ever so much more elegant and natural in IronRuby.

        Note: Whether it's a named method or an anonymous chunk o' code, the block you delegate to the handling of an event can take arguments - commonly, a sender object and some args.

    Another Brief Note on Verbosity

    Personally, I don't mind verbose chaining of references in my code as long as it doesn't interfere with performance - as evidenced in the example above. While I love clean code, there's a certain feeling of safety that comes with the terse explicitness of long-winded addressing and the describing of objects, as opposed to ambiguity (not unlike this sentence).
    However, when working with IronRuby, even I grow tired of typing System::Whatever::Something. Some people enjoy simply assuming namespaces and forgetting about them, regardless of the language they're using. Don't worry, IronRuby has you covered. It is completely possible to, with a call to include, bring the contents of a .NET-converted module into the context of your IronRuby code - just as you would if you wanted to bring in an 'organic' Ruby module. To refactor the style of the above example, I could place the following at the top of my Clock class:

        class Clock
          include System::Windows::Shapes
          include System::Windows::Media
          include System::Windows::Threading
          # and so on...

    And by doing so, reduce calls to System::Windows::Shapes::Ellipse.new to simply Ellipse.new, or references to System::Windows::Threading::DispatcherPriority.Render to a friendlier DispatcherPriority.Render.

    Conclusion

    I hope by now you can understand better how IronRuby interoperates with .NET and how you can harness the power of the .NET Framework with the dynamic nature and elegant idioms of the Ruby language. The manner and parlance of Ruby that makes it a joy to work with sets of data is, of course, present in IronRuby — couple that with WPF's capability to produce great graphics quickly and easily, and I hope you can visualise the possibilities of data visualisation using these two things. Using IronRuby and WPF together to create visual representations of data and infographics is very exciting to me. Although today, with this project, we're only presenting one simple piece of information - the time - the potential is much grander. My day-to-day job is centred around software development and UI design, specifically in the realm of healthcare, and if you were to pay a visit to our office you would behold, directly above my desk, a large plasma TV with a constantly rotating, animated slideshow of charts and infographics to help members of our team do their jobs. It's an app powered by WPF which never fails to spark some conversation with visitors whose gaze has been hooked. If only it were written in IronRuby, the pleasantly low ceremony and reduced pre-processing time for my brain would have helped greatly. - Edd Morgan

    Related Links: Getting PHP and Ruby working on Windows Azure and SQL Azure

    Read the article

  • Duplicate content issue after URL-change with 301-redirects

    - by David
    We have the following problem: we changed all URLs on our site from oldURL.html to newURL.html and set up 301 redirects (ca. 600 URLs). Google re-crawled our site and indexed all the new URLs (newURL.html), but didn't crawl the old URLs (oldURL.html) again, as there were no internal links pointing at those addresses anymore after the URL change. This resulted in massive ranking drops, etc., because (i) Google thought oldURL.html has exactly the same content as newURL.html, causing duplicate content issues, and (ii) Google did not transfer the juice from oldURL to newURL, because the 301 redirect was never noticed. Now we have reset all internal links to the old URLs again, which then redirect to the new URLs, in the hope that Google will re-crawl the pages once there are internal links pointing at them. This is partially happening, but at a really low speed, so it would take multiple months for Google to notice all the redirects. I guess that's because Google thinks: "Aah, I already know oldURL.html, so no need to re-crawl it." Possible solutions we thought of are listed below:
    - Submitting as many of the old URLs to the index as possible via Webmaster Tools, to manually trigger a crawl. (Doing that already.)
    - Submitting a sitemap with all the old URLs - but we are not sure if that is a good idea, because Google does not seem to like 301 redirects in a sitemap...
    Both solutions are not perfect, and we cannot wait for three months just to regain our old rankings. What are your ideas? Best, David
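    For what it's worth, if the sitemap route is tried, a minimal sitemap listing the old addresses would look like the sketch below (example.com stands in for the real domain); it gives Google a reason to request each oldURL.html and see the 301. This only illustrates the standard sitemaps.org format, and makes no claim about how Google weighs redirecting entries.

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- one <url> entry per redirected address -->
          <url><loc>http://www.example.com/oldURL.html</loc></url>
          <url><loc>http://www.example.com/anotherOldURL.html</loc></url>
        </urlset>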

    Read the article

  • Ubuntu eats itself after I followed updater instruction

    - by Tony Martin
    The updater (I assume) put a no-entry style alert icon on the panel which informed me that certain package dependencies were not up to snuff. Upgrades were thereafter only partial. The dialogue advised that I (and this is from noob memory) sudo apt-get install -f. I did this, typed in the confirmation phrase, and watched apt-get systematically remove every component of Linux, both the stuff I had installed and the core Ubuntu packages. I could only assume at this stage that this was in preparation for a fresh install, but of course, I know better now. There's much complaint about Windows, but I've never met with advice from Microsoft tools to wipe out the operating system because of a couple of missing .dlls. So what gives? This was a 64-bit install of 12.04. All that is left is GRUB pointing to a couple of Windows recovery partitions on the hard drive. I'm tempted, but I have hopes of recovering the data that I had enough misguided faith to trust to the Linux ext4 partition. I've tried pen-driving back into it with a 32-bit ISO, but I'm simply informed that Ubuntu is running in low-graphics mode and get to watch the dots cycle indefinitely. EDIT: Thanks for the advice regarding the request for more detail. I've got onto the machine with a 64-bit stick and can see the file structure left behind by the installation. My first instinct was to run the installer from the stick, but it did not seem to offer a recovery option. My question then: is there a way to recover the current installation so that if I reinstall the packages I had, they will pick up the original settings? I'm particularly worried about losing email from Evolution - the rest I could probably lash back together. I would also be interested in how this disaster came about; I see people in the know recommending this same procedure in similar circumstances. Thanks for your attention, Tony Martin

    Read the article

  • Debugging .NET 2.0 assembly from unmanaged code in VS2010?

    - by Rick Strahl
    I’ve run into a serious snag trying to debug a .NET 2.0 assembly that is called from unmanaged code in Visual Studio 2010. I maintain a host of components that use COM interop and custom .NET runtime hosting, and ever since installing Visual Studio 2010 I’ve been utterly blocked by VS 2010’s apparent inability to debug .NET 2.0 assemblies when launching through unmanaged code. Here’s what I’m actually doing (a simplified scenario to demonstrate):
    - I have a .NET 2.0 assembly that is compiled for COM interop
    - Compile the project with a .NET 2.0 target and register it for COM interop
    - Set a breakpoint in the .NET component in one of the class methods
    - Instantiate the .NET component via COM interop and call the method
    The result is that the COM call works fine but the debugger never triggers on the breakpoint. If I now take that same assembly and target it at .NET 4.0, without any other changes, everything works as expected – the breakpoint set in the assembly project triggers just fine. The easy answer to this problem seems to be “Just switch to .NET 4.0”, but unfortunately the application and the way the runtime is actually hosted have a few complications. Specifically, the runtime hosting uses the .NET 2.0 hosting APIs, and apparently the only reliable way to host the .NET 4.0 runtime is to use the new hosting APIs that are provided only with .NET 4.0 (which all by itself is lame, lame, lame, as the promise of backwards compatibility is once again broken by .NET). So for the moment I need to continue using the .NET 2.0 hosting APIs due to application requirements. I’ve been searching high and low and experimenting back and forth, and posted a few questions on the MSDN forums, but haven’t gotten any hints on what might be causing the apparent failure of Visual Studio 2010 to debug my .NET 2.0 assembly properly when called from unmanaged code. Incidentally, debugging .NET 2.0 targeted assemblies works fine when running with a managed startup application – it seems the issue is specific to unmanaged code starting up. My particular issue is with custom runtime hosting, which at first I thought was the problem. But the same issue manifests when using COM interop against a .NET 2.0 assembly, so the hosting is probably not the issue. Curious if anybody has any ideas on what could be causing the lack of debugging in this scenario? © Rick Strahl, West Wind Technologies, 2005-2010

    Read the article

  • Wacom Bamboo CTH460L issues

    - by Robert Smith
    I recently bought a Wacom Bamboo Pen & Touch CTH460L. I installed doctormo's PPA; however, the pen functionality didn't work and the touch was very glitchy (when I touched it, it immediately double-clicked and began to drag elements on the screen). I tried to configure it using the wacom-utility package in the Synaptic Package Manager (version 1.21-1), but that didn't work either. Then I followed this post (#621, written by aaaalex), and after some problems trying to restart Ubuntu (graphics-related problems), the pen works fine (it could be better, though) but the touch functionality doesn't work anymore. Currently I have installed xserver-xorg-input-wacom (1:0.10.11-0ubuntu7), wacom-dkms (0.8.10.2-1ubuntu1) and wacom-utility. The Wacom Utility only displays an "options" field under "Wacom BambooPT 2FG 4X5" but no other option to configure it. What is the correct way to get this tablet working on Ubuntu 10.04? By the way, currently I can't start Ubuntu properly when the tablet is connected (in that case, Ubuntu starts in low-graphics mode); I need to connect it later. UPDATE: I uninstalled xserver-xorg-input-wacom and wacom-utility because one of them prevented Ubuntu from starting normally. I only re-installed wacom-dkms 0.8.10.2-1ubuntu1. The pen is working but there is no touch functionality. The side buttons don't work either. Thanks in advance.

    Read the article

  • Advice on selecting programming languages to concentrate on? (2nd year IT security student)

    - by Tyler J Fisher
    I'm in the process of considering which programming languages I should devote the majority of my coding studies to. I'm a 2nd-year CS student, majoring in IT security.
    What I want to do/work with:
    - Intelligence gathering
    - Relational databases
    - Virus design
    - Snort network IPS
    Current coding experience (what I'm going to keep):
    - Java - intermediate
    - HTML5 - intermediate
    - SQL (MySQL, Oracle 11g) - basic
    - BASH - basic
    I'm going to need to learn (at least) one of the following languages in order to be successful in my field.
    Languages to add (at least 1):
    - Ruby (+Metasploit)
    - C++ (virus design, low-level driver interaction, computationally intensive applications)
    - Python (import ALL the things)
    My dilemma: if I diversify too broadly, I won't be able to focus on, and improve in, a specific niche. Does anyone have any advice as to how I should select a language? What I'm considering, and why: I'm leaning towards Ruby because of Metasploit support, despite its lower efficiency compared to Python. Any suggestions based on real-world experience? Should I focus on Ruby, Python, or C++? Both Ruby and Python have been regarded as syntactically similar to Java, which my degree is based around. I'm going to be studying C++ in two years as a component of my malicious code class. Thanks, Tyler

    Read the article

  • Java Champion Jorge Vargas on Extreme Programming, Geolocalization, and Latin American Programmers

    - by Janice J. Heiss
    In a new interview, up on otn/java, titled “An Interview with Java Champion Jorge Vargas,” Jorge Vargas, a leading Mexican developer, discusses the process of introducing companies to Enterprise JavaBeans through the application of Extreme Programming. Among other things, he gives workshops about building code with agile techniques and creates a master project to build all apps based on Scrum, XP methods and Kanban. He focuses on building core components such as security, login, and menus. Vargas remarks, “This may sound easy, but it’s not—the process takes months and hundreds of hours, but it can be controlled, and with small iterations, we can translate customer requirements and problems of legacy systems to the new system.” In regard to his work with geolocalization, he says: “We have launched a beta program of Yumbling, a geolocalization-based app, with mobile clients for BlackBerry, iPhone, Android, and Nokia, with a Web interface. The first challenge was to design a simple universal mechanism providing information to all clients and to minimize maintenance provision to them. I try not to generalize a lot—to avoid low performance or misunderstanding in processing data. We use the latest Java EE technology—during the last five years, I’ve taught people how to use Java EE efficiently.” Check out the interview here.

    Read the article

  • Does the "security" repository provides anything not found in the "updates" repository?

    - by netvope
    For the limited number of packages I looked at (e.g. apache), I found that the package version in the updates repository is always newer than or equal to the version available in the security repository (provided both exist). This gives me the impression that all security patches posted to the security repository are also posted to the updates repository. If this is true, I can remove all <release_name>-security entries in my apt sources.list and the <release_name>-updates entries will still give me the security patches. This would speed up apt-get update quite a bit. The best documentation I could find regarding the repositories is on the community help page:
    "Important Security Updates (raring-security): Patches for security vulnerabilities in Ubuntu packages. They are managed by the Ubuntu Security Team and are designed to change the behavior of the package as little as possible -- in fact, the minimum required to resolve the security problem. As a result, they tend to be very low-risk to apply and all users are urged to apply security updates."
    "Recommended Updates (raring-updates): Updates for serious bugs in Ubuntu packaging that do not affect the security of the system."
    However, it does not mention whether the updates repository also includes everything in the security repository. Can anyone confirm (or disconfirm) this?

    Read the article

  • TDWI World Conference Features Oracle and Big Data

    - by Mandy Ho
    Oracle is a Gold Sponsor at this year's TDWI World Conference Series, held at the Manchester Grand Hyatt in San Diego, California, July 31 to August 1. The theme of this event is Big Data Tipping Point: BI Strategies in the Era of Big Data. The conference features an educational look at how data is now being generated so quickly that organizations across all industries need new technologies to stay ahead - to understand customer behavior, detect fraud, improve processes and accelerate performance. Attendees will hear how the internet, social media and streaming data are fundamentally changing business intelligence and data warehousing. Big data is reaching critical mass - the tipping point. Oracle will be conducting the following evening workshop. To reserve your space, call 1.800.820.5592 ext. 10775.
    Title: Integrating Big Data into Your Data Center (or A Big Data Reference Architecture)
    Date: Wed., August 1, 2012, at 7:00 p.m.
    Venue: Manchester Grand Hyatt, San Diego, Room
    Weblogs, social media, smart meters, sensors and other devices generate high volumes of low-density information that isn't readily accessible in enterprise data warehouses and business intelligence applications today. But this data can have relevant business value, especially when analyzed alongside traditional information sources. In this session, we will outline a reference architecture for big data that will help you maximize the value of your big data implementation. You will learn:
    - The key technologies in a big data architecture, and their specific use cases
    - The integration points of the various technologies and how they fit into your existing IT environment
    - How to effectively leverage analytical sandboxes for data discovery and agile development of data-driven solutions
    At the end of this session you will understand the reference architecture and have the tools to implement this architecture at your company. Presenter: Jean-Pierre Dijcks, Senior Principal Product Manager. Don't miss our booth and the chance to meet with our big data experts on the exhibition floor at booth #306.

    Read the article

  • How much Ruby should I learn before moving to Rails?

    - by Kevin
    Just a quick question... I can never get a definitive answer when googling this, either. Some people say you can learn Rails without knowing any Ruby, but at some point you'll run into a brick wall, wish you knew Ruby, and have to go back to learn it... and some say to learn the "basics" of Ruby before learning Rails, and it will make your life that much easier. My current knowledge is low. I'm not a beginner, but I'm not a pro, either. I went through the Learn Python The Hard Way online book in about a month, but I stopped once I got to the OOP side of Python (I know booleans, if/elif/else statements, for loops, while loops, and functions). I agree with learning the "basics" of Ruby before learning Rails, but what exactly are the "basics" of Ruby? Would I need to learn the whole OOP side of Ruby before I went on to Rails? Or would I just need to learn the Ruby syntax up to where I learned Python (booleans, if/elif/else statements, for loops, while loops, functions) before I went on to Rails? Thanks!

    Read the article

  • How to re-configure graphics from Intel integrated to Intel / ATI switchable?

    - by Bucic
    There are lots of 'how to get switchable graphics to work' guides, but I found none on how to configure a system for switchable graphics operation on Ubuntu from the ground up, nor any explaining the current driver situation for particular computer models (integrated + discrete combinations). Example: https://help.ubuntu.com/community/HybridGraphics My system being mature and on the Intel integrated card also makes things complicated.
    System information:
    - Ubuntu 12.04 amd64, installed clean, with the system configured to use only the integrated Intel card
    - Lenovo Thinkpad T500
    - Intel GMA 4500MHD / ATI Mobility Radeon HD 3650
    Current situation: a mature and up-to-date system with no configuration changes to what's given above. I've made a backup image of the system (Clonezilla), so regardless of what's written below let's assume it's our starting point. If something in 'What I have already tried' is not clear you may as well disregard it.
    What I have already tried: configuring the BIOS to switchable graphics and then:
    - Installing Additional Hardware drivers - returned an error.
    - Installing the proprietary amd-driver-installer-12.6-legacy-x86.x86_64.run automatically - the system starts in 'low-graphics mode'.
    - Tried fixing as per https://help.ubuntu.com/community/BinaryDriverHowto/ATI#Manually_installing_Catalyst_12.6.2C_special_case_for_Intel.2BAC8-ATI_hybrid_graphics - got lost, gave up.
    Please note that while configuring the BIOS for integrated graphics only is pretty straightforward, configuring it for switchable graphics is not.

    Read the article

  • Do you think natively compiled languages have reached their EOL?

    - by Yuval A
    If we look at the major programming languages in use today it is pretty noticeable that the vast majority of them are, in fact, interpreted. Looking at the largest piece of the pie we have Java and C# which are both enterprise-ready, heavy-duty, serious programming languages which are basically compiled to byte-code only to be interpreted by their respective VMs (the JVM and the CLR). If we look at scripting languages, we have Perl, Python, Ruby and Lua which are all interpreted (either from code or from bytecode - and yes, it should be noted that they are absolutely not the same). Looking at compiled languages we have C which is nowadays used in embedded and low-level, real-time environments, and C++ which is still alive and kicking, when you want to get down to serious programming as close to the hardware as you can, but still have some nice abstractions to help you with day to day tasks. Basically, there is no real runner-up compiled language in the distance. Do you feel that languages which are natively compiled to executable, binary code are a thing of the past, taken over by interpreted languages which are much more portable and compatible? Does C++ mark an end of an era? Why don't we see any new compiled languages anymore? I think I should clarify: I do not want this to turn into a "which language is better" discussion, because that is not the issue at hand. The languages I gave as example are only examples. Please focus on the question I raised, and if you disagree with my statement that compiled languages are less frequent these days, that is totally fine, I am more than happy to be proved mistaken.

    Read the article

  • Scaling background without scaling foreground in platformer?

    - by David Xu
    I'm currently developing a platform game and I've run into a problem with scaling resolutions. At a different game resolution I still want to display the foreground unscaled (characters, tiles, etc.), but I want the background to be scaled to fit the window. To explain this better, my viewport has 4 variables: (x, y, width, height), where x and y are the top-left corner and width and height are the dimensions. These can be either 800x600, 1024x768 or 1280x960. When I design my levels, I design everything for the highest resolution (1280x960) and expect the game engine to scale it down if a user is running at a lower resolution. I have tried the following to make it work, but nothing I've come up with solves it so far:

        scale = view->width/1280;
        drawX = x * scale;
        drawY = y * scale;

    (this makes the translation too small for low resolutions) and

        scale = view->width/1280;
        bgWidth = background->width*scale;
        bgHeight = background->height*scale;
        drawX = x + background->width/2 - bgWidth/2;
        drawY = y + background->height/2 - bgHeight/2;

    (this makes the translation completely wrong at the edges of the map). The thing is, no matter what resolution the game is run at, the map remains the same size, and the foreground is unscaled. (With a lower resolution you just see less of the foreground in the viewport.) I was wondering if anyone had any idea how to solve this problem? Thank you in advance!
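    One approach that avoids scaling the camera translation at all is to give the background its own scroll mapping: scale the image to fill the window, then map the camera's travel range across the level onto the background's scroll range, so the edges line up at any resolution. Below is a hedged sketch of that math in Java; all names and numbers are hypothetical placeholders, not taken from the question.

        // Sketch: resolution-independent background scrolling for a side-scroller.
        // The background is scaled to fill the window, and its horizontal offset
        // is derived from how far the camera is through the level, so its edges
        // line up with the map edges at any window size.
        public class ParallaxMath {

            // camX: camera position in world pixels, in [0, mapW - viewW]
            // mapW, viewW: level width and viewport width in world pixels
            // bgDrawW: background width after scaling to the window height
            static double backgroundOffsetX(double camX, double mapW,
                                            double viewW, double bgDrawW) {
                double maxCam = mapW - viewW;   // how far the camera can travel
                double maxBg = bgDrawW - viewW; // how far the background can scroll
                return (maxCam > 0) ? -(camX / maxCam) * maxBg : 0.0;
            }

            public static void main(String[] args) {
                double viewW = 800, viewH = 600; // current window size
                double mapW = 5120;              // level width in world pixels
                double bgW = 2560, bgH = 960;    // background spans two design screens

                double bgScale = viewH / bgH;    // fill the window vertically
                double bgDrawW = bgW * bgScale;  // 1600 at this window size

                for (double camX : new double[] { 0, 2160, 4320 }) {
                    System.out.printf("camX=%4.0f -> bgX=%7.2f%n",
                            camX, backgroundOffsetX(camX, mapW, viewW, bgDrawW));
                }
                // The foreground keeps the unscaled translation, e.g.:
                // g.drawImage(tile, tileX - (int) camX, tileY - (int) camY, null);
            }
        }

    Note the 1280-to-800 floating-point division: if view->width and 1280 are both integers in the original code, scale = view->width/1280 truncates to 0, which alone would explain the "too small" translation in the first attempt.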

    Read the article

  • DNS and VPN issues

    - by Lewis
    I recently purchased a year contract for a KVM 512MB VPS running Ubuntu 11.04. I'm having some issues setting up some things on it, though - two in particular that I just can't for the life of me figure out. Okay, so I'm trying to set up pptpd as my VPN for my iPhone and my Mac when I'm out on wireless networks. I'm able to log in and CHAP authenticates, but that's as far as I get: no domains will resolve and pages end up loading forever. I uncommented the ms-dns lines, as someone had recommended to me, and changed the DNS servers to Google's public ones, with no luck. Is there something I'm missing? (It's probably staring me in the face.) My second issue is that I have managed to set up LAMP but am having a problem with my domain. I have pointed the DNS at 123-reg to my VPS's IP, and the 'www.' version resolves properly, but when I try to go to the domain without the 'www.' I get the Apache landing page ("The web server software is running but no content has been added, yet."). I'm pretty sure there's something I've got to configure in Apache for the virtual host, but I'm missing it. Apart from these minor setbacks I'm enjoying the low-level configuration options of having a VPS and love managing my own server. Thanks!
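    On the second issue, the usual cause is a virtual host that only answers for the www name. A minimal sketch of an Apache 2 vhost that serves both the bare domain and the www form looks like this (example.com and the DocumentRoot path are placeholders; the bare domain also needs its own DNS A record pointing at the VPS):

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/example
        </VirtualHost>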

    Read the article

  • Is my current employer expecting too much?

    - by priyank patel
    This is my first job as a programmer. I am working with ASP.NET/C#, HTML, CSS, and Javascript/jQuery. I work for a firm which develops software for small banking firms; currently they have their software running in 100 firms. Their software is developed in Visual FoxPro, and I was hired to develop the online version of it. I am the solo developer. My boss is another developer, so my company has two developers, but my boss does not have any idea about .NET development. I have been working on their project for 8 months now. The progress is surely there, but not very big. I try my best to do what my boss asks, but the project just seems too ambitious for me. The company does not have any planning for the project; they just ask me to develop whatever their older software provides. So I have to deal with the front end, the back end, code reviews, designing the architecture, etc. I have decided to give my best, and I try a lot, but the project sometimes just seems overwhelming. So my question is: is it normal for a programmer to be in this position? I always feel the need to work in at least a small team, if not a big one. Are my employers just expecting too much of a fresher? Or is it that I, being a programmer, lack the skills to deal with this? I am just not able to judge my situation. Also, I am paid a very low salary, and I work on Saturdays as well. Can anyone help me judge this scenario? Any suggestions are welcome.

    Read the article

  • Oracle to Join OECD Urban Roundtable for Mayors and Ministers

    - by caroline.yu
    Oracle is pleased to announce that Bastian Fischer, vice president and general manager for EMEA, Oracle Utilities, will participate in the 2010 Organisation for Economic Co-operation and Development (OECD) Urban Roundtable for Mayors and Ministers on 25 May in France. The roundtable, hosted by OECD Secretary-General Angel Gurría, will help determine how cities can contribute to green growth incentives and address the challenges to success. The OECD is developing a global Green Growth Strategy that will identify policies and approaches that can shift production and consumption towards a clean, low-carbon and sustainable economy. Already, more than 500 European cities have signed up to the 2020 carbon pledge to reduce carbon emissions by 20 per cent in ten years. This initiative is driving the adoption of innovative technologies such as the smart grid, which deliver substantial benefits to support this mission by allowing utilities to manage their distribution grids more efficiently, reducing emissions and lowering the risk of outages. A successful smart grid infrastructure will allow green cities to manage their energy usage and succeed in their pledge to meet European targets for carbon reduction, which will undoubtedly be a discussion topic at the roundtable. For more information, visit the OECD Web site.

    Read the article

  • Ubuntu 12.04 LTS loops the login screen unless you login as Guest

    - by Mário Silva
    I am running Ubuntu 12.04 LTS Precise Pangolin as a guest in VMware Player on my Windows 7 host. Sometimes I get the blue-screen shutdown error in Windows; this time it happened while I was running the Player. When I restarted everything, Ubuntu gave the (not so unfamiliar on this forum) login loop on the administrator login. I log in and there's this black screen where I can only read: "piix4_smbus 0000:00:07.3: Host SMBus controller not enabled". When I go to the prompt in root mode it fails to update and only upgrades, especially some plugins (I think graphics plugins), which also appear in an error message after quitting the prompt, but they are successfully installed; they are not the error message. After that I have been working with the failsafe-mode recovery panel. When I try to update via root I get errors like this: "W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/release.gpg: could not resolve 'extras.ubuntu.com'". There are 8 more like this, referring to areas like:
    - archive.canonical.com
    - ppa.launchpad.net
    - security.ubuntu.com
    - us.archive.ubuntu.com
    - release.gpg
    - precise-updates/release.gpg
    - precise-backports/release.gpg
    The final message is: "Some index files failed to download... they have been ignored, or old ones are used instead." The black screens mostly pass by too fast for me to pick up any information, but in general I think I have done everything I could in the recovery panel, including updating network and graphics packages, recovering filesystem packages, and the basic stuff (I am a beginner regarding Linux) at the root prompt. Now I am stuck at this screen with graphics options:
    - Run in low-graphics mode for just one session
    - Reconfigure graphics
    - Troubleshoot the error
    - Exit to console login
    I am trying to choose to reconfigure graphics, but the mouse disappears in the virtual machine screen, and sometimes the options change out of the blue, without messages, to only the first and last option. This particular option menu is in the regular GUI style against a black screen in terminal style. Really strange. Thanks in advance; all help is welcome and appreciated.

    Read the article

  • Problems with both LightDM and GDM using DisplayLink USB monitor

    - by Austin
    When I use LightDM, it auto-logs in to the desktop just fine. The only problems are that Compiz doesn't work and menus don't work: I can't right-click the desktop, and I can't select program menus in the top bar (i.e. clicking "File" does nothing). When I use GDM, I only get a blank blue screen and the mouse cursor. I can't Ctrl+Alt+Backspace to restart, but I can Ctrl+Alt+F1 and Ctrl+Alt+F7 to switch modes. I don't think it's auto-logging me in, but I'm not sure; it does play the login screen noise. Will update with more information when I get home! EDIT: Okay, so I did a fresh install, just to ensure I hadn't borked something playing in the console. I reconfigured my setup as I did before, with the same results. Here's what I followed. The only difference is that instead of setting "vga=normal nomodeset" I set "GRUB_GFXPAYLOAD_LINUX = text". Also, I only have the DisplayLink monitor configured in my xorg.conf file. At this point I'm using the open radeon driver, although I used the proprietary ATI driver before. I'm not sure if I'm having a problem with:
    - X configuration
    - Graphics driver
    - DisplayLink driver
    - Unity
    - LightDM
    - Compiz
    - Or something else
    The resolution of the monitor is 800x480, 16-bit. I tried setting a larger virtual resolution of 1200x720 (because the real resolution is lower than the recommended resolution), but it causes Ubuntu to boot into low-graphics mode. When I get home I'm going to install the fglrx driver and see if it enables virtual resolutions, which may further enable my window manager to function properly.

    Read the article

  • What are the boundaries between the responsibilities of a web designer and a web developer?

    - by Beofett
    I have been hired to do functional development for several web site redesigns. The company I work for has a relatively low technical level, and the previous development of the web sites was completed by a graphic designer who is self-taught as far as web development is concerned. My responsibilities have extended beyond basic development, as I have also been tasked with creating the development environment and migrating hosting from external CMS hosting to internal servers incorporating scripting languages (I opted for PHP/MySQL). I am working with the graphic designer, and he is responsible for the creative design of the web sites. We are running into a bit of friction over confusion about the boundaries of our respective tasks. For example, we had some differences of opinion on navigation. I was primarily concerned with ease of use (the majority of our user base are not particularly web-savvy), as well as meeting W3C WAI standards (many of our users are older, and we have a higher than average proportion of users with visual impairment). His sole concern was what looked best for the website, and I felt that the direction he was pushing for caused some functional problems. I feel color choices, images, fonts, etc. are clearly his responsibility, and my expectation was that he would simply provide me with the CSS pages and style classes and IDs to use, but some elements of page layout also seem to fall more under the realm of "usability", which to me is near-synonymous with "functionality". I've been tasked with selecting the tools we'll use, which include frameworks, scripting languages, database design, and some open source applications (Moodle for example, and quite probably Drupal in the future). While these tools are quite customizable, working directly with some of the interfaces is beyond his familiarity with CSS, HTML, and PHP. This limits how much direct control he has over the appearance, which has led to some discussion about the tool choices. Is there a generally accepted dividing line between the roles of a web designer and a web developer? Does his relatively inexperienced background in web technologies influence that dividing line?

    Read the article

  • How do I stop someone using my domain for Spam emails?

    - by Vizioz Limited
    Hi blog readers, every now and then I seem to have one of those low-life Viagra sellers using my domain for spam emails. I have done everything I can think of to try and prevent them from doing this, but they keep doing it. I just wondered if anyone out there knew of a way to stop them? The headers from one of the bounce-backs look like this:

        Return-Path: <[email protected]>
        Received: from rctp.telecomitalia.it (host49-133-dynamic.52-82-r.retail.telecomitalia.it [82.52.133.49]) by mx.google.com with SMTP id o8si307731weq.161.2010.07.23.05.33.53; Fri, 23 Jul 2010 05:33:59 -0700 (PDT)
        Received-SPF: fail (google.com: domain of [email protected] does not designate 82.52.133.49 as permitted sender) client-ip=82.52.133.49;
        Authentication-Results: mx.google.com; spf=hardfail (google.com: domain of [email protected] does not designate 82.52.133.49 as permitted sender) [email protected]
        Message-ID: <[email protected]>
        Date: Fri, 23 Jul 2010 14:33:52 +0200
        From: Garnett Mckinnie <[email protected]>
        MIME-Version: 1.0
        To: NAME REMOVED <[email protected]>
        Subject: Where we are well established we are

    As you can see from the headers, I have set up the SPF record, and it is producing a "hardfail". We are using Google Apps for our email hosting, so you'd kind of hope that they have got things pretty well secured down, so what am I missing? Or is it always going to be possible for other people to fake sending emails using another domain?
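    For reference, a correctly published SPF record for a Google Apps-hosted domain is a DNS TXT record along these lines (example.com is a placeholder for the real domain):

        example.com.  IN  TXT  "v=spf1 include:_spf.google.com ~all"

    Note that SPF only lets receiving servers detect the forgery, which is exactly what the hardfail above shows happening; it cannot stop a spammer from putting the domain in the From line in the first place.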

    Read the article
