Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 96/184 | < Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >

  • Change from Bullet/OgreAnim to Havok?

    - by Brett Powell
    I have been working on a game in Ogre for the last 6 months or so. It started as a learning project, and after a few rewrites it actually turned into a real game project. Physics scared me, and using Bullet with its lack of documentation was a nightmare, but I was able to at least get some basics added and learned a lot. So as of now I am using Ogre with its default animation system (fairly basic) and Bullet physics. I had always wanted to use Havok when I started out, but due to the lack of integration information on the Ogre forums, and the lack of tutorials on the net, I decided against it. Now that I am actually at the point where Bullet is just too much of a headache to proceed with (staring at forum threads praying someone answers) and the Ogre animation system is so basic, I am considering switching to Havok for physics and animation. The physics system looks extremely polished and easy to use. The animation system looks incredible with the retargeting/blending/etc., and the documentation is incredibly detailed as well (I guess when you come from Bullet, any documentation looks amazing). So my question is, as I am still somewhat of a 'newbie' to game development, should I just stick with what I am using now or should I make the switch over to Havok? On the physics side it looks like I could get my project back to where it is now with minimal effort, and be able to expand much faster. The animation side looks extremely daunting as far as integrating it with Ogre.

    Read the article

  • IASA ITARC – Denver May 6th

    - by Jeff Certain
    The Denver chapter of the International Association of Software Architects (IASA) is holding an IT Architect Regional Conference (ITARC) in Denver on May 6th. The speaker list for this conference is amazing. Paul Rayner, Dave McComb, Randy Kahle, Peter Provost, Randy Stafford, George Fairbanks – all great speakers, and from Colorado. Brandon Satrom (who also happens to be the president of the IASA Austin chapter) will also be speaking, as will some other heavy hitters (for example, Ted Farrell, Chief Architect and Senior VP of Oracle). This is an amazing line-up, and the conference is quite reasonably priced ($150 for IASA members until April 10th, including a catered lunch). I also have the privilege of being a presenter at this conference. If you’ve ever heard any of the previously named speakers, you know that they set the bar quite high. Sounds like I’m going to have to step up my game. What I get to talk about is really cool stuff. The company I work for – Colorado CustomWare – brought me on board nearly two years ago. To say there was some technical debt is somewhat… understated. Equally understated would be saying that management is committed to doing the right thing. Over the past two years, we’ve done significant architectural refactoring – including an effort that took the entire team offline for most of a month. We’ve reduced the application size by 50% without losing functionality. As you can imagine, this has reduced the complexity of the application, making development faster and less prone to bugs. We’ve made many other changes – moving to an agile process, training developers, moving towards a more OO architecture. The changes we’ve made reveal, in some ways, just how far afield we were… and there are still more changes to be made. Amazingly enough, our leadership team is eager for me to share these experiences with other architects. I’m really looking forward to being able to do so.

    Read the article

  • Is there ever a reason to use C++ in a Mac-only application?

    - by Emil Eriksson
    Is there ever a reason to use C++ in a Mac-only application? I'm not talking about integrating external libraries which are C++; what I mean is using C++ because of any advantages in a particular application. While the UI code must be written in Obj-C, what about logic code? Because of the dynamic nature of Objective-C, C++ method calls tend to be ever so slightly faster, but does this have any effect in any imaginable real-life scenario? For example, would it make sense to use C++ over Objective-C for simulating large particle systems where some methods would need to be called over and over in a short time? I can also see some cases where C++ has a more appropriate "feel". For example, when doing graphics, it's nice to have vector and matrix types with appropriate operator overloads and methods. This, to me, seems like it would be a bit clunkier to implement in Objective-C. Also, Objective-C objects can never be treated as plain old data structures in the same manner as C++ types, since Objective-C objects always have an isa pointer. Wouldn't it make sense to use C++ instead in something like this? Does anyone have a real-life example of a situation where C++ was chosen for some parts of an application? Does Apple use any C++ except for the kernel? (I don't want to start a flame war here; both languages have their merits, and I use both equally, though in different applications.)

    Read the article

  • Will Your Brand Survive the Age of Digital Darwinism?

    - by Christie Flanagan
    It’s the end of business as usual. Trends such as the mobile web, social media, gamification and real-time are forcing businesses to rethink the way they operate. At the same time, people are embracing a new digital culture across ever expanding networks. Together, these trends have given rise to a new breed of connected consumer, one that is ready to shake the very foundation of business today. This is the age of Digital Darwinism – where society and technology evolve faster than your ability to adapt. How well poised is your brand to survive and thrive in this new environment? Attend this webcast to hear Altimeter Group digital analyst and futurist, Brian Solis, discuss the rise of connected consumerism and learn how brands can survive Digital Darwinism by better understanding customer expectations, disruptive technology, and the new opportunities that arise from them. You will learn:
    - How brands are being redefined in the digital consumer landscape and what they can do to create and steer these experiences
    - Why consumer influence is growing and how businesses can use this to their advantage
    - How to connect with a rising audience through new touchpoints between consumers, brands, and influencers
    - Why you need to create a culture of change to earn trust, influence and significance among today’s connected customers
    Register now.

    Read the article

  • How to find the fastest DNS servers to host our domain?

    - by Denis Volovik
    The question was born because lately we've seen a pretty odd error message (well, at least for us, for the first time) in Google Webmaster Tools - "DNS lookup timeout"... I was pretty sure that with eNom's 5 DNS servers (dns1... to dns5.name-services.com) we were pretty set... But it appears that from Europe/Hungary, for example, dns1.name-services.com takes 170 ms. to respond to a ping, while GoDaddy's ns75.domaincontrol.com takes only 40 ms. to respond... and at the same time, dns2 to dns5.name-services.com each result in a timeout error (on ping)... This issue came to our attention right in the final stages of optimizing our web-site (almost to death) - basically, just in time... I would love to move our domains to a fast (fastest?) and reliable DNS server... but how do I find one? Also, I did the ping tests from various geographic locations (we have servers in many countries) and GoDaddy seemed to be faster than eNom in almost every case. I'd be very thankful for any hints on this! Edited: Well... maybe this one does not have an answer, after all...
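
    (A quick, if crude, way to put numbers on this from any machine you control is to time lookups yourself. The sketch below is an illustration in C#, not part of the original question; the host names are placeholders, and Dns.GetHostEntry times your locally configured resolver chain, OS caching included, rather than one specific authoritative server. For per-server timing, a tool like dig @server, which reports query time, is better suited.)

        using System;
        using System.Diagnostics;
        using System.Net;

        class DnsTimer
        {
            static void Main()
            {
                // Hypothetical hosts to resolve; repeat several times and average,
                // since the first (uncached) lookup is rarely representative.
                string[] hosts = { "example.com", "www.example.org" };
                foreach (var host in hosts)
                {
                    var sw = Stopwatch.StartNew();
                    try
                    {
                        // Blocking lookup through the OS-configured resolver.
                        IPHostEntry entry = Dns.GetHostEntry(host);
                        sw.Stop();
                        Console.WriteLine("{0}: {1} ms ({2})", host, sw.ElapsedMilliseconds, entry.AddressList[0]);
                    }
                    catch (Exception ex)
                    {
                        sw.Stop();
                        Console.WriteLine("{0}: failed after {1} ms ({2})", host, sw.ElapsedMilliseconds, ex.Message);
                    }
                }
            }
        }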

    Read the article

  • Are programmers getting lazier and less competent?

    - by Skeith
    I started programming in C++ at uni and loved it. The next term we changed to VB6 and I hated it. I could not tell what was going on: you drag a button to a form and the IDE writes the code for you. While I hated the way VB functioned, I cannot argue that it was faster and easier than doing the same thing in C++, so I can see why it is a popular language. Now, I am not calling VB developers lazy; I'm just saying it is easier than C++, and I have noticed that a lot of newer languages are following this trend, such as C#. This leads me to think that as more businesses want quick results, more people will program like this, and sooner or later there will be no such thing as what we call programming now. Future programmers will tell the computer what they want, and the compiler will write the program for them, like in Star Trek. Is this just an under-informed opinion of a junior programmer, or are programmers getting lazier and less competent in general? EDIT: A lot of answers say "why reinvent the wheel?", and I agree with this, but when there are wheels available, people are not bothering to learn how to make the wheel. I can google how to do pretty much anything in any language, and half the languages do so much for you that when it comes to debugging, people have no idea what their code does or how to fix the error. That's how I came up with the theory that programmers are becoming lazier and less competent, as no one cares how stuff works, just that it does, until it does not.

    Read the article

  • How to Fake Back and Forward Buttons With a Three-button Mouse

    - by Erez Zukerman
    If you’re stuck using a three-button mouse, this doesn’t mean you have to give up on the comfort of Back and Forward browser buttons. Using a simple AutoHotkey script, you could set your mouse up so that you can hold the right mouse button and scroll the wheel to emulate these all-important buttons.

    Read the article

  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that those libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on. Recently a client made us question one of our fundamental assumptions about the .NET Framework and Web Services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right? Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test my client's hypothesis.
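
    (For illustration, a stripped-down sketch of the client's proposed approach; this code is not from the article itself, the service URL and stylesheet name are hypothetical, and production code would add the timeouts and error handling the generated proxy otherwise supplies.)

        using System;
        using System.IO;
        using System.Net;
        using System.Xml;
        using System.Xml.Xsl;

        class ManualServiceCall
        {
            static void Main()
            {
                // Hypothetical endpoint returning XML over HTTP GET.
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/service/data");
                request.Method = "GET";

                // Pull the raw XML payload straight into an XmlDocument.
                var doc = new XmlDocument();
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    doc.Load(stream);
                }

                // Transform the XML directly to HTML with XSLT,
                // skipping proxy-class deserialization entirely.
                var xslt = new XslCompiledTransform();
                xslt.Load("FormatResults.xslt"); // hypothetical stylesheet
                using (var writer = new StreamWriter("output.html"))
                {
                    xslt.Transform(doc, null, writer);
                }
            }
        }

    (Whether this actually beats the generated proxy is exactly what the author goes on to measure.)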

    Read the article

  • Should I ditch a creative pet project in favor of one that would demonstrate skills more applicable to an employer?

    - by Hart Simha
    I am currently working on a project on GitHub that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that enables the user to improve their development productivity by learning vim, specifically with Python, though learning to code faster with vim should be transferable to any language. I think this is something that might have mass appeal and benefit a lot of people in a measurable way. -However- I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying to, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile development, JavaScript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and Python). I'm wondering if I should sink all my free time over the next couple of months into this project, since I'm motivated and interested in it, and whether the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of demonstrating more sought-after skills. I am probably at a point where I should either commit fully to this project now, or put it on the backburner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea, and something achievable to me with enough dedication over the next couple of months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience, which I lack. Oh, and in case anyone is interested, my repository is here: www.github.com/hmsimha/vimagine

    Read the article

  • Learning C, C++ and C#

    - by Zac
    I'm sure you guys are tired of this question, but after wading through hours of similar posts and questions I've really not made any progress on my specific concerns. I was hoping you guys could shed some light on a couple of questions I have before I decide on a course of action. BACKGROUND: I want to enroll in some type of program to learn a programming language and get a certificate/degree to work in the field. I've always been interested, and bought a book on VB back in high school and dabbled. Now I want to get serious after a huge hiatus. Question 1: I've read it's counter-productive to learn C first, then C++ or C#, because you develop bad habits. In a lot of college courses I've looked at, learning C/C++ is mandatory to advance. Should I even bother learning C? On a related note, I really don't understand the difference between C and C++, or C# for that matter, other than that it incorporates .NET (which, I understand, is a compilation of tools and libraries that make programming easier and faster). Question 2: Where did you guys learn to program? Where do you recommend? Is it possible to land a programming job being self-taught? Is my best chance an ITT Tech or a regular college? I was going to enroll in a JC and go from there, but I can't decide what to do. LAST question :) I heard C++ is being "ported" to .NET. True? And if so, is this going to make C++ a solid, in-demand language to learn? Thanks for looking. :)

    Read the article

  • Handling SEO for Infinite pages that cause external slow API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (1 minute). Links in the site point to such pages, and when a user clicks them they are generated and he waits. Considering I cannot pre-create them all, I am trying to figure out the best SEO approach to handle these pages. Options:
    1. Create really simple pages for the web spiders, so that only real users fetch the data and generate the full page. I'm a little 'afraid' Google will see this as low-quality content, which might also feel duplicated.
    2. Put them under a directory on my site (e.g. /non-generated/) and put a disallow in robots.txt. The problem here is that I don't want users to have to deal with a different URL when wanting to share this page or make sense of it. I thought about maybe redirecting real users from this URL back to the regular hierarchy, and that way 'fooling' Google not to get to them. Again, not sure it will like me for that.
    3. Let it crawl these pages. The main problem is that I can't control the rate of the API calls, and my site also seems slower than it should from a spider's perspective (if it only crawled the generated pages, it would think the site is much faster).
    Which approach would you suggest?
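
    (For option 2, the robots.txt entry itself would be trivial; a sketch, assuming the hypothetical /non-generated/ prefix from the question:)

        User-agent: *
        Disallow: /non-generated/

    (Note that a disallow only stops crawling; a URL can still appear in the index if other sites link to it.)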

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expanded macro, it is an attribute, but it does serve exactly the same purpose. Now we do get:

        CallerLineNumberAttribute == __LINE__
        CallerFilePathAttribute   == __FILE__
        CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string, inserted directly by the C# compiler at compile time.

        public event PropertyChangedEventHandler PropertyChanged;

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller, along with other stuff, very fast. All infos are added during compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute:

        public static void TraceMessage(string message,
                                        [CallerMemberName] string memberName = "",
                                        [CallerFilePath] string sourceFilePath = "",
                                        [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing, I usually want to have an API which allows me to:
    - Trace method enter and leave
    - Trace messages with a severity like Info, Warning, Error
    When I print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable trace API might therefore look like this:

        [Flags]
        enum TraceTypes
        {
            None       = 0,
            EnterLeave = 1 << 0,
            Info       = 1 << 1,
            Warn       = 1 << 2,
            Error      = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                    // trace method enter
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string type, string method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(… string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would need to figure out which arguments are our trace message and arguments (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure if nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy one. The thing I am missing much more is the not provided CallerType attribute. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to see if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics:

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                Method = method;
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                    // trace method enter
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                    // trace method leave
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to describe the
            /// current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                // Flag if we need to reinit the trace data in case of reconfigured trace settings at runtime.
                public bool IsInitialized = false;

                // Enabled trace levels for this type.
                public TraceTypes Enabled = TraceTypes.None;
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which is, due to the nature of generics, a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {
                }
            }
        }

    That would be really useful and super fast, since you do not need to pass any type object around but still have full type infos at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces; this would be only a variation of that issue. When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g.:
    - Reconfigure tracing at run time.
    - Take a memory dump when a specific method is left with a specific exception.
    - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
    - Execute a passed delegate which e.g. dumps additional state when enabled.
    - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
    - Write data to an output device.
    - …
    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (with .NET 4, not anymore, except if you enable a compatibility flag) where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for a type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further if a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since at that point I do want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.

    Read the article

  • Speaking at SPTechCon Boston 2012

    - by Brian Jackett
    I will be speaking at SPTechCon Boston 2012. This will be my 3rd time speaking at SPTechCon and 4th time attending. The conference has steadily been growing over the past few years and is one of the biggest non-Microsoft-run conferences for SharePoint in the US. I’ll be presenting two topics which I have given before, but this time around with some updated content. Registration is currently open and you can save $200 (on top of the current early bird discount of $400) by using the code "JACKETT" during registration. I highly recommend joining for valuable learning and networking.

    Where: SPTechCon Boston 2012

    Title: PowerShell for the SharePoint 2010 Developer
    Audience and Level: Developer, Intermediate
    Abstract: PowerShell is not just for SharePoint 2010 administrators. Developers also get access to a wide range of functionality with PowerShell. In this session we will dive into using PowerShell with the .NET Framework, web services, and native SharePoint commandlets. We will also cover some of the more intermediate to advanced techniques available within PowerShell that will improve your work efficiency. Not only will you learn how to automate your work, but you will also learn ways to prototype solutions faster. This session is targeted to developers and assumes a basic familiarity with PowerShell.
    Slides and Code download: coming soon

    Title: Integrating Line-of-Business Applications with SharePoint 2010
    Audience and Level: Developer, Intermediate
    Abstract: One of the biggest value-adding enhancements in SharePoint 2010 is the Business Connectivity Services (BCS). In this session, we will overview the BCS, demonstrate connecting line-of-business applications and external systems to SharePoint through external content types, and walk through surfacing that data with external lists. This session is targeted at developers. No prior experience with the BCS is required, but a basic understanding of SharePoint Designer 2010 and SharePoint solutions is suggested.
    Slides and Code download: coming soon

    -Frog Out

    Read the article

  • Save Files Directly from Your Browser to the Cloud in Chrome and Iron

    - by Asian Angel
    Are you looking for a quicker, easier way to upload files you find while browsing to your favorite cloud services? Skip saving files to your hard-drive and transfer them directly from your browser to your accounts using the Cloud Save extension. You can see the cloud services currently supported in the screenshot above, and more are being added all the time. So if your favorite is not listed yet, just keep checking in at the extension’s homepage. Cloud Save [Google Chrome Extensions]

    Read the article

  • Two Cloudy Observations from Oracle OpenWorld

    - by Gene Eun
    Now that the dust has settled from another amazing Oracle OpenWorld, I wanted to reflect back on a couple of key observations I made during the event. First, it was pretty clear that Cloud was again a big deal at this year's conference. It was hard to not notice that Oracle continues to be "all-in" with respect to cloud computing. Just to give you an idea of the emphasis on Cloud, there were over 300 Cloud-related sessions at this year's OpenWorld. If you caught some of the demo booths in the Oracle Red Lounge, then you saw some of the great platform, application, and social services that are now part of Oracle Cloud, as well as numerous demos of private cloud products that Oracle offers. Second, during Thomas Kurian's keynote presentation on Oracle Cloud, he announced the Preview Availability of a new service called Oracle Developer Cloud Service. This new platform service will provide developers with instant access to environments to better manage the application development lifecycle in the cloud. It provides development project teams access to favorite tools like Hudson, Git, Github, wikis, and tasks to help make innovation faster, more collaborative, and more effective. There's also integration with IDEs like Eclipse, NetBeans, and JDeveloper. If you're a developer, it's an awesome addition to Oracle Cloud's platform services! Want more details about Oracle Developer Cloud Service? Click here.

    Read the article

  • Startup/Shutdown time in Xubuntu is increasing!

    - by Ankit
    I am a novice Xubuntu user on a dual-boot machine. The other OS I have is Windows 7. When I first began using Xubuntu, I had really fast startup and shutdown (much, much faster than Windows 7 :) ). However, as I started using it more and more for my work, these times started rising. I do not have any problems with the execution speed of running applications. My main concern is the shutdown time: it has now gone above the Windows shutdown time (startup time has only partially increased compared to shutdown). I checked some similar questions like this one. However, they seem not to answer my concern, as I feel that the users there experience a long wait before the screen goes blue. In my case, the screen goes blue (the desktop session ends and a blue screen with a moving slider appears) pretty fast. However, it remains blue for a long time. Another answer that I saw on Google was to use dmesg and then stop some services that I do not want. However, being a novice, I could not completely understand what it meant.

    Read the article

  • Difference between Cassini [IIS Express] and the VS development server or Expression Web

    - by anirudha
    An MVC3 project can be run within Expression Web as well as Visual Studio, opened as a website rather than a project; they work the same. If you open a BlogEngine.NET project in VS, it takes a while to debug when you have many themes, but Expression Web debugs it in a second, because theme design does not matter for the backend code. Expression Web is good because it saves compile time when the changes we make are only in the design and not in the backend code. I found a little difference between Cassini [IIS Express] and the VS development server: if an image path is written the wrong way, like <img src="//img.png"/> instead of <img src="/img.png"/> (the mistake being // instead of /), it does not work when debugging in Expression Web or Visual Studio, but in Cassini it works fine. I found that debugging BlogEngine.NET in Expression Web is a great thing, because VS takes about a minute to debug the first time you try. Expression Web also saves time when we design themes within it, which is a good option because the web is also made for design. So if you want to debug an application faster, use Cassini; but Expression Web debugging is a good option when VS takes a long time to debug and EW debugs in seconds.

    Read the article

  • Does Unity have existing support for Timelines?

    - by Raven Dreamer
    I am planning the development of a game in Unity3D, and trying to come to terms with what the engine has already provided, and what I must code myself. The game itself is going to be a rhythm game, which means synching audio and graphical events so that they always play when they're supposed to. What I'm looking to avoid is a potential scenario of lag where either the audio or the graphics starts to progress faster than the other. When we discussed this type of coordinating system in my game design class back at university, my professor called this type of design a "Timeline" class. The idea being that you can instantiate one or more of these to progress at different rates, schedule things to happen in the future, and synch up periodic events. However, calling this a "Timeline" class seems to have been limited to my professor himself, as googling for whether a certain API features "Timeline" functionality has been a fruitless endeavor. Is there some more common name for this kind of functionality? Does Unity have any pre-existing methods to coordinate the scheduling of events like this, or is this the kind of thing that needs to be built onto the engine? And if it does, I'd appreciate being pointed towards some tutorials!
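
    (Unity has no built-in class by this name in the sense described, so a 'Timeline' like this is typically hand-rolled. Below is a minimal, engine-agnostic C# sketch of the idea, not an official Unity API: schedule callbacks at timestamps, advance the timeline once per frame with the frame's delta time, and let a rate multiplier drive it faster or slower than wall-clock time.)

        using System;
        using System.Collections.Generic;

        // A minimal timeline: schedule actions at timestamps, advance it each
        // frame, and fire everything whose time has come. Rate != 1.0 lets one
        // timeline progress faster or slower than another.
        class Timeline
        {
            private readonly SortedList<double, List<Action>> _events = new SortedList<double, List<Action>>();
            public double Now { get; private set; }
            public double Rate = 1.0;

            public void ScheduleAt(double time, Action action)
            {
                List<Action> bucket;
                if (!_events.TryGetValue(time, out bucket))
                {
                    bucket = new List<Action>();
                    _events.Add(time, bucket);
                }
                bucket.Add(action);
            }

            // Call once per frame, e.g. from a MonoBehaviour's Update()
            // with UnityEngine.Time.deltaTime. Callbacks should not
            // schedule new events at or before the current time.
            public void Advance(double deltaSeconds)
            {
                Now += deltaSeconds * Rate;
                while (_events.Count > 0 && _events.Keys[0] <= Now)
                {
                    foreach (var action in _events.Values[0])
                        action();
                    _events.RemoveAt(0);
                }
            }
        }

    (Driving Advance() from the audio playback position rather than the render clock is the usual trick for keeping graphical events locked to the music in a rhythm game.)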

    Read the article

  • Where can I find video and audio drivers for my netbook?

    - by Ari
    I got a little netbook yesterday (an Acer Aspire One) for schoolwork and read in numerous reviews that Ubuntu was a must. I dove into it blindly and downloaded 12.04 LTS, and though I admit it's faster than before, I have no idea how it works yet and was stupid enough to completely get rid of the Windows 7 Starter that was on there before. I can see myself getting used to the rest of Ubuntu, but the main issue is that it seems to have eaten all of my drivers and I don't know where to get any Ubuntu-compatible ones. Any video at all is jerky and kind of creepy, and any audio is very distorted and REALLY creepy. I understand that this is a tiny $150 typing and light web use school device I'm talking about, but I feel really stupid making the poor thing worse than it was before and not knowing what to do about it. Does anyone know how to install graphics/sound drivers? Or, alternatively, does anyone know where I can get a copy of Windows 7 Starter? I can see myself liking this if the problems are fixable, but if not, I still have the product key and would be able to tackle the problem by myself with that.

    Read the article

  • Dot matrix printer setup...

    - by Parhs
    Hello! I am using Debian, which is similar to Ubuntu. We have 7 dot matrix printers, some very old, like this one: http://www.omnidatasys.net/product/desc_printer_ti880.htm, which has worked daily since 1979 and at text is faster than many inkjets. I believe it has its own language... Sending text to the serial port (port server) prints garbage. However, I think it prints only capital English up to ASCII 95, and Greek for the rest up to 127, I think capital Greek (special chip). Sending English capital letters prints garbage, I think, but I am not sure... I will try again... The other printers are ESC/P compatible and I use the generic Epson driver provided by Ghostscript. However, I think that sending text via lp -dpr1 filename prints the text as a graphic... Changing the printer font face (Courier, Times Roman etc.) or pitch has no effect... I am wondering if there is any workaround for this? In AIX they claim that the lp command prints output as text, and COBOL programs send raw text to lp printers. However, in AIX they use some custom filters for the printers and have more options for dot matrix printers. I would like to know if there is a solution for this: to avoid graphics mode for text and change the font face somehow. The most straight-through approach would be to use no driver and just send ESC/P from COBOL, but that requires too much work... Thank you again!
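
    (One possible workaround, sketched here as an assumption rather than a tested recipe: bypass the Ghostscript driver for text jobs and queue the files raw, e.g. lp -d pr1 -o raw filename under CUPS, with standard ESC/P control codes embedded in the text to pick typeface and pitch. The usual ESC/P assignments are:)

        ESC @      (1B 40)     initialize/reset the printer
        ESC k n    (1B 6B n)   select typeface: n=0 Roman, n=1 Sans Serif
        ESC P      (1B 50)     select 10 cpi pitch
        ESC M      (1B 4D)     select 12 cpi pitch
        ESC x n    (1B 78 n)   select quality: n=0 draft, n=1 LQ/NLQ

    (Verify these against the manual for each printer model before relying on them; the very old TI printer with its own language will need its own codes.)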

    Read the article

  • LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology

    - by Eric Z Goodnight
    With image technology progressing faster than ever, High-Def has become the standard, giving TV buyers more options at cheaper prices. But what’s different in all these confusing TVs, and what should you know before buying one? If you’re considering buying a television this Holiday season for a loved one (or simply for yourself), it can be a big help to know what to look for. Take a look to find out what sets HD televisions apart, learn some of the confusing jargon associated with them, and see a comparison of four of the types of HDTVs commonly sold today.

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace Feature?
    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform. Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
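
    (As an illustration only: the commands below are the generic 10gR2 TTS/XTTS building blocks, not the E-Business Suite-specific procedure from the certification note, and the tablespace name and paths are hypothetical.)

        -- 1. Check that the tablespace set is self-contained
        EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('EXAMPLE_TS', TRUE);
        SELECT * FROM TRANSPORT_SET_VIOLATIONS;

        -- 2. Make it read only and export its metadata with Data Pump
        ALTER TABLESPACE example_ts READ ONLY;
        --   expdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=example_ts

        -- 3. If the endian formats differ, convert the datafiles with RMAN
        --   RMAN> CONVERT TABLESPACE example_ts
        --         TO PLATFORM 'Linux IA (32-bit)' FORMAT '/stage/%U';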

    Read the article

  • Additional new content SOA & BPM Partner Community

    - by JuergenKress
    Oracle Fusion Middleware 12c (12.1.2.0.0) Released - Download (OTN, eDelivery)
    Whitepaper: Next Generation Service Integration Platform - PDF
    SOA Maturity: This article in the Industrial SOA series offers an exploration of the fundamentals of applying a factory approach to modern service-oriented software development. Read the article.
    Enterprise Service Bus: The fifth article in the Industrial SOA series answers some of the most important questions about the use of an enterprise service bus, using concrete examples to clarify areas of application that can be deemed correct for ESBs. Read the article.
    DevOps, Cloud, and Role Creep: DevOps and cloud computing are changing the IT industry - and changing IT roles. A panel of community members discusses what's happening and how it might affect your job. Listen to the podcast.
    - Industrial SOA - now chapters 1 to 5 available | Torsten Winterberg
    - White Paper: Cloud Integration - A Comprehensive Solution
    - White Paper: Next Generation Service Integration Platform: SOA Suite on Exalogic
    - IT Briefcase Interview: An Integrated Approach to Mobile, Cloud, and API Management Technologies with Oracle Fusion Middleware
    - Webcast: Oracle Cloud Integration – Information Week Webcast
    - eBook: Oracle SOA Suite – In the Customers’ Words
    - Podcast: Cloud Integration
    - Transitioning from TIBCO to Oracle SOA Suite – Part 1
    - Events: Oracle Simplifying Integration of Cloud and On-Premise
    - New B2B Book Published for Oracle SOA B2B 11g
    - Get Fast-Data Accelerator in Your Hands Today: Mobile Data Offloading for Telecom
    - Fast Data Accelerator - Blog
    - New Oracle Process Accelerators in Financial Services & Telco
    - Detect, Analyze, Act Fast with BPM
    - Improving the Quality of Healthcare with BPM
    - Engineers Australia Improves and Automates Business Processes and Completes Engineer Enrollments up to 90% Faster with Middleware Platform - Case Study | PPT
    - Specialized Partner Ataway on BPM Practice - Video
    - eProseed Delivers Processes Skillfully with Oracle BPM Suite - Video
    - Yarra Valley Water Uses SOA and BPM for Orchestration, Re-use and Visibility - Video
    - Victoria University Discusses Oracle SOA & Oracle BPM - Video
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Welcome to the FMW Install and Admin Proactive Team Blog

    - by Daniel Mortimer
    Introduction
    Welcome to the Fusion Middleware Install and Administration Proactive Support blog. This is our first post, so let's begin by introducing ourselves and our mission.

    Who We Are
    We are a small team of support engineers based in Europe. Our expertise covers all matters related to the installation and administration of Oracle Application Server 10g, Oracle Fusion Middleware 11g and future versions to come. We particularly focus on core components such as:
    - the Installers and Configuration Wizards
    - Web Tier (Oracle HTTP Server)
    - OPMN
    - Enterprise Manager Console for Application Server
    as well as general questions/problems relating to patching, maintenance and architecture.

    Our Mission
    - Improve the customer experience
    - Enable customers to avoid/prevent issues when working with our products
    - Enable faster resolution of problems when they occur

    Our Activities
    - Enhancement and maintenance of our knowledge base; in particular, develop and maintain special content such as the Fusion Middleware Information Centers and Lifecycle Support Advisors
    - Seek continuous improvement of the product documentation
    - Contribute to the Fusion Middleware Support News
    - Moderation of the "Oracle Application Server" support community
    - Participate in the Support Advisor Webcast program
    - Involvement in the lifecycle of diagnostic tools such as RDA and OCM: user acceptance testing, logging of enhancements and health check ideas
    - Provide feedback to product management/development: logging of product bugs and enhancements
    - Suggest improvements that could be made to web sites like OTN
    - Promote new support documents and tools via channels such as the Newsletter and social media

    We hope that this blog will be a two-way communication, as we are interested in feedback on what we can improve. Many suggestions we can act on immediately, while others may take more time, but all of them will be acknowledged and followed up. Thank you for your time, and we look forward to both informing and working with you.

    Postscript: Many links you will find in our blog entries will require a login to My Oracle Support. For readers who do not have a login, please accept our apologies - when and where possible we will endeavour to ensure the links will supplement rather than replace wording in the blog entries.

    Read the article

  • How can a large, Fortran-based number crunching codebase be modernized?

    - by Dave Mateer
    A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase which he wrote in Fortran in the medical imaging field. It does a huge amount of number crunching using vectors. He uses a cluster (30ish cores) and has now gone towards a single workstation with 500ish GPUs in it. The question is where to go next with the codebase, so that:
    - other people can maintain it over the next 10-year cycle
    - tweaking the software gets faster
    - it can run on different infrastructures without recompiles
    After some research from me (this is a super interesting area), some options are:
    - Use Python and CUDA from Nvidia
    - Rewrite in a functional language, for example F# or Haskell
    - Go cloud-based and use something like Hadoop and Java
    - Learn C
    What has been your experience with this? What should my friend be looking at to modernize his codebase? UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's a perfect time in the project's lifecycle to do a review. Bringing research assistants up to speed in Fortran takes time (I like C#, especially the tooling, and can't imagine going back to older languages!!). I liked the suggestion of keeping the pure number crunching in Fortran, but wrapping it in something newer. Perhaps Python, as that seems to be gaining a stronghold in academia as a general-purpose programming language that is fairly easy to pick up. See Medical Imaging, and a guy who has written a Fortran wrapper for CUDA: Can I legally publish my Fortran 90 wrappers to Nvidia's CUFFT library (from the CUDA SDK)?

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >