Search Results

Search found 11321 results on 453 pages for 'shared libraries'.


  • Client/Server game even in solo: any big problem?

    - by Klaim
    I'm making a game whose core design is strongly based on multiplayer, but which should also offer a genuinely interesting, self-sufficient solo game, a bit like a real-time strategy game. The events and actions aren't as massive and immediate as in an FPS, so you can think of the networking as being like an RTS's. It's a PC game targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++, using a variety of open source libraries, so I have great (potential) control over performance. So far I had always assumed that simply making the game run as two applications, client & server, even in solo mode, was fine. However, as I'm in the process of starting the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:

    Pros:
    - The game works only one way, so if I fix a bug the fix applies to all game modes, whatever the distance to the server;
    - Basic networking issues would be detected early, including behaviour with protection software (firewalls) installed (I am no specialist, so this might be wrong).

    Cons:
    - I suppose that even if it should easily be fast enough, running client and server on the same computer over the network stack would still be slower than no networking at all, with messages passed within a single process's memory;
    - Maybe debugging would be more difficult? I don't have experience here, but so far I assume that Visual Studio lets me debug multiple processes, so it shouldn't be very different. There is also remote debugging.

    My question is: is there a big disadvantage that I missed? Or maybe there are advantages that I missed and that should encourage me to just continue with only client-server game sessions?
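
    For what it's worth, the single-code-path pro can be kept without paying the loopback cost by hiding the transport behind one interface, so solo mode swaps the socket for an in-process queue without touching the game logic. A minimal C++ sketch of the idea (all names here are invented for illustration):

        #include <queue>
        #include <string>

        // One transport interface for both modes; the game logic never
        // knows whether messages cross a real socket.
        struct Transport {
            virtual ~Transport() = default;
            virtual void send(const std::string& msg) = 0;
            virtual bool poll(std::string& out) = 0;  // false if nothing pending
        };

        // Solo mode: client and server live in one process and exchange
        // messages through two in-memory queues, no networking involved.
        class LoopbackTransport : public Transport {
        public:
            LoopbackTransport(std::queue<std::string>& outbox,
                              std::queue<std::string>& inbox)
                : outbox_(outbox), inbox_(inbox) {}
            void send(const std::string& msg) override { outbox_.push(msg); }
            bool poll(std::string& out) override {
                if (inbox_.empty()) return false;
                out = inbox_.front();
                inbox_.pop();
                return true;
            }
        private:
            std::queue<std::string>& outbox_;
            std::queue<std::string>& inbox_;
        };

        // Multiplayer mode would provide a SocketTransport implementing the
        // same interface on top of whatever networking library is chosen.

    The client gets one endpoint and the server the other, with the two queues crossed. Protocol bugs then get fixed once, in shared code, for both modes.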


  • How to educate business managers on the complexity of adding new features? [duplicate]

    - by Derrick Miller
    We maintain a web application for a client who demands that new features be added at a breakneck pace. We've done our best to keep up with their demands, and as a result the code base has grown exponentially. There are now so many modules, subsystems, controllers, class libraries, unit tests, APIs, etc. that it's starting to take more time to work through all of the complexity each time we add a new feature. We've also had to pull additional people in on the project to take over things like QA and staging, so the lead developers can focus on developing. Unfortunately, the client is becoming angry that the cost of each new feature is going up. They seem to expect that we can add new features ad infinitum and that the cost of each feature will stay constant. I have repeatedly tried to explain to them that it doesn't work that way: the code base expands in a fractal manner as all these features are added. I've explained that the best way to keep the cost down is to be judicious about which new features are really needed. But they either don't understand, or they think I'm bullshitting them. They just sort of roll their eyes and get angry. They're all completely non-technical, and have no idea what goes into writing software. Is there a way I can explain this using business language, that might help them understand better? Are there any visualizations out there that illustrate the growth of a code base over time? Any other suggestions on dealing with this client?


  • Why not XHTML5?

    - by eegg
    So, HTML5 is the Big Step Forward, I'm told. The last step forward we took, as far as I'm aware, was the introduction of XHTML. The advantages were obvious: simplicity, strictness, the ability to use standard XML parsers and generators to work with web pages, and so on. How strange and frustrating, then, that HTML5 rolls all that back: once again we're working with a non-standard syntax; once again we have to deal with historical baggage and parsing complexity; once again we can't use our standard XML libraries, parsers, generators or transformers; and all the advantages introduced by XML (extensibility, namespaces, standardization, and so on), which the W3C spent a decade pushing for good reasons, are lost. Fine, we have XHTML5, but it seems it has not gained popularity the way the HTML5 syntax has. See this SO question, for example. Even the HTML5 specification says that HTML5, not XHTML5, "is the format suggested for most authors." Do I have my facts wrong? Otherwise, why am I the only one who feels this way? Why are people choosing HTML5 over XHTML5?


  • Two Sessions All Humans Should Watch Right Now

    - by Geertjan
    At conferences, I definitely prefer technical sessions over any other kind. That's partly because I want to walk away from a conference with new libraries and APIs to play with, such as the AT&T ARO tool that I've been blogging about over the past few days, thanks to being introduced to it in a great session by Doug Sillars at Oredev, in Malmo, Sweden. I only say the above to set the scene. And the scene is that I avoid sessions that deal with "agile topics", or whatever that means. I mean those sessions where you're meant to reflect on some way you're developing nothing in particular and then come away with new ways of doing that. I avoid those. Not because I don't necessarily like them or think I have nothing to learn, both of which I don't (or do, depending on how you read double negatives), but because there are so many sessions to attend that I focus on those that actually give me more technical knowledge I can do something with immediately. Having said all that, here are two absolutely wonderful sessions (and probably many more, but I really liked these two) presented at Oredev over the last few days, one by JB Rainsberger and the other by Woody Zuill, both very nice people whom I met for the first time during the last few days, who aren't paying me to promote them, and who're still struggling to figure out how to say my name. Whether you're a developer or manager or whatever you are, take this on trust and simply watch these screencasts; at most you're going to lose two hours of your life that you would've spent doing something else. (The two session recordings were embedded here.) Speaking for myself, I'm going to be watching both of these presentations again several times in my life, that's for sure.


  • Do you do custom tests/work for a potential employer before the interview?

    - by Chuck Stephanski
    Every shop I've worked at has followed what I thought was a standard hiring sequence:

    1. Solicit resumes.
    2. Phone screen the applicants we are interested in.
    3. In-person interview.
    4. In some shops, a second in-person interview (just with the CEO, for example).

    Most places I apply to follow something along these lines. But some shops want to give me a test or ask me to build something after I send in my resume. They usually say, "Congratulations, you've made it to stage 2 of our hiring process!" But for all I know they are asking every applicant whose resume they receive to do the same thing. This annoys me because it doesn't scale: if I'm applying for a lot of positions, I can't spend all my time doing work that can potentially land me only one job. Plus, there's no shared commitment to the process. If we do a 45-minute phone screen, that's 45 minutes the company commits to me and 45 minutes I commit to the company, in hopes of a potential match. Do you have a policy regarding these sorts of requests?


  • Partitions for dual boot install with Windows

    - by Tim
    Here is the layout of the current partitions on my single hard drive, as seen from Windows 7:

    - C: holds the Windows 7 system files and my personal data;
    - Q: is for Lenovo recovery;
    - SYSTEM_DRV: is for the Windows boot files.

    My goals are:

    - to create another partition, D:, for my personal data, and dedicate C: to Windows system files and applications only;
    - to install Ubuntu alongside Windows. D: will be shared between the two OSes.

    My questions are:

    1. Is it correct that the free space gained from shrinking C: can only be used for an extended partition, since there are already 3 primary partitions? So must D: be a logical partition inside the extended partition, just as the partitions for Ubuntu will be? Will this cause problems later? If yes, are there better solutions?
    2. What are good utilities for these partitioning tasks? Can the Ubuntu installer handle them alone, or is it better to do some of the work in Windows with some recommended software?

    Thanks and regards!


  • Working with Git on multiple machines

    - by Tesserex
    This may sound a bit strange, but I'm wondering about a good way to work with Git from multiple machines networked together in some way. It looks to me like I have two options, and I can see benefits on both sides:

    - Use Git itself for sharing: each machine has its own repo and you fetch between them. You can work on either machine even if the other is offline, which by itself is pretty big, I think.
    - Use one repo that is shared over the network between machines. There is no need to do git pulls every time you switch machines, since your code is always up to date, and you never have to worry that you forgot to push code from your other, non-hosting machine, which is now out of reach, since you were working off a file share hosted on this machine.

    My intuition says that everyone generally goes with the first option. But the downside I see is that you might not always be able to access code from your other machines, and I certainly don't want to push all my WIP branches to GitHub at the end of every day. I also don't want to have to leave my computers on all the time just so I can fetch from them directly. Lastly, a minor point: all the Git commands needed to keep multiple branches up to date can get tedious. Is there a third way to handle this situation? Maybe some third-party tools are available that make this process easier? If you deal with this situation regularly, what do you suggest?
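
    One common middle ground is a bare repository on a file share or always-on box that both machines treat as a remote: you keep offline work on each machine, but there is a single meeting point that does not require either workstation to be reachable. A sketch with example paths and branch names (all commands are standard Git):

        # one-time setup: a bare repo on storage both machines can reach
        git init --bare /mnt/share/project.git

        # on each machine, add it as a remote of the existing work repo
        git remote add share /mnt/share/project.git

        # end of day on machine A
        git push share my-wip-branch

        # next morning on machine B
        git fetch share
        git checkout -B my-wip-branch share/my-wip-branch

    Because the shared repo is bare, there is no working tree on the file share to get out of sync, and the WIP branches never have to leave the local network.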


  • Is the addition of a duration to a date-time defined in ISO 8601?

    - by Benjamin
    I've been writing a date-time library and need to implement the addition of a duration to a date-time. If I add a one-month duration, P1M, to the 31st of March 2012, 2012-03-31, does the standard define what the result is? Because the resulting date (31st April) does not exist, there are at least two options:

    1. Fall back to the last day of the resulting month. This is the approach currently taken by the ThreeTen API, the (alpha) reference implementation of JSR-310:

        ZonedDateTime date = ZonedDateTime.parse("2012-03-31T00:00:00Z");
        Period duration = Period.parse("P1M");
        System.out.println(date.plus(duration).toString()); // 2012-04-30T00:00Z

    2. Carry the extra day over into the next month. This is the approach taken by the DateTime class in PHP:

        $date = new DateTime('2012-03-31T00:00:00Z');
        $duration = new DateInterval('P1M');
        echo $date->add($duration)->format('c'); // 2012-05-01T00:00:00+00:00

    I'm surprised that two date-time libraries contradict each other on this point, so I'm wondering whether the standard defines the result of this operation.


  • RequireJS: JavaScript for the Enterprise

    - by Geertjan
    I made a small introduction to RequireJS via some of the many cool new RequireJS features in NetBeans IDE. I believe RequireJS, and the modularity and encapsulation and loading solutions that it brings, provides the tools needed for creating large JavaScript applications, i.e., enterprise JavaScript applications. (A screencast was embedded here; sorry for the wobbly sound in it.) An interesting comment by my colleague John Brock on the above: One other advantage that RequireJS brings is called lazy loading of resources. In your first example, every one of those .js files is loaded when the first file is loaded in the browser. By using the require() call in your modules, your application will only load the JavaScript modules when they are actually needed. It makes for faster startup in large applications. You could show this by showing the libraries that are loaded in the Network Monitor window. So I did as suggested: (screenshot embedded here). Click the screenshot to enlarge it and notice how the Network Monitor is helpful in the context of RequireJS troubleshooting.
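
    For readers who haven't seen the lazy-loading pattern John describes, here is a minimal AMD sketch (the module names are invented): the module declares its always-needed dependency up front and fetches the heavy one on demand with a nested require() call.

        // main.js: "core" loads immediately; "editor" is fetched on first use
        define(["core"], function (core) {
            return {
                openEditor: function () {
                    require(["editor"], function (editor) {
                        editor.open(core.currentDocument());
                    });
                }
            };
        });

    In the Network Monitor, editor.js (and anything only it depends on) shows up only when openEditor() first runs.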


  • Independent HTML5 Physics Game: Any Feedback? [closed]

    - by mndoftea
    I've been independently developing a physics-based HTML5 game. I haven't used any libraries or engines; all the code, including the physics, is my own. It is free for a while on the Chrome Web Store and I was hoping that I could get some feedback on it. You can get it for Chrome here: https://chrome.google.com/webstore/detail/dbnmkpcomailjochphnmfklofkmgenci. I know this is not a normal question, but I'm happy for answers to be abstracted/generalized for broader use. I'm asking here because I don't know anyone else personally who does this stuff. Any thoughts, comments or ideas you might have would be greatly appreciated! The physics system is written in JavaScript and works by setting up the differential equations of motion (plus a few conditions) and evaluating them numerically using the Euler method. The graphics are done through the HTML5 canvas and the music is done through the audio element. (Said music is in the public domain, by the way.) You can see the code by going to View > View Source in Chrome.
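
    For readers unfamiliar with the integration scheme mentioned, here is a minimal sketch of an explicit Euler step for one body, in JavaScript to match the post (all names are invented; nothing here is taken from the game's actual source):

        // dt: timestep in seconds; body: {x, y, vx, vy, ax, ay}
        function eulerStep(body, dt) {
            body.x += body.vx * dt;   // advance position along the current velocity
            body.y += body.vy * dt;
            body.vx += body.ax * dt;  // then advance velocity along the acceleration
            body.vy += body.ay * dt;
        }

    Called once per frame for each body, with the extra conditions (collisions and so on) applied afterwards, this is the basic shape of the method.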


  • Creating movement path displays in a top-down 2d RTS

    - by nihohit
    My game is a top-down 2D RTS coded in C# using SFML's libraries. I want a selected unit to display its movement path on the map. Currently, after the path is computed as a list of directions ({left, up, down, down, down, left}, as an example), it's sent to the graphical component to create its UI equivalent, and here I'm having some problems. So far I've considered three ways to do it:

    1. Compute the size of the image (in the example above it'll be a 3*2 rectangle), create an invisible rectangle, and then go over the directions list and mark each spot with a visible point, so as to get a continuous line. This is slightly problematic because of the number of large images I need to save, but mostly because I have a lot of fine detail onscreen, and a continuous line obstructs the view.
    2. Again compute the size of the image, but now create several (let's say 4) invisible images of that size; then, instead of a single continuous line, I switch between the four images, each showing only a fourth of the spots, in a way that creates a path animation. This is nicer on the eye, but here the memory demands, and the time needed to compute each such image loop, are significant.
    3. Just create a list of single markers, each on a different spot on the path. This is very quick and easy on memory, but too sparse.

    Is there a simple or resource-light system for creating path animations?


  • Workflow: Deploy Operating Systems

    - by Owen Allen
    The Deploy Operating Systems workflow is a workflow document that we added recently. It shows you how to get operating systems up and running in your environment. It's mostly linear, but it's a bit more complicated than some of the others. It's built around a pair of images. In both images, the left side shows the prerequisites for the whole process: before you can deploy operating systems, you have to have Ops Center fully installed, with libraries set up and hardware already discovered. Once you've done that preparation, the first image walks you through all of the OS deployment steps. First you discover existing operating systems, then you provision Oracle Solaris 10 or Oracle Solaris 11. If you're not planning on using virtualization, your deployment is done, and you're directed to the operate workflows. If you are interested in virtualization, though, you go on to the second image, which walks you through deploying virtualization, sending you to the Deploying Oracle Solaris 10 Zones, Deploying Oracle Solaris 11 Zones, or Deploying Oracle VM Server for SPARC workflows, depending on what kind of virtualization you're planning on using. Once you've done that, you're ready to go on to the operate workflows.


  • CLR Profiler Allocated Bytes and XNA ContentManager

    - by Vackup
    I've been fighting with the XNA ContentManager and memory allocations for some weeks because I'm trying to port my game from XNA (Windows) to ExEn/MonoTouch (iPhone). The problem is that after playing a few levels, my game exits unexpectedly on a real iPhone device (not the simulator). Profiling memory usage on Windows with CLR Profiler, I found some useful stuff, but I also found something I don't understand. If I use 2 ContentManagers (1 for shared assets and 1 for level assets), then when profiling, "Allocated Bytes" grows and grows from level to level, but memory consumption as measured by the Windows Task Manager stays constant (down when I unload the content manager and up again when I load content). Obviously, I call contentManager.Unload() when a level ends. After a few levels my game exits unexpectedly on the iPhone device. If I use 1 ContentManager, CLR Profiler's "Allocated Bytes" stays constant on Windows, and on the iPhone I can play the game normally; it doesn't exit unexpectedly. I use the same assets from level to level. It seems like on iOS (iPhone), loading and unloading the same assets keeps allocating memory until all device memory is consumed, so iOS kills the game. Can anybody explain to me how this really works? I've read quite a bit, but I still don't understand what's going on.
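
    For context, the two-manager pattern described above usually looks something like this minimal sketch inside a Game subclass (field and asset names are invented; ContentManager, Load<T> and Unload are the standard XNA APIs):

        ContentManager sharedContent;  // lives for the whole game
        ContentManager levelContent;   // emptied every time a level ends
        Texture2D hudTexture, bossTexture;

        protected override void LoadContent()
        {
            sharedContent = new ContentManager(Services, "Content");
            levelContent = new ContentManager(Services, "Content");
            hudTexture = sharedContent.Load<Texture2D>("hud");            // shared asset
        }

        void LoadLevel(string name)
        {
            bossTexture = levelContent.Load<Texture2D>(name + "/boss");  // per-level asset
        }

        void EndLevel()
        {
            // frees everything levelContent ever loaded; shared assets stay resident
            levelContent.Unload();
        }

    With this pattern in place, one thing worth verifying on the iOS side is that no reference to a levelContent-loaded asset survives past Unload(), since anything still referenced cannot actually be reclaimed.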


  • Fast, accurate 2d collision

    - by Neophyte
    I'm working on a 2D top-down shooter, and now need to go beyond my basic rectangle bounding-box collision system. I have large levels with many different sprites, all of which are different shapes and sizes. The textures for the sprites are all square PNG files with transparent backgrounds, so I also need a way to register a collision only when the player walks into the coloured part of the texture, not the transparent background. I plan to handle collision as follows:

    1. Check whether any sprites are in range of the player.
    2. Do a rectangular bounding-box collision test.
    3. Do an accurate collision test (where I need help).

    I don't mind advanced techniques, as I want to get this right with all my requirements in mind, but I'm not sure how to approach this, or which techniques or even libraries to try. I know that I will probably need to create and store some kind of shape that accurately represents each sprite minus the transparent background. I've read that per-pixel testing is slow, so given my large levels and number of objects I don't think it would be suitable. I've also looked at Box2D, but haven't been able to find much documentation, or any examples of how to get it up and running with SFML.
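
    One standard compromise between plain rectangles and naive per-pixel testing is to precompute a boolean mask from each texture's alpha channel at load time, and only consult it after the cheap bounding-box test has passed. A rough C++ sketch of the idea using SFML's sf::Image (the alpha threshold is arbitrary, and sprites are assumed unrotated and axis-aligned):

        #include <SFML/Graphics.hpp>
        #include <algorithm>
        #include <vector>

        // Precomputed once per texture: true where the pixel is "solid".
        struct Mask {
            unsigned w = 0, h = 0;
            std::vector<bool> solid;

            explicit Mask(const sf::Image& img, sf::Uint8 threshold = 10)
                : w(img.getSize().x), h(img.getSize().y), solid(w * h) {
                for (unsigned y = 0; y < h; ++y)
                    for (unsigned x = 0; x < w; ++x)
                        solid[y * w + x] = img.getPixel(x, y).a > threshold;
            }
            bool at(unsigned x, unsigned y) const { return solid[y * w + x]; }
        };

        // Narrow phase: scan only the pixels inside the AABB overlap region.
        bool pixelCollide(const Mask& a, sf::Vector2i posA,
                          const Mask& b, sf::Vector2i posB) {
            int left   = std::max(posA.x, posB.x);
            int top    = std::max(posA.y, posB.y);
            int right  = std::min(posA.x + (int)a.w, posB.x + (int)b.w);
            int bottom = std::min(posA.y + (int)a.h, posB.y + (int)b.h);
            for (int y = top; y < bottom; ++y)
                for (int x = left; x < right; ++x)
                    if (a.at(x - posA.x, y - posA.y) && b.at(x - posB.x, y - posB.y))
                        return true;
            return false;
        }

    Because the mask is bit-sized and only the overlap rectangle is scanned after the broad phase, this is usually fast enough; if it still isn't, the usual next step is approximating each sprite with a convex hull or a few convex polygons instead.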


  • Java and .NET cost of use [on hold]

    - by 1110
    I have worked with the .NET technology stack for about 4 years. I am learning and enjoy working with the ASP.NET MVC framework, and I have never done anything serious in other languages. This is not a "which is better" question (I have read all the similar questions). What interests me is the cost of switching. Suppose you were about to start a start-up company today, in my situation: not too much money, a good idea that you think others will use, and knowledge of .NET. In my head I have a few questions that I can't answer, and I know that somebody with experience can:

    1. Java and .NET hosting. Suppose shared hosting is not good enough anymore: your site has grown and you need more resources. How much cheaper is hosting Java services compared to .NET?
    2. I didn't follow the long-running hype about Oracle killing Java. Does Oracle show interest in investing in Java? I mean, is it safe to bet on Java as a technology when starting a start-up, or has Oracle shown some will to let the platform die?
    3. I am not sure what I am asking here: with Java you can use the Java EE stack, or Java with a third-party stack (Spring, Hibernate, Maven, etc.). I saw a lot of projects take the second option when the web application is not enterprise-level but, for example, a social networking site. Which stack is the best pick?

    In summary: is it safe to jump into Java, learn it, and build a product based on it? It's not too hard for me to learn, but how much can I get from it?


  • Help with design structure choice: Using classes or library of functions

    - by roverred
    So I have a GUI class that calls another class, ImageProcessor, which contains a bunch of functions that perform image-processing algorithms like edge detection, Gaussian blur, contour finding, contour map generation, etc. The GUI passes an image to ImageProcessor, which performs one of those algorithms on it and returns the image to the GUI to display. So essentially ImageProcessor is a library of independent image-processing functions right now. It is called in the GUI like so:

        Image image = ImageProcessor.EdgeDetection(oldImage);

    Some of the algorithms require many functions, and some can be done in a single function or even one line. All these functions jam-packed into ImageProcessor can get pretty messy, and ImageProcessor doesn't sound like it should be a library. So I was thinking about making every algorithm a class with a shared interface, say IAlgorithm. Then I pass the IAlgorithm interface from the GUI to the ImageProcessor:

        public interface IAlgorithm {
            Image Process();
        }

        public class ImageProcessor {
            public static Image Process(IAlgorithm theAlgorithm) {
                return theAlgorithm.Process();
            }
        }

    Called in the GUI like so:

        Image image = ImageProcessor.Process(new EdgeDetection(oldImage));

    I think it makes sense from an object-oriented point of view, but the problem is that I'll end up with some classes that are just one function. What do you think is the better design? Or are they both crap and you have a much better idea? Thanks!


  • How do I properly implement zooming in my game?

    - by Rudy_TM
    I'm trying to implement a zoom feature, but I have a problem. I zoom the camera in and out with a pinch gesture and update the camera on each render, but my sprites keep their original positions and don't change with the zoom in or zoom out. The libraries are from libgdx. What am I missing?

        private void zoomIn() {
            ((OrthographicCamera) this.stage.getCamera()).zoom += .01;
        }

        public boolean pinch(Vector2 arg0, Vector2 arg1, Vector2 arg2, Vector2 arg3) {
            zoomIn();
            return false;
        }

        public void render(float arg0) {
            this.gl.glClear(GL10.GL_DEPTH_BUFFER_BIT | GL10.GL_COLOR_BUFFER_BIT);
            ((OrthographicCamera) this.stage.getCamera()).update();
            this.stage.draw();
        }

        public boolean touchDown(int arg0, int arg1, int arg2) {
            this.stage.toStageCoordinates(arg0, arg1, point);
            Actor actor = this.stage.hit(point.x, point.y);
            if (actor instanceof Group) {
                ((LevelSelect) ((Group) actor).getActors().get(0)).touched();
            }
            return true;
        }

    (Screenshots were attached here: zoom in, zoom out.)
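
    Not an answer to the sprite-position issue as such, but one libgdx detail worth double-checking: OrthographicCamera.zoom is a viewport scale, so values above 1 show more of the world (zoom out) and values below 1 show less (zoom in); a zoomIn() that does zoom += .01 therefore moves in the "out" direction. A pinch is also often simpler to handle through GestureDetector's zoom callback, which hands you the finger distances directly. A sketch under those assumptions (the clamp bounds and the startZoom field are invented):

        // remember the zoom level when the gesture starts
        private float startZoom = 1f;

        @Override
        public boolean touchDown(float x, float y, int pointer, int button) {
            startZoom = ((OrthographicCamera) stage.getCamera()).zoom;
            return false;
        }

        @Override
        public boolean zoom(float initialDistance, float distance) {
            OrthographicCamera cam = (OrthographicCamera) stage.getCamera();
            // fingers spreading apart => distance grows => zoom value shrinks => zoom in
            cam.zoom = MathUtils.clamp(startZoom * initialDistance / distance, 0.25f, 2f);
            cam.update();
            return true;
        }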


  • How to set individual NTFS partitions permissions behaviour for each user account?

    - by ryniek
    I have two NTFS partitions (DOWNLOADS for downloaded files and VM for my VirtualBox .vdi file) for which I must have full permissions under my everyday account; they should also automount when I log in to that account. I've also set up a Guest account for guests. For the Guest account, I want the VM partition fully disabled and invisible (so it must not automount), while the DOWNLOADS partition should be shared with the Guest account with limited privileges. By editing fstab I'm able to share the DOWNLOADS partition with limited privileges, but VM can only be set to limited privileges with automounting disabled: the guest can't mount it, but it can still be seen in Nautilus, and I must always mount it manually when I log in to my everyday account. Is there some trick to achieve what I want? Here's my fstab config:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # Entry for /dev/sda1 :
        UUID=35e66658-5ee9-40cf-bf56-8204959e3df0 / xfs defaults 0 1
        # Entry for /dev/sda2 :
        UUID=26c714cf-4236-45e7-9c46-cfcf91a215ae /home xfs defaults 0 2
        # Entry for /dev/sda5 :
        UUID=1315BCB027C44639 /media/DOWNLOADS ntfs-3g auto,uid=1000,gid=1000,umask=0022,nodev,locale=pl_PL.utf8 0 0
        # Entry for /dev/sda6 :
        UUID=60FF39EB72B72264 /media/VM ntfs noauto,uid=1000,gid=1000,umask=0077,nodev,locale=pl_PL.utf8 0 0
        # Entry for /dev/sda7 :
        UUID=c52411f5-105c-45d1-971f-412f962c350e none swap sw 0 0
        /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0

    Thanks


  • How to access Ubuntu Server from local PC?

    - by Roland
    Today I installed my first web server, running Ubuntu 12.04 LTS. I got Apache, PHP and MySQL working; there is even phpMyAdmin! Everything works fine on that PC, but the problem is that I have no idea how to connect to this server from my PC. Just to clarify: I have one PC that I work on, and another one running Ubuntu Server. I even managed to connect them through the router, which I made to work as a switch. I can see the Ubuntu Server on my Windows PC in "Network", but it's empty; I can't see any files. I tried to share the folder etc/www on the server, but it shows an error saying something about rights, that I'm not the folder's owner. I guess I'm not doing the right thing at all, am I? Even if I could see the shared folder on my Windows PC, I would still not be able to type "somedomain.com" on the Windows PC and access, for example, index.php or the MySQL database. So, the question is: how do I configure Ubuntu Server to be accessible from a Windows PC?
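
    A minimal way to reach the web server over HTTP, assuming both machines sit on the same subnet (the address and hostname below are examples to adapt):

        # on the Ubuntu server: find its LAN address
        ifconfig            # look for "inet addr:" on eth0, e.g. 192.168.1.10

        # on the Windows PC: browse to http://192.168.1.10/
        # for a friendly name, add one line to
        # C:\Windows\System32\drivers\etc\hosts :
        192.168.1.10    myserver.local

    On Ubuntu 12.04, Apache serves /var/www by default, so http://192.168.1.10/index.php should work without any Windows file sharing; a Samba share is only needed if you want to edit the server's files from Windows.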


  • When creating a GUI wizard, should all pages/tabs be of the same size? [closed]

    - by Job
    I understand that some libraries would force me to, but my question is general. If I have a set of buttons at the bottom (Back, Next, Cancel?, other?), should their location ever change? If the answer is no, then what do I do about pages with little content? Do I stretch things? Place them alone in the upper left corner? According to Steve Krug, it does not make sense to add anything to a GUI that does not need to be there. I understand that there are different approaches to wizards: some have tabs, others do not; some tabs are lined up horizontally at the top, others vertically on the left; some wizards do not show pages/tabs at all and are simply sequences of dialogs. This is probably a must when the wizard is "non-linear", i.e. some earlier choices can result in branching. Either way the problem is the same: sacrifice the consistency of the "big picture" (the outline of the page/tab plus the location of the buttons), or the consistency of the details (some tabs might be somewhat packed, others have very little content). A third choice, I suppose, is putting extra effort into organizing the content so that it is more or less evenly distributed from page to page. However, this can be difficult (say, when the very first tab contains only a choice of three things and then branches off from there; there are probably other examples), and the balance is hard to maintain if any of the content changes later. Can you recommend a good approach? A link to a relevant blog post or book chapter is also welcome. Let me know if you have questions.


  • How to encourage domain experts familiar only with C into a C++ opensource project [closed]

    - by paperjam
    I am launching an open-source project into a space where a lot of development is done Linux-kernel-style, i.e. C-language with a low-level mindset. My project is broad and complex and uses aspects of the C++ language and libraries, including Boost, to best effect: simple, slightly syntactically sweetened, elegant, well-structured high-level code. We also use C++ templates to avoid duplication of code and for static polymorphism when specialising code for performance. Many of the experts in this field are well used to pure C-language projects. How can I persuade them to contribute to my idiomatic C++ based project? I have no objection to C-language subcomponents or the use of a C-like subset for parts of the project, so that might be part of the answer. This is a rewritten and retagged rehash of my previous question, which was closed. Apologies to those who read and answered it for it not being constructive; I hope this new question is viewed as constructive. Please note that this is not a language advocacy question; please keep answers in that spirit.


  • NFS mount of /var/www to OS X

    - by ploughguy
    I have spent 2 hours trying to create an NFS mount from my Ubuntu 10.04 LTS server to my OS X desktop system. The objective: a three-way file compare between the code base on the Mac, the development system on the local Linux test box, and the hosted website. The hosting service uses cPanel, so I can mount a web disk: easy as pie, took 10 seconds. The local Ubuntu box, on the other hand: nothing but pain and frustration. Here is what I have tried:

    1. In File Browser, navigate to /var/www/site and right-click. Select "Share this folder". Enter the share name wwwsite and a comment. Click the "Create Share" button. A message says you can only share file systems you own. There is a message on how to fix this, but the killer is that this shares over SMB, which will change the LFs to CR-LFs and so break the file comparison. So forget this option.
    2. In a terminal window, run shares-admin (I have not been able to convince it to give me the "Shared Folders" option in the System Administration window; maybe it is somewhere else in the menu, but I cannot find it) and define an NFS export. Enter the path /var/www/site, select NFS, enter the IP address of the iMac, and save.
    3. On the Mac, try to mount the file system using the usual methods (Finder, the command-line mount command): not found. Nothing.
    4. Restart the Linux box in case there is a daemon that needs restarting: nothing.

    So I have run out of stuff to do. The documentation I have found is pretty basic, and the man pages are as opaque as ever. Please, oh please, will someone help me get this @38&@^# thing to work! Thanks for reading this far... PG.
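
    For reference, here is the command-line route that the GUI tool is wrapping, as a sketch (the export path is from the post; the network address and options are examples to adapt):

        # on the Ubuntu server
        sudo apt-get install nfs-kernel-server
        echo '/var/www/site 192.168.1.0/24(ro,all_squash)' | sudo tee -a /etc/exports
        sudo exportfs -ra          # re-read /etc/exports
        showmount -e localhost     # confirm the export is visible

        # on the OS X desktop
        mkdir -p ~/nfs/site
        sudo mount -t nfs -o resvport 192.168.1.10:/var/www/site ~/nfs/site

    The -o resvport option matters on OS X: its NFS client uses a non-reserved source port by default, which Linux's NFS server rejects unless the export is marked insecure, and a missing resvport is a common cause of exactly the silent "nothing mounts" behaviour described above.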


  • How to fix Sketchup in Wine when tool starts, but displays empty workspace?

    - by Chaos_99
    I've installed Wine 1.6 and winetricks on a Linux Mint 15 system, then downloaded the latest SketchUp 2013 'Make' Windows installer and installed it through Wine. I prepared the Wine environment by starting with WINEARCH=win32, installed corefonts and ie8, and enabled the override for the 'riched20' libraries. (I've no idea what the last bit does, but it was advised in some guides.) I've also tried without these steps; only win32 seems to make a difference, as the installer complains about not finding SP2 otherwise. SketchUp installs successfully and starts, but displays an empty viewport. The program is responsive and everything works; it's just that you can't see anything. I don't get any OpenGL error, and the registry entries seem fine according to the OpenGL-issue workarounds floating around the net. I still think it has something to do with OpenGL not working properly, maybe not in the Wine environment, but in the Linux system? I'm running on a Lenovo W520 with NVIDIA/Intel hybrid cards, but only the NVIDIA card is active and the proprietary NVIDIA (319) drivers are installed. glxgears runs fine, but clamps at 2x the refresh rate. glxinfo outputs:

        direct rendering: Yes
        server glx vendor string: NVIDIA Corporation
        server glx version string: 1.4

    I'm willing to try any Linux or Wine OpenGL tests to narrow down the problem, if you can offer any advice on what to use.
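
    For anyone wanting to reproduce or debug a setup like this, the steps described above correspond roughly to the following sketch (the prefix path and install path are examples; corefonts, ie8 and riched20 are standard winetricks verbs):

        export WINEARCH=win32 WINEPREFIX="$HOME/.wine-sketchup"
        winetricks corefonts ie8 riched20   # riched20 also sets the native override
        wine SketchUpMake2013.exe           # installer name as downloaded

        # confirm the NVIDIA GL stack is the one actually in use
        glxinfo | grep -i "opengl renderer string"

        # run SketchUp with Wine's WGL debug channel to capture GL setup errors
        WINEDEBUG=+wgl wine "C:/Program Files/SketchUp/SketchUp 2013/SketchUp.exe" 2> sketchup-wgl.log

    If the renderer string reports a Mesa or Intel driver rather than NVIDIA, the problem is in the Linux GL setup rather than in Wine.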


  • Isn't Java a quite good choice for desktop applications?

    - by tactoth
    At present most desktop applications are still developed with C++, painfully: lack of portability, incompatible libraries, memory leaks, slow compilation and poor productivity. Even if you pick only one of these shortcomings, it's still a big headache. However, the surprising truth is that C++ remains the first choice for desktop applications. Compared to C++, Java has lots of advantages. Its success in server-side development shows that the language itself is good, and Swing is also thought to be as programmer-friendly as the highly regarded Qt framework (no, never say even a single word about MFC!). All the disadvantages of C++ listed above have a solution in Java. "Performance!", you say. Well, that might still be a problem, but in my experience it's a slight one. I once used Java to decode some screen video and generate key frames; the video had a duration of more than 1 hour, and the time spent on an average machine was just 1 minute. With C++ I wouldn't expect even faster speed. In recent days there has been a lot of news about JIT performance improvements, which makes me feel Java is gradually becoming very suitable for desktop development, without people realizing it. Isn't it?


  • Why am I seeing so many instantiable classes without state?

    - by futlib
    I'm seeing a lot of instantiable classes in the C++ and Java world that don't have any state. I really can't figure out why people do that; they could just use a namespace with free functions in C++, or a class with a private constructor and only static methods in Java. The only benefit I can think of is that you don't have to change most of your code if you later decide that you want a different implementation in certain situations. But isn't that a case of premature design? It could be turned into a class later, when/if that becomes appropriate. Am I getting this wrong? Is it not OOP if I don't put everything into objects (i.e. instantiated classes)? Then why are there so many utility namespaces and classes in the standard libraries of C++ and Java? Update: I've certainly seen a lot of examples of this in my previous jobs, but I'm struggling to find open source examples, so maybe it's not that common after all. Still, I'm wondering why people do it, and how common it is.
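
    For concreteness, here is a minimal Java sketch of the two shapes the post contrasts (all names invented); the second form is stateless too, but because it is an instance behind an interface it can be swapped or mocked later, which is usually the stated motivation:

        // Form 1: pure utility, never instantiated
        final class PathUtils {
            private PathUtils() {}  // no instances
            static String join(String a, String b) { return a + "/" + b; }
        }

        // Form 2: stateless but instantiable, so callers can depend on the
        // interface and receive a different implementation (or a test double)
        interface Checksum {
            long of(byte[] data);
        }

        final class Adler32Checksum implements Checksum {
            public long of(byte[] data) {
                java.util.zip.Adler32 a = new java.util.zip.Adler32();
                a.update(data, 0, data.length);
                return a.getValue();
            }
        }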

