Search Results

Search found 37931 results on 1518 pages for 'computer case'.


  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I've found a bit annoying.

    Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I'm not responsible. No warranties or guarantees of any sort. This info is provided "as is".

    By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it's a good idea to default to 9_1. To do this you could add new HLSL files via "Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)" and then edit the shader files' properties to set them manually to 9_1 via "Properties->HLSL Compiler->General->Shader Model". This is fine unless you forget to do it once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you'd wind up with either a rejection or angry "this doesn't work on my computer! ripoff!" messages.

    There's another option though. In "Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader" (the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a "VertexShader.vstemplate" file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.:

        <CustomParameter Name="$ShaderType$" Value="Vertex"/>

    On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this:

        <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>

    such that we now have:

        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Vertex"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>

    You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the "PixelShader" directory. Edit the "PixelShader.vstemplate" file and make the same change (note that this time $ShaderType$ is "Pixel" not "Vertex"; you shouldn't be changing that line anyway, but if you were to just copy and replace the above four lines you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result). Once you've added the $ShaderModel$ line to "PixelShader.vstemplate" and saved it, everything should be done.

    Since Feature Levels 9_1 and 9_3 don't support any of the other shader types, those are already set to default to their appropriate minimums: Compute and Geometry are set to "4.0" and Domain and Hull are set to "5.0", which are their respective minimums. (Not all 4.0 cards support Compute shaders, though; they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware.)
    In case you are wondering where these magic values come from, you can find them all in the "fxc.xml" file in the "\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033" directory (or whatever your language number is; 1033 is ENU and other product languages have their own respective numbers, listed at http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx, so Japanese is 1041 for example, though for all I know the MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files on the drive to which you installed VS 2012 (D:\ in my case), but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml; you will almost certainly break things by doing that. It's just something you can look through to see all the other options that the FXC task takes, such that you could, if needed, add further CustomParameter tags if you wanted to default other supported options as well. I haven't tried any others though, so I don't have any advice on how to set them.
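    For reference, a sketch of what the edited block in "PixelShader.vstemplate" should end up looking like (only the $ShaderModel$ line is new; the existing $ShaderType$ line stays as "Pixel"):

        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Pixel"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>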

    Read the article

  • College for Game Development [closed]

    - by Cole Adams
    I am currently a freshman Computer Science major at Samford University, but I am realizing that the field I actually want to get into is game development. I go to all of these classes that are supposed to make you well-rounded but have nothing to do with what I want to do, and frankly, after 18 years of schooling, I am sick of having to sit in classes like that. I want to go to a game design/development school where that is the priority and I am not overburdened with useless classes. At this point I am so tired of the Samford classes that I am heavily considering taking next semester off, getting a job, and focusing on learning programming on my own, or something like that. My question is: what would be some good schools to apply to for enrollment in 2013, and what does it take to get into them? Thanks in advance.

    Read the article

  • Connecting to an Amazon AWS database [closed]

    - by Adel
    So I'm a bit overwhelmed/bewildered by the whole concept of networking / remote desktop, etc. The context is that in my company I need to access a remote database. The standard way I use is to first connect using a VPN client (called Shrew Soft Access Manager); once that says "network device configured, tunnel enabled" I'm good to connect using the Windows "Remote Desktop Connection". But now our company has set up an Amazon AWS database, and I'm told I need to connect and only need to use RDP. So I tried the standard Windows one, but it doesn't work. On Wikipedia I looked up remote desktop software and downloaded one called VNC Viewer, but it doesn't work either. Any advice/tips/comments appreciated. EDIT: Yay! I finally got a little further. I had to use my username as a fully qualified name: Computer: XYZ.XYZ.XYZ.XYZ Username: XYZ.XYZ.XYZ.XYZ\aazzam

    Read the article

  • Temporary dimming of desktop in 12.04

    - by deshmukh
    I am running Ubuntu 12.04 (an almost-default install, regularly updated) with the Unity interface on an ASUS X53U (AMD Brazos dual-core C60 with 2GB RAM). On launching Thunderbird and Firefox, the application dims and the cursor changes to wait mode. In the case of Thunderbird this is most pronounced, with waits of up to a minute. Memory status checked with free indicates around 500MB of free memory on such occasions. The OS is stable and I can switch to a different workspace, etc. What could this be? Is it something normal?

    Read the article

  • Disable suspend / hibernate via the policykit

    - by redonath
    I am trying to run Lubuntu 12.04, but if the computer suspends I am unable to boot up again. Instead I see the BIOS POST, the hard disk light flickers once, and I have to reinstall (I have tried re-installing GRUB 2). I am new to Linux, and what I found that best answered my question was posted by James Henstridge. The instructions say to create disable-shutdown.pkla in /etc/polkit-1/50-local.d/, but this directory does not exist. Do I create a folder titled 50-local.d in polkit-1, or do I have to place this file elsewhere?
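    For reference, a minimal sketch of that approach: on most releases the .pkla files live under /etc/polkit-1/localauthority/50-local.d/ (note the localauthority component, which may be why the quoted path did not seem to exist), and it is fine to create the directory if it is missing. The upower action names below are an assumption for 12.04 and may differ on other releases:

        sudo mkdir -p /etc/polkit-1/localauthority/50-local.d
        sudoedit /etc/polkit-1/localauthority/50-local.d/disable-suspend.pkla

    and put this in the file:

        [Disable suspend and hibernate]
        Identity=unix-user:*
        Action=org.freedesktop.upower.suspend;org.freedesktop.upower.hibernate
        ResultActive=no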

    Read the article

  • Decrease filesize when resizing with mogrify

    - by plua
    I love the command-line options of ImageMagick. Mogrify is great for resizing images and changing quality, which is what I use it for most often. However, I have noticed that the file size is often larger than it should be, especially with small images. For instance, I have a regular 640px-wide photo, which I convert to quality 80 and a width of 80px: mogrify -quality 80 -resize 80 file.jpg. This works well: my image gets resized and the quality is changed to 80. However, the file size is around 40Kb. For such a tiny image, that is huge! When I use mtPaint and simply open the file and save it (not changing anything, just Ctrl+O, Ctrl+S), the file size decreases by more than 95% to less than 2Kb! I have seen this is often the case. What goes wrong?
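    The usual culprit is metadata: mogrify keeps the original EXIF block, embedded thumbnail and colour profiles, which can easily dwarf an 80px JPEG. A hedged sketch of the same command with the profiles stripped (-thumbnail is an alternative that resizes and discards most metadata in one step):

        mogrify -strip -quality 80 -resize 80 file.jpg
        # or
        mogrify -quality 80 -thumbnail 80 file.jpg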

    Read the article

  • Knowing so much, but applying it is a problem?

    - by Moaz ELdeen
    At work, my friends always tell me, "you know so much about computer science, electronics engineering, etc." But I have difficulty applying that knowledge, and my code is crap. How do I solve that problem? Will I get better, or is programming just not my career? For example, yes, I know about octrees, which are used for space partitioning in games and for optimization. Did I implement one? No, but I know the principle. Do I know algorithms like sorting, searching, etc.? Yes, and I know them pretty well, but I didn't implement them. When I get a task, I struggle to apply the things that I know...

    Read the article

  • Joining Two MKV files in Ubuntu?

    - by Ryan McClure
    I have an opera that I'm ripping to my computer in MKV format with HandBrake. The opera is on two discs. Is there a way to join the resulting MKVs together? They will have the same bitrate, resolution, etc. If I do this, can I keep the chapters from both MKV files organized? And since I have subtitles in the files (not burnt in), will they stay intact? I'm not too sure whether this question is off-topic or not. If it is, feel more than free to delete it. :)
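    One commonly used tool for this is mkvmerge from the mkvtoolnix package; a sketch with placeholder file names, where the + tells mkvmerge to append the second file rather than add it as extra tracks. Chapters and soft subtitles from both parts should be carried over, with the second disc's timestamps shifted accordingly:

        sudo apt-get install mkvtoolnix
        mkvmerge -o opera-complete.mkv disc1.mkv + disc2.mkv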

    Read the article

  • Naming: objectAction or actionObject?

    - by DocSalvage
    The question "Stored procedure naming conventions?" and Joel's excellent "Making Wrong Code Look Wrong" article come closest to addressing my question, but I'm looking for a more general set of criteria to use in deciding how to name modules containing code (classes, objects, methods, functions, widgets, or whatever). English (my only human language) is structured as action-object (e.g. closeFile, openFile, saveFile), and since almost all computer languages are based on English, this is the most common convention. However, in trying to keep related code close together and still be able to find things, I've found object-action (e.g. fileClose, fileOpen, fileSave) to be very attractive. Quite a number of non-English human languages follow this structure as well. I doubt that one form is universally superior, but when should each be used in the pursuit of helping to make sure bad code looks bad?

    Read the article

  • How to handle shoot instructions, in a multiplayer TD

    - by Martin Elvar Jensen
    I'm currently working on a multiplayer tower defense game, using ImpactJS & Node. I'm looking for some clarification on how to handle projectiles from towers; let me explain. The server runs the master game, and the clients just follow instructions from the server. Let's say there are about 20 towers on the stage, all of which need instructions for which creeps to shoot at. Now let's say each tower fires twice per second; that's 40 shots each second (worst-case scenario), which is 40 messages per second to each client. Wouldn't this cause a lot of stress on the server, given that we have 50 games running at the same time? So what I am really asking is: is this method inefficient, and is there a smarter way to handle all these instructions? Thank you.

    Read the article

  • Windows 8.1 Apps Now in Bookstores

    - by Stephen.Walther
    My book Windows 8.1 Apps with HTML5 and JavaScript is in bookstores now and is available for purchase from Amazon. Extensively updated for the release of Windows 8.1, this book covers all of the new features of the WinJS 2.0 library, such as the Repeater, SearchBox, WebView, and NavBar controls and the new WinJS Scheduler. I wrote a new sample app for this edition of the book, the MyTasks app, which demonstrates how to build a Windows Store app that interacts with Windows Azure Mobile Services. If you are currently using a Windows 8.1 computer, you can install the MyTasks sample app from the Windows Store. I've written a summary of the new features included in Windows 8.1 for app developers, which you can read here: Top 10 Changes for Building Windows Store Apps with Windows 8.1

    Read the article

  • Access secondary hard disk from Virtual Machine

    - by Frank V
    I have a fairly specific question. I had Ubuntu on my laptop for years. For a variety of reasons I've had to switch to Windows, but the computer has two hard drives. The main drive was reformatted and I've installed Windows on it. The second hard drive still has its Linux filesystem format (not sure of the exact type). Obviously Windows can't access it, but can I access it from a virtual machine (VirtualBox), or will I need to load up a live session to access / move the contents? Edit: If this is possible, how would one proceed to mount the disk?
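    It is possible without a live session by handing the physical disk to VirtualBox as a raw disk. A sketch, assuming a Windows host where the Linux disk shows up as the second physical drive (both the .vmdk path and PhysicalDrive1 are placeholders; check Disk Management first), run from an Administrator prompt:

        VBoxManage internalcommands createrawvmdk -filename C:\VMs\linuxdisk.vmdk -rawdisk \\.\PhysicalDrive1

    Attach the resulting linuxdisk.vmdk to an Ubuntu VM (or boot a live ISO in the VM with that disk attached) and mount the partitions from inside it as usual.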

    Read the article

  • Can't log in on boot up

    - by Jerry Donnelly
    I set this computer up with Ubuntu for my neighbor about two years ago. Today she tried her normal boot-up and log-in, and her password isn't accepted. I've double-checked that she's using the password I set her up with, the Caps Lock key isn't the problem, and I can't see any other reason for the failure. I'm not sure exactly which version of Ubuntu she has, and I'm not an expert user myself. Is there a way to bypass the password screen on boot-up that would let me get into Ubuntu and perhaps set her up as another user? She basically checks email and that's about it. Thanks for any assistance.
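    One commonly suggested way around a rejected password is to reset it from a recovery-mode root shell rather than bypass the login screen. A sketch for most Ubuntu releases ("jane" is a placeholder for her actual user name): hold Shift at boot to get the GRUB menu, choose "recovery mode", then drop to the root shell, and run:

        mount -o remount,rw /   # the recovery shell mounts / read-only
        passwd jane             # set a new password for the account
        reboot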

    Read the article

  • Slow wifi with D-link DWA-160 A2

    - by Tommy Brunn
    I recently bought a USB Wi-Fi adapter for my new desktop computer: a D-Link DWA-160 A2. At first it didn't want to work at all, but after unplugging it and plugging it back in, it seems to work. However, my browsing is painfully slow. NetworkManager reports the connection at around 78-85% signal strength, which seems perfectly acceptable. Is there anything I can do to make it faster? I'm dual-booting with Windows 7, where it works fine, so I'm guessing the problem is caused by poor drivers.
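    A reasonable first step is to confirm which chipset and driver the adapter is actually using, since the DWA-160 has shipped with different chipsets across hardware revisions, and to rule out power management; a sketch ('wlan0' is a placeholder for your interface name):

        lsusb                              # note the adapter's vendor:product ID
        dmesg | grep -iE 'firmware|wlan'   # look for missing-firmware or driver errors
        iwconfig                           # check the negotiated bit rate
        sudo iwconfig wlan0 power off      # try with power management disabled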

    Read the article

  • bash script move file to folders based in name

    - by user289111
    I hope you can help me... I made a Perl and bash script to back up my firewalls and transfer the configs via TFTP:

        #!/bin/sh
        perl /deploy/scripts/backups/10.160.23.1.pl > /dev/null 2>&1
        perl /deploy/scripts/backups/10.160.23.2.pl > /dev/null 2>&1

    This transfers the files to my TFTP directory /tftpboot/:

        ls -l /tftpboot/
        total 532
        -rw-rw-rw- 1 tftp tftp 209977 jun 6 14:01 10.160.23.1_20140606.cfg
        -rw-rw-rw- 1 tftp tftp 329548 jun 6 14:02 10.160.23.2_20140606.cfg

    My question is how to improve my script so that it moves these files dynamically to another folder based on the name (in this case, on the IP address). For example, 10.160.23.1_20140606.cfg should move to /deploy/backups/10.160.23.1/. I'm sure the answer to this is already on Google, but I wanted to know if there is a particular solution to this request, and also to learn how to do it :) Thanks!
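    One way to do it (a sketch, assuming the files always follow the IP_date.cfg naming shown in the listing above) is to strip everything from the first underscore onwards to recover the IP address, then create the target directory and move the file:

        #!/bin/sh
        for f in /tftpboot/*.cfg; do
            name=$(basename "$f")      # e.g. 10.160.23.1_20140606.cfg
            ip=${name%%_*}             # everything before the first underscore
            mkdir -p "/deploy/backups/$ip"
            mv "$f" "/deploy/backups/$ip/"
        done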

    Read the article

  • What features of old computers helped you learn to be a better programmer?

    - by David Cary
    What features of old computers helped you learn to be a better programmer, but don't seem to be available on new computers? I imagine that, while educational, you are really glad some features are gone, such as: programs ran so slowly that I could almost see each pixel being plotted, so I got a visceral feel for the effect of various optimizations. I imagine there are other features you may be a little nostalgic for, such as: I could turn on the computer and write a short program that printed "Hello, World" on the printer, before ever "booting" a "disk". (I'm hoping that this is constructive enough to avoid the fate of the "What have we lost from computers 20 years ago?" question.)

    Read the article

  • How many developers before continuous integration becomes effective for us?

    - by Carnotaurus
    There is an overhead associated with continuous integration, e.g., set-up, re-training, awareness activities, stoppage to fix "bugs" that turn out to be data issues, enforced separation-of-concerns programming styles, etc. At what point does continuous integration pay for itself?

    EDIT: These were my findings. The set-up was CruiseControl.NET with NAnt, reading from VSS or TFS. Here are a few reasons for failure, which have nothing to do with the setup:

    Cost of investigation: the time spent investigating whether a red light is due to a genuine logical inconsistency in the code, to data quality, or to another source such as an infrastructure problem (e.g., a network issue, a timeout reading from source control, a third-party server being down, etc.).

    Political costs over infrastructure: I considered performing an "infrastructure" check for each method in the test run. I had no solution to the timeout except to replace the build server. Red tape got in the way and there was no server replacement.

    Cost of fixing unit tests: a red light due to a data quality issue could be an indicator of a badly written unit test. So data-dependent unit tests were re-written to reduce the likelihood of a red light due to bad data. In many cases, the necessary data was inserted into the test environment so that the unit tests could run accurately. It makes sense to say that by making the data more robust the test becomes more robust, if it depends on that data. Of course, this worked well!

    Cost of coverage, i.e., writing unit tests for already existing code: there was the problem of unit-test coverage. There were thousands of methods that had no unit tests, so a sizeable number of man-days would be needed to create them. As this would be too difficult to justify with a business case, it was decided that unit tests would be used for any new public method going forward. Those that did not have a unit test were termed "potentially infra-red". An interesting point here is that static methods were a moot point: it was unclear how one could uniquely determine how a specific static method had failed.

    Cost of bespoke releases: NAnt scripts only go so far. They are not that useful for, say, CMS-dependent builds (e.g., for EPiServer CMS) or any UI-oriented database deployment. These are the types of issues that occurred on the build server for hourly test runs and overnight QA builds. I contend that these are unnecessary, as a build master can perform these tasks manually at the time of release, especially with a one-man band and a small build.

    So single-step builds have not justified the use of CI in my experience. What about the more complex, multi-step builds? These can be a pain to build, especially without a NAnt script. So even having created one, these were no more successful. The costs of fixing the red-light issues outweighed the benefits. Eventually developers lost interest and questioned the validity of the red light.

    Having given it a fair try, I believe that CI is expensive and there is a lot of working around the edges instead of just getting the job done. It's more cost-effective to employ experienced developers who do not make a mess of large projects than to introduce and maintain an alarm system. This is the case even if those developers leave. It doesn't matter if a good developer leaves, because the processes that he follows would ensure that he writes requirement specs and design specs, sticks to the coding guidelines, and comments his code so that it is readable. All this is reviewed. If this is not happening then his team leader is not doing his job, which should be picked up by his manager, and so on. For CI to work, it is not enough to just write unit tests, attempt to maintain full coverage, and ensure a working infrastructure for sizeable systems.

    The bottom line: one might question whether fixing as many bugs as possible before release is even desirable from a business perspective. CI involves a lot of work to capture a handful of bugs that the customer could identify in UAT, or that the company could get paid for fixing as part of a client service agreement when the warranty period expires anyway.

    Read the article

  • Why should most logic be in the monitor objects and not in the thread objects when writing concurrent software in Java?

    - by refuser
    When I took the Real-Time and Concurrent Programming course, our lecturer told us that when writing concurrent programs in Java using monitors, most of the logic should be in the monitor and as little as possible in the threads that access it. I never really understood why, and I really would like to. Let me clarify. In this particular case we had several classes: Lift extends Thread, Person extends Thread, LiftView, and Monitor (all methods synchronized). This is nothing we came up with; our task was to implement a lift simulation with persons waiting on different floors, and these were the class skeletons that were given. Our lecturer then said to implement most of the logic in the monitor (he was talking about the class Monitor as THE monitor) and as little as possible in the threads. Why would he make a statement like that?
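    To make the advice concrete, here is a minimal sketch (hypothetical names, not the actual course skeleton) of the shape being described: the synchronized monitor methods own the shared state and make the decisions, while the thread body is a thin loop that only asks the monitor what to do next:

        // All coordination logic and shared state live in the monitor.
        class LiftMonitor {
            private final boolean[] waiting = new boolean[10];

            public synchronized void requestFloor(int floor) {
                waiting[floor] = true;
                notifyAll();
            }

            // The decision of where to go next is made here, under the lock.
            public synchronized int nextFloor() throws InterruptedException {
                while (true) {
                    for (int f = 0; f < waiting.length; f++) {
                        if (waiting[f]) {
                            waiting[f] = false;
                            return f;
                        }
                    }
                    wait(); // nothing to do; release the lock until a request arrives
                }
            }
        }

        // The thread contains almost no logic of its own.
        class Lift extends Thread {
            private final LiftMonitor monitor;
            Lift(LiftMonitor monitor) { this.monitor = monitor; }

            @Override
            public void run() {
                try {
                    while (!isInterrupted()) {
                        int floor = monitor.nextFloor();
                        System.out.println("Lift moving to floor " + floor);
                    }
                } catch (InterruptedException e) {
                    // interrupted: exit the loop
                }
            }
        }

    Because every check of shared state happens inside a synchronized method, the threads themselves never have to reason about locking, which is the usual argument for putting the logic in the monitor.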

    Read the article

  • File system implementation in MongoDB with GridFS

    - by Ralph
    I am working on two projects that will both implement a Webdav server backed by a MongoDB GridFS. In each case, there is the potential for the system to store tens of millions of files spread across thousands of hierarchical directories. I can come up with two different ways of storing the directory structure: As a "true" hierarchical file system, with directories containing the IDs (_id) of subdirectories and regular files. The paths will be separated by slashes (/) as in a POSIX-compliant file system. The path /a/b/c will be represented as a directory a containing a directory b containing a file c. As a flat file system, where file names include the slashes. The path /a/b/c will be stored as a single file with the name /a/b/c What are the advantages and disadvantages of each, with respect to a "real" folder-based file system?
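    To make the two layouts concrete, a rough sketch of the metadata documents each option implies (field names are illustrative, not GridFS conventions):

        // Option 1: true hierarchy -- each directory document lists its children by _id
        { "_id": 1, "name": "a", "parent": null, "children": [2] }
        { "_id": 2, "name": "b", "parent": 1,    "children": [3] }
        { "_id": 3, "name": "c", "parent": 2,    "gridfsId": "<file _id>" }

        // Option 2: flat namespace -- the full path is stored on the file itself
        { "_id": "<file _id>", "filename": "/a/b/c" }

    Broadly, Option 1 makes renames and directory listings cheap but resolving a path costs one lookup per component, while Option 2 resolves a path in a single indexed lookup but renaming a directory means rewriting the path prefix of every descendant.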

    Read the article

  • Starting my first project and have no idea how to go about it. Guidance please.

    - by Chankey Pathak
    I am a Computer Science student (6th semester). I want to make a project, and I have a team of 4 people (my friends). So we are 5 people, and we have decided to make a "web-based file explorer". The project will be similar to THIS one. How should we start with this project? We don't know much about programming. I know a little Java, and I am an RHCE, so I can handle the server and all the administrative stuff. Since this is our first project, we have no idea how to make it. I know Java, and the other guys in the group know C#, ASP.NET, PHP, SQL and Joomla. Please guide us and give your suggestions. Thank you. PS: Perhaps my question is not complete; if you want more information, leave a comment and I will edit the question.

    Read the article

  • Need Help Unable to Mount Location

    - by Don't ASk Ubun
    I am not able to start Windows and am using a DVD copy of Ubuntu to start up. I see my 750 GB hard disk, but if I click it I get this error:

        Error mounting: mount exited with exit code 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read NTFS $Bitmap: Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important!
        If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

    After googling for a while I think I need to run sudo apt-get install ntfsprogs, but when I try that I get:

        E: Package 'ntfsprogs' has no installation candidate

    My problem is a lot like this thread.
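    On recent Ubuntu releases the ntfsprogs package was merged into ntfs-3g, which is why there is no installation candidate. A hedged sketch of what can be tried from the live DVD (/dev/sda1 is a placeholder for the 750 GB NTFS partition; note that ntfsfix only clears the journal and fixes a few common inconsistencies, it is not a full chkdsk, and an Input/output error can also mean the disk itself is failing, so copy off what you can first):

        sudo apt-get install ntfs-3g        # provides ntfsfix on current releases
        sudo ntfsfix /dev/sda1

    If that is not enough, the message itself points at the more reliable route: run chkdsk /f from a Windows installation or recovery disc, then boot Windows twice.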

    Read the article

  • How do I explain the importance of NUnit test cases to my colleagues? [duplicate]

    - by JNL
    This question already has an answer here: How to explain the value of unit testing (6 answers)

    I am currently working in software development on applications involving a lot of mathematical calculations. As a result there are a lot of test cases that we need to consider. We do not have any NUnit test cases at the moment, and I am wondering how I should present the advantages of introducing NUnit testing to my colleagues and my boss. I am pretty sure it would be of great help for our team. Any help regarding this will be highly appreciated.

    Read the article

  • Can't mount windows partition?

    - by C.J.
    When I try to open the Windows partition from Ubuntu I receive the error:

        Unable to mount 55 GB Filesystem
        Error mounting: mount exited with exit code 13: ntfs_mst_post_read_fixup_warn: magic: 0x04010400 size: 1024 usa_ofs: 1026 usa_count: 1026: Invalid argument
        Record 6 has no FILE magic (0x4010400)
        Failed to open inode FILE_Bitmap: Input/output error
        Failed to mount '/dev/sda2': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
        In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important!
        If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more detail.

    Additionally, I can't boot into the Windows partition; I've tried updating it many times but it won't show up in GRUB. Does anybody know what all this means, and how I might fix it? I thank you for any help in advance.

    Read the article

  • How to enable ping in windows firewall in windows server 2008 r2

    - by ybbest
    If you are unable to ping your Windows Server 2008 R2 machine, or if you have a "one-way ping" problem, you need to check whether the relevant rule is enabled in Windows Firewall. To enable it, do the following: 1. Go to Control Panel >> Windows Firewall >> Advanced settings. 2. Go to Inbound Rules and enable "File and Printer Sharing (Echo Request - ICMPv4-In)". After you have done this, your computer will become pingable.
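    If you prefer to script this instead of clicking through the console, the same thing can be done from an elevated command prompt; a sketch (the first line enables the built-in rule, whose name must match your system language; the second creates an equivalent rule from scratch):

        netsh advfirewall firewall set rule name="File and Printer Sharing (Echo Request - ICMPv4-In)" new enable=yes
        netsh advfirewall firewall add rule name="Allow ICMPv4 echo request" protocol=icmpv4:8,any dir=in action=allow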

    Read the article

  • Which technique do Facebook and Pinterest use to show images?

    - by manish
    Has anybody ever noticed that when you open an image on Facebook, something like this happens: suppose you are on your Facebook homepage, where the URL is https://www.facebook.com/. Now if you open an image, it opens in a new modal-like window and the URL changes to https://www.facebook.com/photo.php?fbid=10151125374887397&set=a.338008237396.161268.36922302396&type=1&theater. As far as I know, in the common case a modal overlay would have kept the URL in the address bar the same. My question is how Facebook / Pinterest achieve this behaviour of not reloading the whole page while still changing the address bar. Is there a jQuery or JavaScript plugin for this?
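    The underlying mechanism is the HTML5 History API rather than a particular plugin: the click handler opens the photo in an overlay and calls history.pushState to swap the address-bar URL without a reload, and a popstate handler restores the previous view when the user presses Back. A minimal sketch (the a.photo-link selector and the openOverlay/closeOverlay helpers are hypothetical stand-ins for your own modal code):

        // Intercept clicks on photo links: show the overlay and change the URL in place.
        document.querySelectorAll('a.photo-link').forEach(function (link) {
          link.addEventListener('click', function (event) {
            event.preventDefault();
            openOverlay(link.href);
            history.pushState({ photo: link.href }, '', link.href);
          });
        });

        // Back/forward: close or re-open the overlay to match the URL.
        window.addEventListener('popstate', function (event) {
          if (event.state && event.state.photo) {
            openOverlay(event.state.photo);
          } else {
            closeOverlay();
          }
        });

    Browsers without pushState support (the question predates universal support) typically fall back to changing the URL fragment (#) instead.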

    Read the article
