Search Results

Search found 34853 results on 1395 pages for 'object layout'.


  • Unable to access A class variables in B Class - Unity-Monodevelop

    - by Syed
    I have made a class containing some variables in MonoDevelop:

        public class GridInfo : MonoBehaviour
        {
            public float initPosX;
            public float initPosY;
            public bool inUse;
            public int f;
            public int g;
            public int h;
            public GridInfo parent;
            public int y, x;
        }

    Now I am using its class variables in another class, Map.cs:

        public class Map : MonoBehaviour
        {
            public static GridInfo[,] Tile = new GridInfo[17, 23];

            void Start()
            {
                Tile[0, 0].initPosX = initPosX; // Line 49
            }
        }

    I am not getting any compile errors, but when I press Play in Unity it gives me the error: NullReferenceException: Object reference not set to an instance of an object Map.Start () (at Assets/Scripts/Map.cs:49). I am not attaching this script to any GameObject, as Map.cs will just build a GridInfo-type array. I have also tried accessing the variables using GetComponent. Where is the problem?
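    For context: new GridInfo[17, 23] only allocates 17x23 null references, so Tile[0, 0] is null until something assigns each element, hence the NullReferenceException. Below is a minimal sketch of one way to fill the array (GridHolder and gridPrefab are hypothetical names; since GridInfo derives from MonoBehaviour, instances must come from Instantiate or AddComponent rather than new):

        using UnityEngine;

        public class GridHolder : MonoBehaviour
        {
            public GridInfo gridPrefab; // a prefab (or scene object) carrying a GridInfo component

            void Awake()
            {
                for (int i = 0; i < 17; i++)
                    for (int j = 0; j < 23; j++)
                        Map.Tile[i, j] = (GridInfo)Instantiate(gridPrefab); // fill every slot before it is read
            }
        }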

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me: I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. Could someone explain exactly what is costly about this process, and put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking. I would: 1) change the effect on every object, calculating a unique color to identify it and providing it to the shader; 2) draw all the objects to a render target in memory; 3) get the color from the target and use it to look up the selected object. What portion of the total time taken to complete that process would be spent swapping the shaders? My instincts say that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
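    For reference, a minimal sketch of the picking pass described above, in XNA 4.0 (pickEffect, its ObjectColor parameter, and the per-object Draw overload are assumed names, not a real API):

        // Render each object with a unique ID color into an off-screen target.
        GraphicsDevice.SetRenderTarget(pickTarget);
        GraphicsDevice.Clear(Color.Black);
        for (int i = 0; i < objects.Count; i++)
        {
            // Pack the object index into an RGB color (supports ~16M objects).
            var id = new Color(i & 0xFF, (i >> 8) & 0xFF, (i >> 16) & 0xFF);
            pickEffect.Parameters["ObjectColor"].SetValue(id.ToVector4()); // assumed parameter name
            objects[i].Draw(pickEffect); // the effect swap under discussion happens here
        }
        GraphicsDevice.SetRenderTarget(null);

        // Read back the single pixel under the cursor and decode the index.
        var pixel = new Color[1];
        pickTarget.GetData(0, new Rectangle(mouseX, mouseY, 1, 1), pixel, 0, 1);
        int picked = pixel[0].R | (pixel[0].G << 8) | (pixel[0].B << 16);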

    Read the article

  • Naming: objectAction or actionObject?

    - by DocSalvage
    The question, Stored procedure Naming conventions?, and Joel's excellent Making Wrong Code Look Wrong article come closest to addressing my question, but I'm looking for a more general set of criteria to use in deciding how to name modules containing code (classes, objects, methods, functions, widgets, or whatever). English (my only human language) is structured as action-object (e.g., closeFile, openFile, saveFile), and since almost all computer languages are based on English, this is the most common convention. However, in trying to keep related code close together and still be able to find things, I've found object-action (e.g., fileClose, fileOpen, fileSave) to be very attractive. Quite a number of non-English human languages follow this structure as well. I doubt that one form is universally superior, but when should each be used in the pursuit of helping to make sure bad code looks bad?

    Read the article

  • Software Center doesn't open (elementary luna - ubuntu 12.04)

    - by zbiba
    When I try to open Software Center on elementary Luna I get the error below:

        ERROR:root:DebFileApplication import
        Traceback (most recent call last):
          File "/usr/share/software-center/softwarecenter/db/__init__.py", line 4, in <module>
            from debfile import DebFileApplication, DebFileOpenError
          File "/usr/share/software-center/softwarecenter/db/debfile.py", line 25, in <module>
            from softwarecenter.db.application import Application, AppDetails
          File "/usr/share/software-center/softwarecenter/db/application.py", line 27, in <module>
            import softwarecenter.distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 198, in <module>
            distro_instance = _get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 175, in _get_distro
            distro_class = getattr(module, distro_id)
        AttributeError: 'module' object has no attribute 'debian'

        Traceback (most recent call last):
          File "/usr/sbin/update-software-center", line 38, in <module>
            from softwarecenter.db.update import rebuild_database
          File "/usr/share/software-center/softwarecenter/db/update.py", line 33, in <module>
            from softwarecenter.distro import get_distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 198, in <module>
            distro_instance = _get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 175, in _get_distro
            distro_class = getattr(module, distro_id)
        AttributeError: 'module' object has no attribute 'debian'

    Read the article

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car's computer switched between gears intermittently. Imagine building up speed, then when you reach 80km/h the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain: "These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program, which probably accounts for how you managed to drive here in the first place..." I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said "Ok, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn will decide which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources". Then, as soon as he forced the speed sensor to come back online, the symptoms and ill behaviour re-emerged... What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily, and unfortunately often, the most overlooked goal of writing unit tests by those new to the art and those who oppose it alike: the goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method will in practice soon prove to be an exhausting and ultimately fruitless exercise given the ever-changing nature of business requirements. It is, however, quite possible whilst conducting proper unit tests to call any single method several times as you examine and contemplate different scenarios. Let's write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly, you can use whatever you're comfortable with. First we'll create an ISpeedSensor interface to represent the speed sensor located in the gearbox. Then we'll create a Gearbox class which we'll pass to a constructor when we instantiate an object of type Computer.
    All three are described below.

    ISpeedSensor.cs:

        namespace AutomaticVehicle
        {
            public interface ISpeedSensor
            {
                int ReportCurrentSpeed();
            }
        }

    Gearbox.cs:

        namespace AutomaticVehicle
        {
            public class Gearbox
            {
                private ISpeedSensor _speedSensor;

                public Gearbox( ISpeedSensor gearboxSpeedSensor )
                {
                    _speedSensor = gearboxSpeedSensor;
                }

                /// <summary>
                /// This method obtains its reading from the speed sensor.
                /// </summary>
                public int ReportCurrentSpeed()
                {
                    return _speedSensor.ReportCurrentSpeed();
                }
            }
        }

    Computer.cs:

        namespace AutomaticVehicle
        {
            public class Computer
            {
                private Gearbox _gearbox;

                public Computer( Gearbox gearbox )
                {
                    _gearbox = gearbox; // note: the original listing left this assignment out
                }

                public int GetCurrentSpeed()
                {
                    return _gearbox.ReportCurrentSpeed();
                }
            }
        }

    Since this post is about unit testing, that is exactly what we'll create next. Create a second project in your solution; I called mine AutomaticVehicleTests and immediately referenced the respective NUnit, Moq and AutomaticVehicle DLLs. We're going to write a test to examine what happens inside the Computer class.

    ComputerTests.cs:

        using System;
        using AutomaticVehicle;
        using Moq;
        using NUnit.Framework;

        namespace AutomaticVehicleTests
        {
            [TestFixture]
            public class ComputerTests
            {
                [Test]
                public void Computer_Gearbox_SpeedSensor_DoesThrow()
                {
                    // Mock the ISpeedSensor inside the gearbox.
                    Mock<ISpeedSensor> speedSensor = new Mock<ISpeedSensor>();
                    speedSensor.Setup( n => n.ReportCurrentSpeed() ).Throws<Exception>();
                    Gearbox gearbox = new Gearbox( speedSensor.Object );

                    // Create a Computer instance to test its behaviour towards an exception in the gearbox.
                    Computer carComputer = new Computer( gearbox );

                    // For simplicity, let's assume for now the car only travels at 60 km/h.
                    Assert.AreEqual( 60, carComputer.GetCurrentSpeed() );
                }
            }
        }

    What is happening in this test? We have created a mocked object using the ISpeedSensor interface, which we've passed to our Gearbox object. Notice that I created the mocked object using the interface, not the implementation. I'll talk more about this in future posts, but in short I do this to accentuate the fact that I'm not really concerned with how SpeedSensor works internally at this particular point in time. Next, I've created a scenario where the speed sensor in Gearbox is faulty, by forcing it to throw an exception should we ask Gearbox to report its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether caused by mechanical failure, some logical error on your part, or a fellow developer who didn't consult the documentation (or the lack thereof); whether you're calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It's a scenario I've created, and this test is about how the code within the Computer instance will behave towards any such error. Now, if you've followed closely, you would have noticed I did something quite unexpected in my final assert method. I might be getting ahead of myself, but I'm testing to see if the value returned is equal to what I expect it to be under perfect conditions; I'm not testing to see if an error has been thrown!
    Why is that? Well, in short, this is TDD. Test Driven Development is about first writing your test to define the result we want, then going back and changing the implementation within your class to obtain the desired output. (I need to make sure I can drive back to the repair shop, remember?) So let's go ahead and run our test as is. It fails miserably... Good! Let's go back to our Computer class and make a small change to the GetCurrentSpeed method:

        public int GetCurrentSpeed()
        {
            try
            {
                return _gearbox.ReportCurrentSpeed();
            }
            catch
            {
                return RunRestrictiveProgram(); // the original listing didn't return here, which wouldn't compile
            }
        }

    This is a simple solution, I know, but it does provide a way to allow for different behaviour. You're more than welcome to provide an implementation for RunRestrictiveProgram should you feel the need to; it's not within the scope of this post, or related to the point I'm trying to make. What is important is to notice how the focus of our approach has shifted from how things can break to how things behave when broken.

    Happy coding!

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports. When we started the Schema Compare project, we knew we were going to get problems with users' databases: configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot trickier. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time. Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution: To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
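    A minimal sketch of the record half of that idea (names hypothetical; a real implementation would preserve column types rather than flattening everything to strings, and would need the matching replay-side IDataReader wrapper):

        using System.Data;
        using System.IO;

        static class QuerySnapshotWriter
        {
            // Drain an IDataReader and write its raw rows straight to a binary file.
            public static void Record(IDataReader reader, BinaryWriter writer)
            {
                int fieldCount = reader.FieldCount;
                writer.Write(fieldCount);
                while (reader.Read())
                {
                    writer.Write(true); // row-follows marker
                    for (int i = 0; i < fieldCount; i++)
                    {
                        bool isNull = reader.IsDBNull(i);
                        writer.Write(isNull);
                        if (!isNull)
                            writer.Write(reader.GetValue(i).ToString()); // simplification: everything as strings
                    }
                }
                writer.Write(false); // end-of-rows marker
            }
        }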
    Snapshots also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation: However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects: what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic: given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.

    Read the article

  • PowerShell programming conventions

    - by Tahir Hassan
    Do you follow any conventions when programming in PowerShell? For example, in scripts which are to be maintained long-term, do you: use the real cmdlet name or an alias? Specify the cmdlet parameter name in full or only partially (dir -Recurse versus dir -r)? When specifying string arguments for cmdlets, do you enclose them in quotes (New-Object 'System.Int32' versus New-Object System.Int32)? When writing functions and filters, do you specify the types of parameters? Do you write cmdlets in the (official) correct case? For keywords like BEGIN...PROCESS...END, do you write them in uppercase only? Thanks for any replies.

    Read the article

  • Understanding how texCUBE works and writing cubemaps properly into a cube rendertarget

    - by cubrman
    My goal is to create accurate reflections, sampled from a dynamic cubemap, for specific 3D objects (mostly lights) in XNA 4.0. To sample the cubemap I compute the 3D reflection vector in the classic way:

        half3 ReflectionVec = reflect(-directionToCamera, Normal.rgb);

    I then use the vector to get the actual reflected color:

        half3 ReflectionCol = texCUBElod(ReflectionSampler, float4(ReflectionVec, 0));

    The cubemap I am sampling from is a RenderTarget with 6 flat faces. So my question is: given the 3D world position of an arbitrary 3D object, how can I make sure that I get accurate reflections of this object when I re-render the cubemap? Should I build the ViewProjection matrix in a specific way, or is there another approach?
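    For context, the standard way to fill a RenderTargetCube in XNA 4.0 is one render pass per face, each with a 90-degree field of view and an aspect ratio of 1, with the view matrix looking down the face's axis from the reflective object's position. A sketch (DrawScene is a hypothetical scene-draw call; the up vectors follow the usual D3D cube-map conventions and may need flipping for your handedness):

        RenderTargetCube cubeTarget = new RenderTargetCube(GraphicsDevice, 256, false,
                                                           SurfaceFormat.Color, DepthFormat.Depth24);
        Matrix proj = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 0.1f, 1000f);

        foreach (CubeMapFace face in Enum.GetValues(typeof(CubeMapFace)))
        {
            // Look from the reflective object's world position along the face's axis.
            Vector3 forward, up;
            switch (face)
            {
                case CubeMapFace.PositiveX: forward = Vector3.Right;    up = Vector3.Up;       break;
                case CubeMapFace.NegativeX: forward = Vector3.Left;     up = Vector3.Up;       break;
                case CubeMapFace.PositiveY: forward = Vector3.Up;       up = Vector3.Forward;  break;
                case CubeMapFace.NegativeY: forward = Vector3.Down;     up = Vector3.Backward; break;
                case CubeMapFace.PositiveZ: forward = Vector3.Backward; up = Vector3.Up;       break;
                default:                    forward = Vector3.Forward;  up = Vector3.Up;       break;
            }
            Matrix view = Matrix.CreateLookAt(objectPosition, objectPosition + forward, up);

            GraphicsDevice.SetRenderTarget(cubeTarget, face);
            GraphicsDevice.Clear(Color.Black);
            DrawScene(view, proj); // hypothetical: draw everything except the reflective object itself
        }
        GraphicsDevice.SetRenderTarget(null);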

    Read the article

  • How to use OO for data analysis? [closed]

    - by Konsta
    In which ways could object-orientation (OO) make my data analysis more efficient and let me reuse more of my code? The data analysis can be broken up into get data (from db or csv or similar) transform data (filter, group/pivot, ...) display/plot (graph timeseries, create tables, etc.) I mostly use Python and its Pandas and Matplotlib packages for this besides some DB connectivity (SQL). Almost all of my code is a functional/procedural mix. While I have started to create a data object for a certain collection of time series, I wonder if there are OO design patterns/approaches for other parts of the process that might increase efficiency?

    Read the article

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects. I use these SceneObjects for storing the vertex position and any transforms. My question is, where should shaders be stored in this arrangement? I guess they need to be in a central location because multiple objects can use the same shader. But then each object needs access to the shader because it needs to set attributes into the shader. Does anyone have any advice?
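    One common arrangement, as a sketch (Shader here is a hypothetical stand-in for a compiled-program wrapper, not a real API): keep compiled shaders in a central library owned by the renderer, and give each SceneObject a reference to a shared shader plus its own per-object parameter values.

        using System.Collections.Generic;

        // Stand-in for your compiled GL program wrapper.
        class Shader
        {
            public static Shader Compile(string name) { /* glCreateProgram etc. */ return new Shader(); }
            public void Bind() { /* glUseProgram */ }
            public void SetUniform(string name, float[] value) { /* glUniform* */ }
        }

        // Central store: one compiled shader instance per name, shared by all objects.
        class ShaderLibrary
        {
            private readonly Dictionary<string, Shader> _shaders = new Dictionary<string, Shader>();

            public Shader Get(string name)
            {
                Shader s;
                if (!_shaders.TryGetValue(name, out s))
                {
                    s = Shader.Compile(name);
                    _shaders[name] = s;
                }
                return s;
            }
        }

        class SceneObject
        {
            public Shader Shader; // shared, owned by the library
            public Dictionary<string, float[]> Params = new Dictionary<string, float[]>(); // per-object values

            public void Draw()
            {
                Shader.Bind();
                foreach (var p in Params) Shader.SetUniform(p.Key, p.Value);
                // ... issue draw call ...
            }
        }

    The split keeps ownership in one place (the library) while letting each object set its own attributes into the shared program just before its draw call.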

    Read the article

  • Generically correcting data before save with Entity Framework

    - by koevoeter
    Been working with Entity Framework (.NET 4.0) for a week now on a data migration job and needed some code that generically corrects string values in the database. You have probably also seen things like empty strings instead of NULL, or non-trimmed texts ("United States       "), in "old" databases, and you don't want to apply a correcting function to every column you migrate. Here's how I've done it (extending the partial class of my ObjectContext):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.Objects;
        using System.Data.Objects.DataClasses;
        using System.Linq;
        using System.Reflection;

        public partial class MyDatacontext
        {
            partial void OnContextCreated()
            {
                SavingChanges += OnSavingChanges;
            }

            private void OnSavingChanges(object sender, EventArgs e)
            {
                foreach (var entity in GetPersistingEntities(sender))
                {
                    foreach (var propertyInfo in GetStringProperties(entity))
                    {
                        var value = (string)propertyInfo.GetValue(entity, null);

                        if (value == null)
                        {
                            continue;
                        }

                        if (value.Trim().Length == 0 && IsNullable(propertyInfo))
                        {
                            propertyInfo.SetValue(entity, null, null);
                        }
                        else if (value != value.Trim())
                        {
                            propertyInfo.SetValue(entity, value.Trim(), null);
                        }
                    }
                }
            }

            private IEnumerable<object> GetPersistingEntities(object sender)
            {
                return ((ObjectContext)sender).ObjectStateManager
                    .GetObjectStateEntries(EntityState.Added | EntityState.Modified)
                    .Select(e => e.Entity);
            }

            private IEnumerable<PropertyInfo> GetStringProperties(object entity)
            {
                return entity.GetType().GetProperties()
                    .Where(pi => pi.PropertyType == typeof(string));
            }

            private bool IsNullable(PropertyInfo propertyInfo)
            {
                return ((EdmScalarPropertyAttribute)propertyInfo
                    .GetCustomAttributes(typeof(EdmScalarPropertyAttribute), false)
                    .Single()).IsNullable;
            }
        }

    Obviously you can use similar code for other generic corrections.

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components. User-Centered Forms Design Considerations: The starting point, something you must always keep in mind with your own design, is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms:

    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.

    I am sure you have a few more tips on forms design for data entry users; share them in the comments. Remember the user.

    Read the article

  • nVidia GeForce Go 7600? can it ever run unity?

    - by Khaled Musleh
    My laptop, a Toshiba Qosmio G30, has an nVidia GeForce Go 7600 card which is supposed to support 3D, yet I run Unity 2D now. I am on 12.04 and the graphics driver reported by Ubuntu is "VESA: G73 Board - toshg73m". When I run /usr/lib/nux/unity_support_test -p I get this list:

        Not software rendered:    no
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no

    The card is not blacklisted, but a similar one with GT is! Do you think there is a chance the laptop can run Unity 3D? Then maybe I could change the screen resolution to a higher one too. I tried all the nVidia drivers provided, but none works (except 96 in Ubuntu 12.04); I get a black screen or a terminal screen. Best wishes to all.

    Read the article

  • Android Emulator - Ubuntu 12.04

    - by Aneesh karthik C
    When I type in the command emulator Andreud, where 'Andreud' is the name of the emulator I created, I get the following errors and a blank screen where the Android home screen, icons, etc. should show up:

        PVRDRIInitPVR2D: PVR2D device index (0)
        Failed to load libGL.so error libGL.so: cannot open shared object file: No such file or directory
        Failed to load libGL.so error libGL.so: cannot open shared object file: No such file or directory

    As per a comment I tried installing ia32-libs:

        aneesh@nb14:~$ sudo apt-get install ia32-libs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ia32-libs is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is
        only available from another source
        E: Package 'ia32-libs' has no installation candidate

    I want the home screen to appear. Any help is greatly appreciated.

    Read the article

  • Achieving more fluent movement

    - by Robin92
    I'm working on my first OpenGL 2D game and I've just locked the framerate. However, the way objects move is far from satisfying: they tend to lag, which is shown in this video. I've been thinking about how smoother animation could be achieved, and in the process started getting segmentation faults due to two different threads accessing the same object. I've tried the following thread arrangement: (1) drawing, creating new objects; (2) moving the player, moving objects, deleting objects. Currently my application uses this arrangement: (1) drawing, creating new objects, moving objects, deleting objects; (2) moving the player. Any ideas would be appreciated. EDIT: I've tried increasing the FPS limit, but lags are noticeable even at 200 fps.
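    Crashes like that usually mean the shared object list needs a single owner or a lock. A minimal sketch of the lock approach, written here in C# for brevity (the same shape works in C++ with std::mutex; World and GameObject are hypothetical names):

        using System.Collections.Generic;

        class GameObject
        {
            public bool Dead;
            public void Move(float dt) { /* integrate position */ }
        }

        class World
        {
            private readonly object _sync = new object();
            private readonly List<GameObject> _objects = new List<GameObject>();

            // Update thread: mutate the list only while holding the lock.
            public void Update(float dt)
            {
                lock (_sync)
                {
                    foreach (var o in _objects) o.Move(dt);
                    _objects.RemoveAll(o => o.Dead);
                }
            }

            // Render thread: take a cheap snapshot under the lock, draw outside it.
            public List<GameObject> SnapshotForDraw()
            {
                lock (_sync) return new List<GameObject>(_objects);
            }
        }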

    Read the article

  • IsNullOrEmpty generic method for Array to avoid Re-Sharper warning

    - by Michael Freidgeim
    I've used the following extension method in many places:

        public static bool IsNullOrEmpty(this Object[] myArr)
        {
            return (myArr == null || myArr.Length == 0);
        }

    Recently I noticed that ReSharper shows the warning "covariant array conversion to object[] may cause an exception" for the following code:

        ObjectsOfMyClass.IsNullOrEmpty()

    I've resolved the issue by creating a generic extension method:

        public static bool IsNullOrEmpty<T>(this T[] myArr)
        {
            return (myArr == null || myArr.Length == 0);
        }

    Related links:

    http://connect.microsoft.com/VisualStudio/feedback/details/94089/add-isnullorempty-to-array-class

        public static bool IsNullOrEmpty(this System.Collections.IEnumerable source)
        {
            if (source == null)
                return true;
            else
                return !source.GetEnumerator().MoveNext();
        }

    http://stackoverflow.com/questions/8560106/isnullorempty-equivalent-for-array-c-sharp
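    For context, the warning exists because C# array covariance defers element-type checks to run time; a two-line illustration of the failure mode it guards against:

        object[] arr = new string[1];   // legal: covariant array conversion
        arr[0] = new object();          // compiles, but throws ArrayTypeMismatchException at run time

    Reading through the covariant view (as IsNullOrEmpty does) is safe, which is why the generic overload, which avoids the conversion entirely, is the cleaner way to silence the warning.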

    Read the article

  • Powershell – script all objects on all databases to files

    - by Nigel Rivett
    <#
    This simple PowerShell routine scripts out all the user-defined functions, stored procedures, tables and views in all the databases on the server that you specify, to the path that you specify.
    SMO must be installed on the machine (it happens if SSMS is installed).
    To run: set the servername and path, open a command window and run powershell, copy the below into the window and press enter; it should run.
    It will create the subfolders for the databases and objects if necessary.
    #>
    $path = "C:\Test\Script\"
    $ServerName = "MyServerNameOrIpAddress"
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
    $serverInstance = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName
    $IncludeTypes = @("Tables", "StoredProcedures", "Views", "UserDefinedFunctions")
    $ExcludeSchemas = @("sys", "Information_Schema")
    $so = New-Object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')
    $so.IncludeIfNotExists = 0
    $so.SchemaQualify = 1
    $so.AllowSystemObjects = 0
    $so.ScriptDrops = 0    # Script Drop Objects
    $dbs = $serverInstance.Databases
    foreach ($db in $dbs)
    {
        $dbname = "$db".replace("[","").replace("]","")
        $dbpath = "$path" + "$dbname" + "\"
        if ( !(Test-Path $dbpath) )
            { $null = new-item -type directory -name "$dbname" -path "$path" }
        foreach ($Type in $IncludeTypes)
        {
            $objpath = "$dbpath" + "$Type" + "\"
            if ( !(Test-Path $objpath) )
                { $null = new-item -type directory -name "$Type" -path "$dbpath" }
            foreach ($objs in $db.$Type)
            {
                if ($ExcludeSchemas -notcontains $objs.Schema)
                {
                    $ObjName = "$objs".replace("[","").replace("]","")
                    $OutFile = "$objpath" + "$ObjName" + ".sql"
                    $objs.Script($so) + "GO" | out-File $OutFile #-Append
                }
            }
        }
    }

    Read the article

  • Updating a Progress Bar within the Status Bar [migrated]

    - by Muhnamana
    I'm trying to figure out how to incorporate a progress bar within the status bar to show how much processing is completed. Below is my example of updating the progress bar (not sure if this is the correct way or not):

        Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
            ToolStripProgressBar1.Value = ToolStripProgressBar1.Value + 2
            If ToolStripProgressBar1.Value = 100 Then
                ToolStripProgressBar1.Value = 0
                ToolStripProgressBar1.Value = ToolStripProgressBar1.Value + 2
                Timer1.Enabled = True
            End If
        End Sub

    Here is the code within the button:

        Private Sub Button1_Click(sender As System.Object, e As System.EventArgs) Handles Button1Run.Click
            ToolStripStatusLabel1.Text = "Processing..."
            Timer1.Enabled = True
            'more code to be inserted here
        End Sub

    What I'm not sure about is how to update the progress bar based on the amount of work the code actually does and, once the processing is complete, how to update ToolStripStatusLabel1 to show "Processing... Complete!".
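    One standard approach, sketched here in C# (a VB translation is mechanical; DoOneStep and the control names are assumptions): run the work on a BackgroundWorker, call ReportProgress as each step finishes, and let the ProgressChanged/RunWorkerCompleted handlers drive the status-bar controls, instead of guessing with a timer.

        var worker = new System.ComponentModel.BackgroundWorker { WorkerReportsProgress = true };

        worker.DoWork += (s, e) =>
        {
            int totalSteps = 50;                        // however many units of work you have
            for (int i = 1; i <= totalSteps; i++)
            {
                DoOneStep(i);                           // hypothetical unit of work
                worker.ReportProgress(i * 100 / totalSteps);
            }
        };

        // These two handlers run on the UI thread, so they may touch the controls directly.
        worker.ProgressChanged += (s, e) => toolStripProgressBar1.Value = e.ProgressPercentage;
        worker.RunWorkerCompleted += (s, e) => toolStripStatusLabel1.Text = "Processing... Complete!";

        toolStripStatusLabel1.Text = "Processing...";
        worker.RunWorkerAsync();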

    Read the article

  • vector collision on polygon in 3d space detection/testing?

    - by LRFLEW
    In the 3D FPS in Java I'm working on, I need a bullet to be fired and to tell if it hit someone. All visual objects in the game are defined through OpenGL, so the object it can be colliding with can be any drawable polygon (although they will most likely be triangles and rectangles anyway). The bullet is not an object, but will be treated as a vector that instantaneously travels all the way across the map (like the sniper rifle in Halo). What's the best way to detect/test collisions between the vector and the polygons? I have access to OpenCL; however, I have absolutely no experience with it. I am very early in the development stage, so if you think there's a better way of going about this, feel free to tell me (I barely have a player model to collide with anyway, so I'm flexible with it). Thanks
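    For reference, the standard test for this is ray-triangle intersection (rectangles can be split into two triangles). A sketch of the Moller-Trumbore algorithm, in C# with System.Numerics (the project above is Java, but the math translates directly):

        using System;
        using System.Numerics;

        static class Picking
        {
            // Moller-Trumbore ray/triangle intersection.
            // Returns true and the distance t along the ray when the ray hits the triangle.
            public static bool RayTriangle(Vector3 origin, Vector3 dir,
                                           Vector3 v0, Vector3 v1, Vector3 v2, out float t)
            {
                const float eps = 1e-7f;
                t = 0f;
                Vector3 e1 = v1 - v0, e2 = v2 - v0;
                Vector3 p = Vector3.Cross(dir, e2);
                float det = Vector3.Dot(e1, p);
                if (Math.Abs(det) < eps) return false;  // ray parallel to triangle plane
                float invDet = 1f / det;
                Vector3 s = origin - v0;
                float u = Vector3.Dot(s, p) * invDet;   // first barycentric coordinate
                if (u < 0f || u > 1f) return false;
                Vector3 q = Vector3.Cross(s, e1);
                float v = Vector3.Dot(dir, q) * invDet; // second barycentric coordinate
                if (v < 0f || u + v > 1f) return false;
                t = Vector3.Dot(e2, q) * invDet;        // hit distance along the ray
                return t >= 0f;
            }
        }

    Testing a shot is then a loop over candidate triangles keeping the smallest t; a simple spatial structure (uniform grid or BVH) trims the candidate set long before OpenCL becomes necessary.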

    Read the article

  • Would like some help in understanding rendering geometry vs textures

    - by Anon
    So I was just pondering whether it is more taxing on the GPU to render geometry or a texture. What I'm trying to see is whether there is a huge difference in rendering two scenes with the same setup:

        Scene 1: A dirt road (nothing else). Geometry: a detailed road, with all the bumps, cracks and so forth done in the mesh.

        Scene 2: A dirt road (nothing else). Geometry: a simple mesh in the form of a road, but in this case maps and textures are simulating cracks, bumps, etc.

    So of these two, which one is likely to tax the hardware more? Or is it not a like-for-like comparison? What would be the best way of doing something like this? Go heavy on the textures, or have a blend of both?

    Read the article

  • WCF web service with Neural Network

    - by Gary Frank
    I am developing a web service that performs object recognition. It will be available for testing as soon as enough code has been developed, and then officially when it is finished. It is based on a radically new type of artificial neural network that I designed. Its goal is to recognize any type of object within an image. Besides the WCF web service, the project will also create a website to test and demonstrate the web service. Here is a link with more information. http://www.indiegogo.com/VOR

    Read the article

  • Circle-Rectangle collision in a tile map game

    - by furiousd
    I am making a 2D tile-map-based putt-putt game. I have collision detection working between the ball and the walls of the map, although when the ball collides at the meeting point between 2 tiles I offset it by 0.5 so that it doesn't get stuck in the wall. This ain't a huge issue though.

        if(y % 20 == 0) { y += 0.5; }
        if(x % 20 == 0) { x += 0.5; }

    Collisions work as follows: find the closest point between each tile and the center of the ball; if distance(ball_x, ball_y, close_x, close_y) <= ball_radius and the closest point belongs to a solid object, a collision has occurred; invert the X/Y speed according to the side of the object collided with. The next thing I tried to do was implement floating blocks in the middle of the map for the ball to bounce off of. When a ball collides with a corner of a block, it gets stuck in it. So I changed my determineRebound() function to treat corners as if they were circles. Here's that function (i and j are indexes of the solid object in the 2d map array; x and y are the centre point of the ball):

        void determineRebound(int _i, int _j)
        {
            if(y > _i*tile_w && y < _i*tile_w + tile_w)
            {
                // Not a corner
                xs *= -1;
            }
            else if(x > _j*tile_w && x < _j*tile_w + tile_w)
            {
                // Not a corner
                ys *= -1;
            }
            else
            {
                // Corner: reflect the velocity about the collision normal
                float nx = x - close_x;
                float ny = y - close_y;
                float len = sqrt(nx * nx + ny * ny);
                nx /= len;
                ny /= len;
                float projection = xs * nx + ys * ny;
                xs -= 2 * projection * nx;
                ys -= 2 * projection * ny;
            }
        }

    This is where things have gotten messy. Collisions with 'floating' corners work fine, but now when the ball collides near the meeting point of 2 tiles, it detects a corner collision and does not rebound as expected. I'm a bit in over my head at this point. I guess I'm wondering if I'm going about making this sort of game in the right way. Is a 2d tile map the way to go? If so, is there a problem with my collision logic, and where am I going wrong? Any advice/feedback would be great.
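    One detail worth checking, as a sketch (the solid() lookup and the i/j/di/dj conventions are assumptions matched loosely to the code above): a tile corner only behaves like a circle when it is exposed, i.e. when neither tile sharing that corner edge-wise is also solid. At the seam between two solid tiles the corner is interior, so the rebound should fall back to the flat-edge reflection.

        // Returns true when the corner of tile (i, j) picked by di/dj (each -1 or +1)
        // is exposed: the two edge-adjacent neighbour tiles are both empty.
        // solid(row, col) is a hypothetical lookup into the 2d map array.
        static bool IsExposedCorner(int i, int j, int di, int dj)
        {
            bool verticalNeighbourSolid   = solid(i + di, j); // tile above/below the corner
            bool horizontalNeighbourSolid = solid(i, j + dj); // tile left/right of the corner
            // If either neighbour is solid, the corner lies on a flat wall:
            // use the ordinary edge bounce instead of the circular-corner bounce.
            return !verticalNeighbourSolid && !horizontalNeighbourSolid;
        }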

    Read the article

  • Having a generic data type for a database table column, is it "good" practice?

    - by Yanick Rochon
    I'm working on a PHP project where some object (class member) may contain different data types. For example:

        class Property {
            private $_id;     // (PK)
            private $_ref_id; // the object reference id (FK)
            private $_name;   // the name of the property
            private $_type;   // 'string', 'int', 'float(n,m)', 'datetime', etc.
            private $_data;   // ...
            // ..snip.. public getters/setters
        }

    Now, I need to perform some persistence on these objects. Some properties may be a text data type, but nothing bigger than what a varchar may hold. Also, later on, I need to be able to perform searches and sorting. Is it good practice to use a single database table for this (i.e., is there a non-negligible performance impact)? If it's "acceptable", then what could be the data type for the data column?

    Read the article

  • The performance implications of IEnumerable vs. IQueryable

    It all started innocently enough. I was implementing an "Older Posts/Newer Posts" feature for my new web site and was writing code like this:

        IEnumerable<Post> FilterByCategory(IEnumerable<Post> posts, string category)
        {
            if( !string.IsNullOrEmpty(category) )
            {
                return posts.Where(p => p.Category.Contains(category));
            }
            return posts; // fall through unchanged when no category filter applies
        }
        ...
        var posts = FilterByCategory(db.Posts, category);
        int count = posts.Count();
        ...

    The "db" was an EF object context object, but it could just as...
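    A sketch of the contrast the title points at: typed against IEnumerable<Post>, the Where/Count above executes with LINQ to Objects, pulling every post out of the database before filtering in memory; keep the expression as IQueryable<Post> and the same query composes into SQL, so the count runs on the server.

        IQueryable<Post> FilterByCategory(IQueryable<Post> posts, string category)
        {
            if (!string.IsNullOrEmpty(category))
            {
                return posts.Where(p => p.Category.Contains(category)); // still an expression tree
            }
            return posts;
        }

        // Count() now translates to roughly SELECT COUNT(*) ... WHERE ... on the server,
        // instead of enumerating every row over the wire.
        int count = FilterByCategory(db.Posts, category).Count();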

    Read the article
