Search Results

Search found 11052 results on 443 pages for 'linked tables'.


  • Which workaround to use for the following SQL deadlock?

    - by Marko
    I found a SQL deadlock scenario in my application during concurrency. I believe that the two statements that cause the deadlock are (note - I'm using LINQ2SQL and DataContext.ExecuteCommand(), which is where this.studioId.ToString() comes into play):

        exec sp_executesql N'INSERT INTO HQ.dbo.SynchronizingRows ([StudioId], [UpdatedRowId])
        SELECT @p0, [t0].[Id]
        FROM [dbo].[UpdatedRows] AS [t0]
        WHERE NOT (EXISTS(
            SELECT NULL AS [EMPTY]
            FROM [dbo].[ReceivedUpdatedRows] AS [t1]
            WHERE ([t1].[StudioId] = @p0) AND ([t1].[UpdatedRowId] = [t0].[Id])
            ))',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "';

    and

        exec sp_executesql N'INSERT INTO HQ.dbo.ReceivedUpdatedRows ([UpdatedRowId], [StudioId], [ReceiveDateTime])
        SELECT [t0].[UpdatedRowId], @p0, GETDATE()
        FROM [dbo].[SynchronizingRows] AS [t0]
        WHERE ([t0].[StudioId] = @p0)',N'@p0 uniqueidentifier',@p0='" + this.studioId.ToString() + "';

    The basic logic of my (client-server) application is this:

    1. Every time someone inserts or updates a row on the server side, I also insert a row into the table UpdatedRows, specifying the RowId of the modified row.
    2. When a client tries to synchronize data, it first copies all of the rows in the UpdatedRows table that don't have a reference row for the specific client in the table ReceivedUpdatedRows into the table SynchronizingRows (the first statement taking part in the deadlock). Afterwards, during the synchronization, I look for modified rows via a lookup of the SynchronizingRows table. This step is required; otherwise, if someone inserts new rows or modifies rows on the server side during synchronization, I will miss them and won't get them during the next synchronization (the explanation scenario is too long to write here...).
    3. Once synchronization is complete, I insert rows into the ReceivedUpdatedRows table specifying that this client has received the UpdatedRows contained in the SynchronizingRows table (the second statement taking part in the deadlock).
    4. Finally, I delete all rows from the SynchronizingRows table that belong to the current client.

    The way I see it, the deadlock is occurring on the tables SynchronizingRows (abbreviation SR) and ReceivedUpdatedRows (abbreviation RUR) during steps 2 and 3 (one client is in step 2 and is inserting into SR and selecting from RUR, while another client is in step 3 inserting into RUR and selecting from SR). I googled a bit about SQL deadlocks and came to the conclusion that I have three options. In order to make a decision, I need more input about each option/workaround:

    Workaround 1: The first advice given on the web about SQL deadlocks - restructure tables/queries so that deadlocks don't happen in the first place. The only problem with this is that with my IQ I don't see a way to do the synchronization logic any differently. If someone wishes to delve deeper into my current synchronization logic, how and why it is set up the way it is, I'll post a link to the explanation. Perhaps, with the help of someone smarter than me, it's possible to create a logic that is deadlock free.

    Workaround 2: The second most common advice seems to be the use of the WITH(NOLOCK) hint. The problem with this is that NOLOCK might miss or duplicate some rows. Duplication is not a problem, but missing rows is catastrophic! Another option is the WITH(READPAST) hint. On the face of it, this seems to be a perfect solution. I really don't care about rows that other clients are inserting/modifying, because each row belongs only to a specific client, so I may very well skip locked rows. But the MSDN documentation makes me a bit worried - "When READPAST is specified, both row-level and page-level locks are skipped". As I said, row-level locks would not be a problem, but page-level locks may very well be, since a page might contain rows that belong to multiple clients (including the current one). While there are lots of blog posts specifically mentioning that NOLOCK might miss rows, there seem to be none about READPAST (never) missing rows. This makes me skeptical and nervous about implementing it, since there is no easy way to test it (implementing it would be a piece of cake - just pop WITH(READPAST) into both statements' SELECT clauses and the job is done). Can someone confirm whether the READPAST hint can miss rows?

    Workaround 3: The final option is to use ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT. This would seem to be the only option guaranteed to work - at least I can't find any information that contradicts it. But it is a little bit trickier to set up (I don't care much about the performance hit), because I'm using LINQ. Off the top of my head, I probably need to manually open a SQL connection and pass it to the LINQ2SQL DataContext, etc... I haven't looked into the specifics very deeply.

    Mostly I would prefer option 2, if someone could only reassure me that READPAST will never miss rows concerning the current client (as I said before, each client has, and only ever deals with, its own set of rows). Otherwise I'll likely have to implement option 3, since option 1 is probably impossible...

    I'll post the table definitions for the three tables as well, just in case:

        CREATE TABLE [dbo].[UpdatedRows](
            [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
            [RowId] [uniqueidentifier] NOT NULL,
            [UpdateDateTime] [datetime] NOT NULL,
        ) ON [PRIMARY]
        GO
        CREATE NONCLUSTERED INDEX IX_RowId ON dbo.UpdatedRows ([RowId] ASC)
        WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

        CREATE TABLE [dbo].[ReceivedUpdatedRows](
            [Id] [uniqueidentifier] NOT NULL ROWGUIDCOL DEFAULT NEWSEQUENTIALID() PRIMARY KEY NONCLUSTERED,
            [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]),
            [StudioId] [uniqueidentifier] NOT NULL REFERENCES,
            [ReceiveDateTime] [datetime] NOT NULL,
        ) ON [PRIMARY]
        GO
        CREATE CLUSTERED INDEX IX_Studios ON dbo.ReceivedUpdatedRows ([StudioId] ASC)
        WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

        CREATE TABLE [dbo].[SynchronizingRows](
            [StudioId] [uniqueidentifier] NOT NULL,
            [UpdatedRowId] [uniqueidentifier] NOT NULL REFERENCES [dbo].[UpdatedRows] ([Id]),
            PRIMARY KEY CLUSTERED ([StudioId], [UpdatedRowId])
        ) ON [PRIMARY]
        GO

    PS! Studio = Client.
    PS2! I just noticed that the index definitions have ALLOW_PAGE_LOCKS = ON. If I were to turn it off, would that make any difference to READPAST? Are there any negative downsides to turning it off?
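    For reference, the database-level switches for Workaround 3 are one-liners. A minimal sketch, run against the HQ database named in the statements above (note that READ_COMMITTED_SNAPSHOT needs exclusive access to the database to take effect, hence the ROLLBACK IMMEDIATE):

        ALTER DATABASE HQ SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- Rolls back in-flight transactions so the setting can be applied.
        ALTER DATABASE HQ SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;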

    Read the article

  • Simple 'database' in c++

    - by DevAno1
    Hello. My task was to create a pseudo-database in C++. There are 3 tables given that store name (char*), age (int), and sex (bool). Write a program allowing the user to:

    - add new data to the tables
    - show all records
    - sort tables with criteria:
      - name increasing/decreasing
      - age increasing/decreasing
      - sex

    Using function templates is a must. Also, the size of the arrays must be variable, depending on the number of records. I have some code, but there are still problems there. Here's what I have. A function tabSize() for returning the size of an array, but currently it returns the size of a pointer, I guess:

        #include <iostream>
        using namespace std;

        template<typename TYPE>
        int tabSize(TYPE *T) {
            int size = 0;
            size = sizeof(T) / sizeof(T[0]);
            return size;
        }

    How do I make it return the size of the array, not of its pointer?

    Next, the most important one: add() for adding new elements. Inside, first I get the size of the array (but since tabSize() returns a value based on the pointer and not the size, it's of no use right now :/). Then I think I must check if the TYPE of the data is char. Or am I wrong?

        // add(newElement, table)
        template<typename TYPE>
        TYPE add(TYPE L, TYPE *T) {
            int s = tabSize(T);
            // here check if TYPE = char. If yes, get the length of the new name
            int len = 0;
            while (L[len] != '\0') { len++; }
            // current length of table
            int tabLen = 0;
            while (T[tabLen] != '\0') { tabLen++; }
            // if TYPE is char:
            // if current length of table + length of new element exceeds table size, create new table
            if (len + tabLen > s) {
                int newLen = len + tabLen;
                TYPE newTab = new [newLen];
                for (int j = 0; j < newLen; j++) {
                    if (j == tabLen - 1) {
                        for (int k = 0; k < len; k++) {
                            newTab[k] =
                        }
                    } else {
                        newTab[j] = T[j];
                    }
                }
            }
            // else check if tabLen + 1 is greater than s. If yes, enlarge table by 1.
        }

    Am I thinking correctly here? The last function, show(), is correct I guess:

        template<typename TYPE>
        TYPE show(TYPE *L) {
            int len = 0;
            while (L[len] == '\0') { len++; }
            for (int i = 0; i < len; i++) {
                cout << L[i] << endl;
            }
        }

    And my problem with sort() is as follows: how can I influence whether sorting is decreasing or increasing? I'm using bubble sort here.

        template<typename TYPE>
        TYPE sort(TYPE *L, int sort) {
            int s = tabSize(L);
            int len = 0;
            while (L[len] == '\0') { len++; }
            // add control for increasing/decreasing sort
            int i, j;
            for (i = 0; i < len; i++) {
                for (j = 0; j < i; j++) {
                    if (L[i] > L[j]) {
                        int temp = L[i];
                        L[i] = L[j];
                        L[j] = temp;
                    }
                }
            }
        }

    And the main function to run it:

        int main() {
            int sort = 0; // 0 increasing, 1 decreasing
            char * name[100];
            int age[10];
            bool sex[10];
            char c[] = "Tom";
            name[0] = "John";
            name[1] = "Mike";
            cout << add(c, name) << endl;
            system("pause");
            return 0;
        }
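    On the tabSize() question: inside the function template above, T has already decayed to a pointer, so sizeof(T) / sizeof(T[0]) divides the pointer size by the element size. One way to keep the size is to take the array by reference, as in the sketch below, so the compiler still knows N. Note this only works for genuine built-in arrays; for anything allocated with new[] the element count has to be tracked separately:

        #include <cstddef>

        // A reference-to-array parameter preserves the compile-time size N;
        // a TYPE* parameter would have lost it to pointer decay.
        template<typename TYPE, std::size_t N>
        std::size_t tabSize(TYPE (&)[N]) {
            return N;
        }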

    Read the article

  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables: Persons and Categories. Each person gets one category, seems easy enough.

        | Persons      |      | Categories   |
        |--------------|      |--------------|
        | Id (PK)      |      | Id (PK)      |
        | Firstname    |      | CategoryName |
        | Lastname     |      | CreatedTime  |
        | CategoryId   |      | UpdatedTime  |
        | CreatedTime  |      | Deleted      |
        | UpdatedTime  |
        | Deleted      |

    The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes:

    - Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity.
    - IEntityManager: a generic interface (type parameter as Entity) that defines methods like Load, Insert, Update, etc.
    - NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc.

    Now, the Person and Category classes are straightforward, they just define the attributes of the tables of course (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my webpage, I will also need the name of this category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null:

        Namespace Database
            Public Class Person
                Inherits DatabaseFramework.Entity

                Public Overridable Property Firstname As String
                Public Overridable Property Lastname As String
                Public Overridable Property Category As Category

                Public Overridable ReadOnly Property CategoryName As String
                    Get
                        Return If(Me.Category Is Nothing, _
                                  String.Empty, _
                                  Me.Category.CategoryName)
                    End Get
                End Property
            End Class
        End Namespace

    I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread (the root node of the mapping is omitted here):

        <id name="Id" column="Id" type="int" unsaved-value="0">
          <generator class="identity" />
        </id>
        <property name="CreatedTime" type="DateTime" not-null="true" />
        <property name="UpdatedTime" type="DateTime" not-null="true" />
        <property name="Deleted" type="Boolean" not-null="true" />
        <property name="Firstname" type="String" />
        <property name="Lastname" type="String" />
        <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" />

    The final important detail is the Load method of the NHibernateEntityManager implementation. (This is in C# as it's in a different project, sorry about that.) I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use that to fill an EntityCollection(Of TEntity), which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T).
        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    .List<TEntity>();
                return new EntityCollection<TEntity>(entities);
            }
        }

    Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values (including the Category property, and thus the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this:

        Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception: 'Could not initialize proxy - the owning Session was closed.'

    The exception occurs on the DataBind method call here:

        public virtual void LoadGrid()
        {
            if (this.Grid == null) return;
            this.Grid.DataSource = this.Manager.Load();
            this.Grid.DataBind();
        }

    Well, of course the session is closed; I closed it via the using block. Isn't that the correct approach? Should I keep the session open? And for how long? Can I close it after the DataBind method has been run? In any case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is now only getting the Category when it is required (e.g., when the GridView wants to read the CategoryName, which wants to read the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
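    That reasoning matches how NHibernate lazy loading behaves: the many-to-one is returned as a proxy, and the proxy can only be initialized while its owning session is open. One way to keep the close-the-session-in-Load design is to fetch the association eagerly before the session is disposed. A sketch follows; the "Category" path is hypothetical here, since a generic manager would need to be told which associations to fetch, but SetFetchMode is the standard ICriteria call for this:

        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    // Join-fetch the association now, while the session is still open,
                    // so no lazy proxy is left behind for the GridView to trip over.
                    .SetFetchMode("Category", FetchMode.Eager)
                    .List<TEntity>();
                return new EntityCollection<TEntity>(entities);
            }
        }

    Alternatively, mapping the many-to-one with lazy="false", or keeping the session open for the lifetime of the request, avoids the proxy altogether.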

    Read the article

  • LLBLGen Pro v3.0 with Entity Framework v4.0 (12m video)

    - by FransBouma
    Today I recorded a video in which I illustrate some of the database-first functionality available in LLBLGen Pro v3.0. LLBLGen Pro v3.0 also supports model-first functionality, which I hope to illustrate in an upcoming video. LLBLGen Pro v3.0 is currently in beta and is scheduled to RTM some time in May 2010. It supports the following frameworks out of the box, with more scheduled to follow in the coming year: LLBLGen Pro RTL (our own o/r mapper framework), Linq to Sql, NHibernate and Entity Framework (v1 and v4). The video I linked to below illustrates the creation of an entity model for Entity Framework v4, by reverse engineering the SQL Server 2008 example database 'AdventureWorks'. The following topics (among others) are included in the video:

    - Abbreviation support (example: convert 'Qty' into 'Quantity' during name construction)
    - Flexible, framework-specific settings
    - Attribute definitions for various elements (so no requirement for buddy classes or messing with generated code or templates)
    - Retrieval of relational model data from a database
    - Reverse engineering of tables into entities, automatically placed in groups
    - Auto-creation of inheritance hierarchies
    - Refactoring of entity fields into Value Type Definitions (DDD)
    - Mapping a Typed view onto a stored procedure resultset
    - Creation of a Typed list (definition of a query with a projection) on a set of related entities
    - Validation and correction of found inconsistencies and errors
    - Generating code using one of the pre-defined presets
    - Illustration of the code in vs.net 2010

    It also gives a good overview of what it takes with LLBLGen Pro v3.0 to start from a new project, point it to a database, get an entity model, perform tweaks and validation, and generate code which is ready to run. I am no video recording expert, so there's no audio and some mouse movements might be a little too quick. If that's the case, please pause the video. It's rather big (52MB). Click here to open the HTML page with the video (Flash). Opens in a new window. LLBLGen Pro v3.0 is currently in beta (available for v2.x customers) and scheduled to be released somewhere in May 2010.

    Read the article

  • Password Security: Short and Complex versus ‘Short or Lengthy’ and Less Complex

    - by Akemi Iwaya
    Creating secure passwords for our online accounts is a necessary evil due to the huge increase in database and account hacking that occurs these days. The problem, though, is that no two companies have a similar policy for complex and secure password creation; factor in the continued creation of insecure passwords, or multi-site use of the same password, and trouble is just waiting to happen. Ars Technica decided to take a look at multiple password types, how users fared with them, and how well those password types held up to cracking attempts in their latest study. The password types that Ars Technica looked at were comprehensive8, basic8, and basic16. The comprehensive type required a variety of upper-case, lower-case, digits, and symbols, with no dictionary words allowed. The only restriction on the two basic types was the number of characters used. Which type do you think was easier for users to adopt, and which did better in the two password cracking tests? You can learn more about how well users did with the three password types and the results of the tests by visiting the article linked below. What are your thoughts on the matter? Are shorter, more complex passwords better or worse than short or long, but less complex, passwords? What methods do you feel work best, given that most passwords are limited to approximately 16 characters in length? Perhaps you use a service like LastPass or keep a dedicated list/notebook to manage your passwords. Let us know in the comments!

    Read the article

  • Complete Guide to Symbolic Links (symlinks) on Windows or Linux

    - by Matthew Guay
    Want to easily access folders and files from different folders without maintaining duplicate copies? Here's how you can use Symbolic Links to link anything in Windows 7, Vista, XP, and Ubuntu.

    So What Are Symbolic Links Anyway?

    Symbolic links, otherwise known as symlinks, are basically advanced shortcuts. You can create symbolic links to individual files or folders, and these will then appear as if they are stored in the folder with the symbolic link, even though the symbolic link only points to their real location. There are two types of symbolic links: hard and soft. Soft symbolic links work essentially the same as a standard shortcut. When you open a soft link, you will be redirected to the folder where the files are stored. However, a hard link makes it appear as though the file or folder actually exists at the location of the symbolic link, and your applications won't know any different. Thus, hard links are of the most interest in this article.

    Why should I use Symbolic Links?

    There are many things we use symbolic links for, so here are some of the top uses we can think of:

    - Sync any folder with Dropbox – say, sync your Pidgin Profile Across Computers
    - Move the settings folder for any program from its original location
    - Store your Music/Pictures/Videos on a second hard drive, but make them show up in your standard Music/Pictures/Videos folders so they'll be detected by your media programs (Windows 7 Libraries can also be good for this)
    - Keep important files accessible from multiple locations
    - And more!

    If you want to move files to a different drive or folder and then symbolically link them, follow these steps:

    1. Close any programs that may be accessing that file or folder.
    2. Move the file or folder to the new desired location.
    3. Follow the correct instructions below for your operating system to create the symbolic link.

    Caution: Make sure to never create a symbolic link inside of a symbolic link. For instance, don't create a symbolic link to a file that's contained in a symbolically linked folder. This can create a loop, which can cause millions of problems you don't want to deal with. Seriously.

    Create Symlinks in Any Edition of Windows in Explorer

    Creating symlinks is usually difficult, but thanks to the free Link Shell Extension, you can create symbolic links in all modern versions of Windows pain-free. You need to download both the Visual Studio 2005 redistributable, which contains the necessary prerequisites, and the Link Shell Extension itself (links below). Download the correct version (32 bit or 64 bit) for your computer. Run and install the Visual Studio 2005 Redistributable installer first. Then install the Link Shell Extension on your computer. Your taskbar will temporarily disappear during the install, but will quickly come back. Now you're ready to start creating symbolic links. Browse to the folder or file you want to create a symbolic link from. Right-click the folder or file and select Pick Link Source. To create your symlink, right-click in the folder where you wish to save the symbolic link, select "Drop as…", and then choose the type of link you want. You can choose from several different options here; we chose the Hardlink Clone. This will create a hard link to the file or folder we selected. The Symbolic link option creates a soft link, while the Smart copy will fully copy a folder containing symbolic links without breaking them. These options can be useful as well. Here's our hard-linked folder on our desktop.
    Notice that the folder looks like its contents are stored in Desktop\Downloads, when they are actually stored in C:\Users\Matthew\Desktop\Downloads. Also, when links are created with the Link Shell Extension, they have a red arrow on them so you can still differentiate them. And this works the same way in XP as well.

    Symlinks via Command Prompt

    Or, for geeks who prefer working via the command line, here's how you can create symlinks in Command Prompt in Windows 7/Vista and XP.

    In Windows 7/Vista

    In Windows Vista and 7, we'll use the mklink command to create symbolic links. To use it, we have to open an administrator Command Prompt. Enter "command" in your start menu search, right-click on Command Prompt, and select "Run as administrator". To create a symbolic link, we need to enter the following in command prompt:

        mklink /prefix link_path file/folder_path

    First, choose the correct prefix. Mklink can create several types of links, including the following:

    - /D – creates a soft symbolic link, which is similar to a standard folder or file shortcut in Windows. This is the default option, and mklink will use it if you do not enter a prefix.
    - /H – creates a hard link to a file
    - /J – creates a hard link to a directory or folder

    So, once you've chosen the correct prefix, you need to enter the path you want for the symbolic link, and the path to the original file or folder. For example, if I wanted a folder in my Dropbox folder to appear like it was also stored on my desktop, I would enter the following:

        mklink /J C:\Users\Matthew\Desktop\Dropbox C:\Users\Matthew\Documents\Dropbox

    Note that the first path was to the symbolic folder I wanted to create, while the second path was to the real folder. Here, in this command prompt screenshot, you can see that I created a symbolic link of my Music folder to my desktop. And here's how it looks in Explorer. Note that all of my music is "really" stored in C:\Users\Matthew\Music, but here it looks like it is stored in C:\Users\Matthew\Desktop\Music. If your path has any spaces in it, you need to place quotes around it. Note also that the link can have a different name than the file it links to. For example, here I'm going to create a symbolic link to a document on my desktop:

        mklink /H "C:\Users\Matthew\Desktop\ebook.pdf" "C:\Users\Matthew\Downloads\Before You Call Tech Support.pdf"

    Don't forget the syntax:

        mklink /prefix link_path Target_file/folder_path

    In Windows XP

    Windows XP doesn't include built-in command prompt support for symbolic links, but we can use the free Junction tool instead. Download Junction (link below), and unzip the folder. Now open Command Prompt (click Start, select All Programs, then Accessories, and select Command Prompt), and enter cd followed by the path of the folder where you saved Junction. Junction only creates hard symbolic links, since you can use shortcuts for soft ones. To create a hard symlink, we need to enter the following in command prompt:

        junction -s link_path file/folder_path

    As with mklink in Windows 7 or Vista, if your file/folder path has spaces in it, make sure to put quotes around your paths. Also, as usual, your symlink can have a different name than the file/folder it points to. Here, we're going to create a symbolic link to our My Music folder on the desktop. We entered:

        junction -s "C:\Documents and Settings\Administrator\Desktop\Music" "C:\Documents and Settings\Administrator\My Documents\My Music"

    And here are the contents of our symlink.
    Note that the path looks like these files are stored in a Music folder directly on the Desktop, when they are actually stored in My Documents\My Music. Once again, this works with both folders and individual files. Please Note: Junction would work the same in Windows 7 or Vista, but since they include a built-in symbolic link tool, we found it better to use that on those versions of Windows.

    Symlinks in Ubuntu

    Unix-based operating systems have supported symbolic links since their inception, so it is straightforward to create symbolic links in Linux distros such as Ubuntu. There's no graphical way to create them like the Link Shell Extension for Windows, so we'll just do it in Terminal. Open a terminal (open the Applications menu, select Accessories, and then click Terminal), and enter the following:

        ln -s file/folder_path link_path

    Note that this is the opposite of the Windows commands; you put the source for the link first, and then the path second. For example, let's create a symbolic link of our Pictures folder on our Desktop. To do this, we entered:

        ln -s /home/maguay/Pictures /home/maguay/Desktop

    Once again, here are the contents of our symlink folder. The pictures look as if they're stored directly in a Pictures folder on the Desktop, but they are actually stored in /home/maguay/Pictures.

    Delete Symlinks

    Removing symbolic links is very simple – just delete the link! Most of the command line utilities offer a way to delete a symbolic link via command prompt, but you don't need to go to the trouble.

    Conclusion

    Symbolic links can be very handy, and we use them constantly to help us stay organized and keep our hard drives from overflowing. Let us know how you use symbolic links on your computers!

    Download Link Shell Extension for Windows 7, Vista, and XP
    Download Junction for XP

    Read the article

  • Convert OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer on this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has a different coordinate system and functions that alter how you must implement the code a bit. The getPickRay function in the answer linked is what I'm trying to convert. This is the part of my code that I think is giving me trouble when converting from OpenGL to DirectX, because I'm unsure how their coordinate systems differ from one another:

        PRVecX = ((( 2.0f * mouseX) / ClientWidth ) - 1 ) * tan((viewAngle)/2);
        PRVecY = (1-(( 2.0f * mouseY) / ClientHeight)) * tan((viewAngle)/2);

    Another thing that I am unsure about is this part:

        XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
        XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

    A couple of notes:

    - The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have).
    - The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH.
    - I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game.

    To summarize, my questions are:

    1. Does the OpenGL coordinate system differ in such a way that the equation in the first of my code examples wouldn't be valid when used in DirectX 11 (with its respective screen coordinate system)?
    2. Is the OpenGL method Matrix4f.transform(a, b, c) equal to the DirectX method c = XMVector3TransformCoord(b, a) (where a is a matrix and b, c are vectors)? I ask because I know that order is important when it comes to matrices.
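    For what it's worth, the screen-to-NDC mapping itself carries over: in both APIs, normalized device x and y run from -1 to 1 with +y up, so the 1 - 2*mouseY/height flip is right in DirectX too. The detail that most likely cannot be dropped is aspectRatio: XMMatrixPerspectiveFovLH takes a vertical field of view, so the x component of the ray has to be scaled by width/height. A hypothetical reconstruction (DirectXMath; nearZ/farZ are assumed to be the same plane distances passed to the projection matrix, and invMat the inverted view matrix), not byte56's original code:

        float aspectRatio = ClientWidth / (float)ClientHeight;
        float tanHalfFov  = tanf(viewAngle / 2.0f);

        // x needs the aspect-ratio scale because viewAngle is the *vertical* FOV.
        float PRVecX = ((2.0f * mouseX / ClientWidth) - 1.0f) * tanHalfFov * aspectRatio;
        float PRVecY = (1.0f - (2.0f * mouseY / ClientHeight)) * tanHalfFov;

        // Points on the near and far planes in view (camera) space; LH means +z goes into the screen.
        XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearZ, PRVecY * nearZ, nearZ, 1.0f);
        XMVECTOR cameraSpaceFar  = XMVectorSet(PRVecX * farZ,  PRVecY * farZ,  farZ,  1.0f);

        // XMVector3TransformCoord(v, M) computes v * M (row-vector convention),
        // so it plays the role of Matrix4f.transform(M, v, out) from the OpenGL answer.
        XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
        XMVECTOR worldSpaceFar  = XMVector3TransformCoord(cameraSpaceFar, invMat);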

    Read the article

  • What is JavaScript, really?

    - by Lord Loh.
    All this started when I was looking for a way to test my webpage for JavaScript conformance, like the W3C HTML Validator. I have not found one yet, so let me know if you know of any... I looked for the official JavaScript page and found ECMA Script. These people have standardized a scripting language (I do not feel like calling it JavaScript anymore!) and called it ECMA-262 (Wikipedia). Their latest work is Edition 5.1. JavaScript was developed by Mozilla Corporation, and their last stable version is 1.8.5 (see this), which is based on ECMA's edition 5.1. The Wikipedia page linked mentions dialects. Mozilla's JavaScript 1.8.5 is listed as a dialect, along with JScript 9 (IE) and JavaScript (Chrome's V8 [Wiki]) and a lot of others. Am I to understand that JavaScript 1.8.5 is a derivative of ECMA-262 and SpiderMonkey [Wiki] is an engine that runs it? And Chrome has its own dialect, and the V8 engine is the program that runs it? With all these dialects based off ECMA-262, what I can no longer understand is "What is JavaScript"? Are there any truly cross-browser scripting languages? Do the various implementers come together to agree on dialect cross-compatibility? Is this effort ECMA?

    Read the article

  • Can I use `UnlockCommercialFeatures` for developing Java applications without a commercial license?

    - by nondescript1
    As of Java 7 Update 40, Oracle is now including Java Mission Control in the JDK. Always interested in a new profiling tool, I decided to check it out. However, when trying to start Flight Recorder against a process, I get an error telling me that Flight Recorder is a commercial feature and needs to be unlocked. Now I'm getting cold feet about adding the JVM option -XX:+UnlockCommercialFeatures. I would only use this for profiling in development and not in production. From the article linked above: JMC is available under the Oracle Binary Code License for Java. The license allows you to use JMC for free during development and testing, though a different (paid for) licence is required for production use. I'm still leery about this. From Java SE Products, Flight Recorder certainly is a commercial feature; however, I find it very confusing that it's now included in the standard JDK release. Anyone else have a read on this? Clearly nothing here is legally binding, and your legal department should be consulted. Reference: Oracle Binary Code License Agreement for the Java SE Platform Products and JavaFX
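    For context, unlocking Flight Recorder on Oracle JDK 7u40+ takes both flags at JVM startup, along the lines of the sketch below (MyApp.jar is a placeholder; whether the licence permits this outside development is exactly the question above):

        # Development-time profiling only, per the licensing discussion above.
        java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -jar MyApp.jar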

    Read the article

  • Objective-C As A First OOP Language?

    - by Daniel Scocco
    I am just finishing the second semester of my CS degree. So far I have learned C and all the fundamental algorithms and data structures (e.g., searching, sorting, linked lists, heaps, hash tables, trees, graphs, etc.). Next year we'll start with OOP, using either Java or C++. Recently I got some ideas for some iPhone apps and got itchy to start working on them. However, I have heard some bad things about Objective-C in the past, so I am wondering if learning it as my first OOP language could be a problem. Not to mention that I think it will be hard to find books/online courses that teach basic OOP concepts using Objective-C to illustrate the concepts (as opposed to books using Java or C++, which are plenty), so this could be another problem. In summary: should I start learning Objective-C and OOP concepts right now on my own, or wait one more semester until I learn Java/C++ at university and then jump into Objective-C? Update: For those interested in getting started with OOP via Objective-C, I just found some nice tutorials inside Apple's Developer Library - http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/OOP_ObjC/Introduction/Introduction.html

    Read the article

  • Gnome 3 freezes on logon on samsung RV 509

    - by Noufal
    I have a Samsung NP-RV509 A0FIN and I tried to install GNU/Linux with GNOME 3.2 on it. I tried Fedora 16, Ubuntu 11.10 and Linux Mint 12 RC, but with no success. All of these freeze upon login into GNOME Shell. I think it is a problem with the graphics driver, so I tried the xorg-edgers PPA on my last installation, i.e., Linux Mint. I also tried various Intel graphics packages listed in the Synaptic package manager, but no success again. My device configuration is as follows (obtained from Windows 7):

        Component            Details                                    Subscore
        Processor            Intel(R) Pentium(R) CPU P6200 @ 2.13GHz    5.6
        Memory (RAM)         4.00 GB                                    7.2
        Graphics             Intel(R) HD Graphics                       4.6
        Gaming graphics      1562 MB Total available graphics memory    5.2
        Primary hard disk    12GB Free (50GB Total)                     5.9
        Base score: 4.6 (Windows 7 Ultimate)

        System
        Manufacturer: SAMSUNG ELECTRONICS CO., LTD.
        Model: RV409/RV509/RV709
        Total amount of system memory: 4.00 GB RAM
        System type: 32-bit operating system
        Number of processor cores: 2
        64-bit capable: Yes

        Storage
        Total size of hard disk(s): 418 GB
        Disk partition (C:): 12 GB Free (50 GB Total)
        Media drive (D:): CD/DVD
        Disk partition (E:): 526 MB Free (191 GB Total)
        Disk partition (F:): 101 GB Free (177 GB Total)

        Graphics
        Display adapter type: Intel(R) HD Graphics
        Total available graphics memory: 1562 MB
        Dedicated graphics memory: 64 MB
        Dedicated system memory: 0 MB
        Shared system memory: 1498 MB
        Display adapter driver version: 8.15.10.2202
        Primary monitor resolution: 1366x768
        DirectX version: DirectX 10

        Network
        Network Adapter: Realtek PCIe GBE Family Controller
        Network Adapter: Broadcom 802.11n Network Adapter
        Network Adapter: Microsoft Virtual WiFi Miniport Adapter

        Notes: The gaming graphics score is based on the primary graphics adapter. If this system has linked or multiple graphics adapters, some software applications may see additional performance benefits.

    Any help is appreciated, and thanks in advance.

    Read the article

  • BizTalk 2009 - The Scope of the Table Looping Functoid

    - by StuartBrierley
    When mapping in BizTalk you will find there are times when you need to map from flat and dispersed elements in your source schema to a repeated record with child elements in your destination schema. Below is an example of how you can make use of the Table Looping Functoid to bring together these flat elements and create your repeated group. Although this example is purposely simple, I have previously encountered this issue on a much more complex scale when mapping the response from a credit scoring agency, where all the applicant details were supplied in separate parts of a very flat schema. Consider the source and destination schemas as follows: Although the Table Looping Functoid states that the first input must be a scoping element linked from a repeating group, you can actually also make use of a constant value. In this case I know that the source schema always contains two people, so I set this to two. Then you need to set the number of columns in your table, in this case 2 (name and sex), and link all the required fields from the source schema. Following this you can configure the table. You can then add the Table Extractor functoids and complete the map. If you now validate this map you will see that BizTalk will warn you about the scoping link for the Table Looping Functoid, but this can be safely ignored:

        C:\Code\Developer Folders\Stuart Brierley\Test Mapping\TableLooping.btm: warning btm1071: A first input of the Table-Looping functoid must be a link from a Source Tree Node which acts as the scoping parameter.

    Testing the map will produce the following output:

    Read the article

  • When the canonical page itself changes url

    - by lulalala
    This is a continuation of the question: How to handle canonical url changes like Stack Overflow. Say I have the canon url:

        questions/11/car  <--- canonically linked from ---  questions/11/

    What will happen if I want to change the canon url to questions/11/car-with-sgx? Obviously, questions/11/ will point to the new canon url. But how should the old questions/11/car change to the new one? There are two ways:

    1. 301 redirect the old canon url to the new canon url
    2. have the old canon url canonically link to the new canon url

    According to this post: [By using a canonical link instead of a redirect,] OldPage.html's rankings will drop over time due to fewer internal links, but the canonical tag won't make it disappear entirely. It could theoretically remain in their index until one of the following occurs:

    - it is redirected permanently via 301
    - it returns a 404 for an extended period of time (they will keep checking for a while before dropping a URL)
    - a meta robots "noindex" tag is added

    If this is true, I really need to use a redirect from the old canon url to the new canon url, which means I need to keep a log of the previous old canon urls of this content, so I know when I can redirect. This is a bit of a hassle to do.
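    For reference, the redirect half of option 1 is a one-liner in most server configs. A hypothetical Apache example (the post doesn't say what stack the site runs on; mod_alias syntax shown, with the paths taken from the question):

        # Permanently send the old canonical URL to the new one.
        Redirect permanent /questions/11/car /questions/11/car-with-sgx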

    Read the article

  • Need some advice regarding collision detection with the sprite changing its width and height

    - by Frank Scott
    So I'm messing around with collision detection in my tile-based game and everything works fine and dandy using this method. However, now I am trying to implement sprite sheets so my character can have a walking and jumping animation. For one, I'd like to be able to have each frame be of variable size, I think. I want collision detection to be accurate, and during a jumping animation the sprite's height will be smaller (because of the calves meeting the hamstrings). Again, this also works fine at the moment. I can get the character to animate properly each frame and cycle through animations. The problems arise when the width and height of the character change. Often its position will be corrected by the collision detection system, and the character will be rubber-banded to random parts of the map or even go outside the map bounds. For some reason, with the linked collision detection algorithm, when the width or height of the sprite is changed on the fly, the entire algorithm breaks down. The solution I have found so far is to have a single width and height for the sprite that remains constant, and only adjust the source rectangle for drawing. However, I'm not sure exactly what to set as the sprite's constant bounding box, because it varies so much with the different animations. So now I'm not sure what to do. I'm toying with the idea of pixel-perfect collision detection, but I'm not sure if it would really be worth it. Does anyone know how Braid does its collision detection? My game is also a 2D sidescroller and I was quite impressed with how it was handled in that game. Thanks for reading.
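    One common way to make the constant-size approach workable is to anchor both the physics box and the drawn frame to the same reference point (usually the feet), so frames of any size can be drawn around a hitbox that never changes. A sketch with XNA-style types; the sizes and names here are made up for illustration:

        // The collision box is fixed; only the drawn source rectangle varies per frame.
        class Player
        {
            public Vector2 Position;   // anchor: bottom-centre of the character (the feet)
            const int HitWidth = 24;   // constant physics size, never animated
            const int HitHeight = 48;

            public Rectangle BoundingBox
            {
                get
                {
                    return new Rectangle((int)Position.X - HitWidth / 2,
                                         (int)Position.Y - HitHeight,
                                         HitWidth, HitHeight);
                }
            }

            public void Draw(SpriteBatch batch, Texture2D sheet, Rectangle frame)
            {
                // Align the frame's bottom-centre to the same anchor, whatever its size,
                // so animation never moves or resizes the physics box.
                Vector2 origin = new Vector2(frame.Width / 2f, frame.Height);
                batch.Draw(sheet, Position, frame, Color.White, 0f, origin, 1f, SpriteEffects.None, 0f);
            }
        }

    The collision system only ever sees BoundingBox, so swapping animation frames (or handling a shorter jumping pose as a second constant box, if needed) can't rubber-band the character.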

    Read the article

  • Ideas for my MSc project and Google Summer of Code 2011

    - by Chris Wilson
    I'm currently putting together ideas for my master's project which I'll be working on over the summer, and I would like to be able to use this time to help Ubuntu in some way. I have the freedom to come up with pretty much any project in the field of software development/engineering provided it:

    - Is a substantial piece of software (for reference, I will be working on it for five full months)
    - Solves a problem for more people than just myself

    I was hoping to use this project as an opportunity to get some experience with the underbelly of Linux, so that I can mention on my CV that I have 'experience in developing for *NIX in C++', which I'm noticing more and more companies are looking for these days, probably because stuff's moving to cloud servers and that's where Linux rules the roost. My problem is that, since I don't have the experience to begin with, I'm not sure what to do for such a project, and I was wondering if anyone could help me with this. I've noticed from Daniel Holbach's blog that Ubuntu participated in the Google Summer of Code 2010, and that project ideas for that can be found here. However, I have not been able to find anything related to Ubuntu and GSoC 2011, but I have noticed from the GSoC timeline that the list of mentoring organisations will not be published until March 18th. I have two questions here. Has Ubuntu applied to be a part of Summer of Code 2011, and what is the status of the 2010 project list linked to earlier? Were they all implemented, or are there still some that can be picked up now, should I not participate in GSoC? I'd like to do something for Ubuntu, but I'd rather not spend my time reinventing the wheel.

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You're probably most familiar with the terms 'Seek' and 'Scan' from the graphical plans produced by SQL Server Management Studio (SSMS). The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek. You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful, are they? Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn't necessarily true). Recall also yesterday's post where we saw two Clustered Index Seek operations doing very different things. The first Seek performed 63 single-row seeking operations; and the second performed a 'Range Scan' (more on those later in this post). I hope you agree that those were two very different operations, and perhaps you are wondering why there aren't different graphical plan icons for Range Scans and Seeks? I have often wondered about that, and the first person to mention it after yesterday's post was Erin Stellato (twitter | blog).

    Before we go on to make sense of all this, let's look at another example of how SQL Server confusingly mixes the terms 'Scan' and 'Seek' in different contexts. The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it. The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the 'scan direction' entry in the Properties window snippet. Is this a seek or a scan? The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on. This fragment is what lies behind the single Index Seek icon shown above. You'll find the same confusing references to Seeks and Scans throughout the product and its documentation.

    Making Sense of Seeks

    Let's forget all about scans for a moment, and think purely about seeks. Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level. A seek starts at the root and navigates down through the levels of the index to find the point of interest.

    Singleton Lookups

    The simplest sort of seek predicate performs this traversal to find (at most) a single record. This is the case when we search for a single value using a unique index and an equality predicate. It should be readily apparent that this type of search will either find one record, or none at all. This operation is known as a singleton lookup. Given the example table from before, the following query is an example of a singleton lookup seek. Sadly, there's nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index. The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table). If you happen to run your query with STATISTICS IO ON, you will notice that 'Scan Count' is always zero for a singleton lookup.

    Range Scans

    The other type of seek predicate is a 'seek plus range scan', which I will refer to simply as a range scan.
    The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page. The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier. One subtle (but important) point is that the notion of a 'forward' or 'backward' scan applies to the logical key order defined when the index was built. In the present case, the non-clustered primary key index was created as follows:

        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col ASC)
        )
        ;

    Notice that the primary key index specifies an ascending sort order for the single key column. This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order. If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.

    A range scan seek predicate may have a Start condition, an End condition, or both. Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction. Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive. The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right:

    Query 1 specifies that all key_col values less than 5 should be returned in ascending order. The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn't less than 5 because all later index entries are guaranteed to sort higher).

    Query 2 asks for key_col values greater than 95, in descending order. SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn't greater than 95. Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value. This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that. Oh well.

    Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order. This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

    Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.
    Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition 'key_col < 25' and scans backward until 'key_col >= 20' is no longer true. One final point to note: seek operations always have the Ordered: True attribute. This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward. You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally. In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

    Multiple Seek Predicates

    As we saw yesterday, a single index seek plan operator can contain one or more seek predicates. These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them. For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

        SELECT key_col
        FROM dbo.Example
        WHERE key_col = 10
        OR key_col BETWEEN 15 AND 20
        ORDER BY key_col ASC
        ;

    In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10]. This allows both range scans to be evaluated by a single seek operator. To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

    Final Thoughts

    That's it for today – tomorrow we'll look at monitoring singleton lookups and range scans, and I'll show you a seek on a heap table. Yes, a seek. On a heap. Not an index! If you would like to run the queries in this post for yourself, there's a script below. Thanks for reading!
    IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
    BEGIN
        DROP TABLE dbo.Example;
    END
    ;
    -- Test table is a heap
    -- Non-clustered primary key on 'key_col'
    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col)
    )
    ;
    -- Non-unique non-clustered index on the 'data' column
    CREATE NONCLUSTERED INDEX [IX dbo.Example data]
    ON dbo.Example (data)
    ;
    -- Add 100 rows
    INSERT dbo.Example WITH (TABLOCKX)
    (
        key_col,
        data
    )
    SELECT
        key_col = V.number,
        data = V.number
    FROM master.dbo.spt_values AS V
    WHERE V.[type] = N'P'
    AND V.number BETWEEN 1 AND 100
    ;
    -- ================
    -- Singleton lookup
    -- ================
    ;
    -- Single value equality seek in a unique index
    -- Scan count = 0 when STATISTICS IO is ON
    -- Check the XML SHOWPLAN
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 32
    ;
    -- ===========
    -- Range Scans
    -- ===========
    ;
    -- Query 1
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col <= 5
    ORDER BY E.key_col ASC
    ;
    -- Query 2
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col > 95
    ORDER BY E.key_col DESC
    ;
    -- Query 3
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 10
    AND E.key_col < 15
    ORDER BY E.key_col ASC
    ;
    -- Query 4
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 20
    AND E.key_col < 25
    ORDER BY E.key_col DESC
    ;
    -- Final query (singleton + range = 2 range scans)
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 10
    OR E.key_col BETWEEN 15 AND 20
    ORDER BY E.key_col ASC
    ;
    -- === TIDY UP ===
    DROP TABLE dbo.Example;

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi

    Read the article

  • Special thanks to everyone that helped me in 2010.

    - by mbcrump
    2010 has been a very good year for me and I wanted to create a list and thank everyone for what they have done for me.  I also wanted to thank everyone for reading and subscribing to my blog. It is hard to believe that people actually want to read what I write. I feel like I owe a huge thanks to everyone listed below. Looking back upon 2010, I feel that I’ve grown as a developer and you are part of that reason. Sometimes we get caught up in day to day work and forget to give thanks to those that helped us along the way. The list below is mine, it includes people and companies. This list is obviously not going to include everyone that has helped, just those that have stood out in my mind. When I think back upon 2010, their names keep popping up in my head. So here goes, in no particular order.  People Dave Campbell – For everything he has done for the Silverlight Community with his Silverlight Cream blog. I can’t think of a better person to get recognition at the Silverlight FireStarter event. I also wanted to thank him for spending several hours of his time helping me track down a bug in my feedburner account. Victor Gaudioso – For his large collection of video tutorials on his blog and the passion and enthusiasm he has for Silverlight. We have talked on the phone and I’ve never met anyone so fired up for Silverlight. Kunal Chowdhury – Kunal has always been available for me to bounce ideas off of. Kunal has also answered a lot of questions that stumped me. His blog and CodeProject article have green a great help to me and the Silverlight Community. Glen Gordon – I was looking frantically for a Windows Phone 7 several months before release and Glen found one for me. This allowed me to start a blog series on the Windows Phone 7 hardware and developing an application from start to finish that Scott Guthrie retweeted.  Jeff Blankenburg – For listening to my complaints in the early stages of Windows Phone 7. Jeff was always very polite and gave me his cell phone number to talk it over. He also walked me through several problems that I was having early on. Pete Brown – For writing Silverlight 4 in Action. This book is definitely a labor of love. I followed Pete on Twitter as he was writing it and he spent a lot of late nights and weekends working on it. I felt a lot smarter after reading it the first time. The second time was even better. John Papa – For all of his work on the Silverlight Firestarter and the Silverlight community in general. He has also helped me on a personal level with several things. Daniel Heisler – For putting up with me the past year while we worked on many .NET projects together in 2010. Alvin Ashcraft – For publishing a daily blog post on the best of .NET links. He has linked to my site many times and I really appreciate what he does for the community. Chris Alcock – For publishing the Morning Brew every weekday. I remember when I first appeared on his site, I started getting hundreds of hits on my site and wondered if I was getting a DOS attack or something. It was great to find out that Chris had linked to one of my articles. Joel Cochran – For spending a week teaching “Blend-O-Rama”. This was my one of my favorite sessions of this year. I learned a lot about Expression Blend from it and the best part was that it was free and during lunchtime. Jeremy Likness – Jeremy is smart – very smart. I have learned a lot from Jeremy over the past year. 
He is also involved in the Silverlight community in every way possible, from forums to blog posts to screencasts to open source. It goes on and on. The people that I met at VSLive Orlando 2010: I had a great time chatting with Walt Ritscher, Wallace McClure, Tim Huckabee and David Platt. Also a special thanks to all of my friends on Twitter like @wilhil, @DBVaughan, @DataArtist, @wbm, @DirkStrauss and @rsringeri and many, many more. Software Companies / Events – many of which gave me FREE stuff. =) Microsoft (3) – I was sent a free coupon code by Microsoft to take the Silverlight 4 Beta Exam. I jumped on the offer and took the exam. It was great being selected to try out the exam before it went public, even though Microsoft eventually published a universal coupon code for everyone. I am still waiting to find out if I passed the exam. My fingers are crossed. Microsoft reaching out to me with some questions regarding the .NET Community. I’ve never had a company contact me with such interest in the community. Having a contest where 75 people could win a $100 gift certificate and a T-Shirt for submitting a Windows Phone 7 app. I submitted my app and won. All of the free launch events this year (Windows Phone 7, Visual Studio 2010, ASP.NET MVC). Wintellect – For providing an awesome day of free technical training called T.E.N. Where else can you get free training from some of the best programmers in the world? I also won a contest from them that included a NETAdvantage Ultimate License from Infragistics. VSLive – I attended the Orlando 2010 Conference and it was the best developer’s conference that I have ever attended. I got to know a lot of people at this conference and hung out with many wonderful speakers. I live-tweeted the event, and while it may have annoyed some, the organizers of VSLive loved it. I won the contest on Twitter and they invited me back to the 2011 session of my choice. This is a very nice gift and I really appreciate the generosity. BarcodeLib.com – For providing free barcode generating tools for a Non-Profit ASP.NET project that I was working on. Their third party controls really made this a breeze compared to my existing solution. NDepend – It is absolutely the best tool to improve code quality. The product is extremely large and I would recommend heading over to their site to check it out. Silverlight Spy – I was writing a blog post on Silverlight Spy and Koen Zwikstra provided a FREE license to me. If you ever wanted to peek inside of a Silverlight application then this is the tool for you. He is also working on a version that will support OOB and Windows Phone 7. I would recommend checking out his site. Birmingham .NET Users Group / Silverlight Nights User Group – It takes a lot of time to put together a user group meeting every month, yet it always seems to happen. I don’t want to name names for fear of leaving someone out, but both of these user groups are excellent if you live in the Birmingham, Alabama area. Publishing Companies: Manning Publishing – For giving me early access to Silverlight 4 in Action by Pete Brown. It was really nice to be able to read this awesome book while Pete was writing it. I was also one of the first people to publish a review of the book. Sams Publishing and DZone – For providing a copy of Silverlight 4 Unleashed by Laurent Bugnion for me to review for their site. The review is coming in January 2011. 
Special Shoutout to the following 3rd Party Silverlight Control companies: It has been a great pleasure to work with the following companies on 3rd Party Control Giveaways every month. It always amazes me how every 3rd Party Control company is so eager to help out the community. I’ve never been turned down by any of these companies! These giveaways have sparked a lot of interest in Silverlight and hopefully I can continue giving away a new set every month. If you are a 3rd Party Control company and are interested in participating in these giveaways then please email me at mbcrump29[at]gmail[d0t].com. The companies below have already participated in my giveaways: Infragistics (December 2010) - Win a set of Infragistics Silverlight Controls with Data Visualization!  Mindscape (November 2010) - Mindscape Silverlight Controls + Free Mega Pack Contest Telerik (October 2010) - Win Telerik RadControls for Silverlight! ($799 Value) Again, I just wanted to say Thanks to everyone for helping me grow as a developer.  Subscribe to my feed

    Read the article

  • Kernel compile error with iw_ndis.c

    - by James
    Hi, I have a hp pavilion dm3t with intel HD graphics running ubuntu 10.10 64 bit. I'm trying to compile and install a patched kernel according to this:
    https://launchpad.net/~kamalmostafa/+archive/linux-kamal-mjgbacklight
    So I downloaded the tarball from here (linked to from the page above):
    http://kernel.ubuntu.com/git?p=kamal/ubuntu-maverick.git;a=shortlog;h=refs/heads/mjg-backlight
    I untar'd it to a directory, entered the directory and did:

        make defconfig

    I'm not sure if that's what I should have done, but it was successful, so I did:

        make

    which seemed to work fine until it gave these errors:

        ubuntu/ndiswrapper/iw_ndis.c:1966: error: unknown field ‘num_private’ specified in initializer
        ubuntu/ndiswrapper/iw_ndis.c:1966: warning: initialization makes pointer from integer without a cast
        ubuntu/ndiswrapper/iw_ndis.c:1967: error: unknown field ‘num_private_args’ specified in initializer
        ubuntu/ndiswrapper/iw_ndis.c:1967: warning: excess elements in struct initializer
        ubuntu/ndiswrapper/iw_ndis.c:1967: warning: (near initialization for ‘ndis_handler_def’)
        ubuntu/ndiswrapper/iw_ndis.c:1970: error: unknown field ‘private’ specified in initializer
        ubuntu/ndiswrapper/iw_ndis.c:1970: warning: initialization makes integer from pointer without a cast
        ubuntu/ndiswrapper/iw_ndis.c:1970: error: initializer element is not computable at load time
        ubuntu/ndiswrapper/iw_ndis.c:1970: error: (near initialization for ‘ndis_handler_def.num_standard’)
        ubuntu/ndiswrapper/iw_ndis.c:1971: error: unknown field ‘private_args’ specified in initializer
        ubuntu/ndiswrapper/iw_ndis.c:1971: warning: initialization from incompatible pointer type
        make[2]: *** [ubuntu/ndiswrapper/iw_ndis.o] Error 1
        make[1]: *** [ubuntu/ndiswrapper] Error 2
        make: *** [ubuntu] Error 2

    How can I compile and install this kernel successfully? I'm new to this and would appreciate any help.
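    A note on what these errors usually indicate: in kernels of this vintage, the num_private, num_private_args, private, and private_args members of struct iw_handler_def (declared in include/net/iw_handler.h) only exist when CONFIG_WEXT_PRIV is set, and a bare `make defconfig` discards the Ubuntu config, which is believed to enable that option. The sketch below uses simplified stand-in types and placeholder values, not the real kernel headers or ndiswrapper source, to show why the initializer then fails and what the guard looks like:

        /* Sketch: why "unknown field 'num_private'" appears. Stand-in types
         * only; the real definition lives in include/net/iw_handler.h. */
        #include <stdio.h>

        struct iw_handler_def {            /* simplified stand-in */
            const void    *standard;       /* standard WEXT handler table */
            unsigned short num_standard;
        #ifdef CONFIG_WEXT_PRIV            /* these fields exist only when */
            unsigned short num_private;    /* CONFIG_WEXT_PRIV is enabled  */
            unsigned short num_private_args;
            const void    *private;
            const void    *private_args;
        #endif
        };

        /* An initializer that names those fields unconditionally, as
         * ndiswrapper's does, breaks when CONFIG_WEXT_PRIV is unset: */
        static const struct iw_handler_def ndis_handler_def = {
            .standard     = 0,
            .num_standard = 0,
        #ifdef CONFIG_WEXT_PRIV            /* guarding like this (or setting */
            .num_private      = 0,         /* CONFIG_WEXT_PRIV=y in .config) */
            .num_private_args = 0,         /* removes the errors above       */
            .private          = 0,
            .private_args     = 0,
        #endif
        };

        int main(void)
        {
            printf("num_standard = %u\n", ndis_handler_def.num_standard);
            return 0;
        }

    If that diagnosis holds, the less invasive route is to build from the distribution config instead of defconfig, e.g. cp /boot/config-$(uname -r) .config followed by make oldconfig, so that ubuntu/ndiswrapper sees the options it expects; treat this as a suggestion to verify rather than a confirmed fix for this tree.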

    Read the article

  • Getting More Out of UPK

    - by [email protected]
    Are you getting the most out of UPK? Remember the idea of streamlining your content creation efforts? How about the concept of collaboration during development? How are you leveraging the System Process Documents or Test Scripts? Is your training team benefiting from the creation of process documentation? Is UPK linked into the help menu of your application, or even at the browser level (Smart Help)? Many customers underutilize UPK. Some customers think of UPK as just a training creation solution, or just a tool for creating documentation. To get the full value of UPK you need to first evaluate how the UPK developer is installed. Single User or Multi User? If you have more than two UPK developers, there is a significant benefit to installing UPK in multi-user mode. This helps drive collaboration, automatic version control, and better use of the workflow and state features through customized views for the developers. Has your organization installed Usage Tracking? How are the outputs deployed, and for how many applications? If these questions have you thinking about your overall usage of UPK and you see room for improvement by using more of what UPK has to offer, then it could be time for a UPK Health Check. Contact your UPK Sales Consultant to help understand your environment and how to maximize the value of UPK and start getting more out of the product.

    Read the article

  • Steve Jobs Goes On Medical. iPad 2 and iPhone 5 On Track.

    - by Gopinath
    Here is a bit of disappointing news for Apple fan boys. Steve Jobs is again going on medical leave as he wants to concentrate on his health for some time. In an email to the employees of Apple, Steve said, "At my request, the board of directors has granted me a medical leave of absence so I can focus on my health... I will continue as CEO and be involved in major strategic decisions for the company. I have great confidence that Tim and the rest of the executive management team will do a terrific job executing the exciting plans we have in place for 2011." In the mail, Steve also said that plans for the product releases scheduled in 2011 will not be affected. This means the rumoured schedule (iPad 2 In April, iPhone 5 In June With New Hardware) is still on track. There is not much information on the medical complications Steve is facing now, but many think it's linked to the liver transplant he had in 2009. Whatever the reason may be, we wish him a speedy recovery. Here is the full content of the email Steve Jobs sent to all employees: Team, At my request, the board of directors has granted me a medical leave of absence so I can focus on my health. I will continue as CEO and be involved in major strategic decisions for the company. I have asked Tim Cook to be responsible for all of Apple’s day to day operations. I have great confidence that Tim and the rest of the executive management team will do a terrific job executing the exciting plans we have in place for 2011. I love Apple so much and hope to be back as soon as I can. In the meantime, my family and I would deeply appreciate respect for our privacy. Steve This article, titled Steve Jobs Goes On Medical. iPad 2 and iPhone 5 On Track., was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • dual boot Windows 7 and Ubuntu 11.04, black screen when loading Windows

    - by Sean
    I am proficient with Windows and not so much with Linux. Here is my story: The original system came with Windows 7, I got openSUSE installed on the second hard drive, and dual boot for this setup worked fine. I wanted to switch to a Windows 7 and Ubuntu 11.04 dual boot, so I did a Windows system recovery and it appeared to give me back a fresh Windows 7 install. I then go to install Ubuntu 11.04 and the installer informs me I have multiple operating systems already installed. I go to the advanced partitioning option and sure enough Windows 7 is on /sda while openSUSE is still on /sdb. From here I followed this guide (How to dual-boot Linux and Ubuntu with two hard drives) after I had deleted all the openSUSE partitions on /sdb through the Allocate Drive Space tab of the installer. I make the /boot, swap, /, and /home partitions and set the GRUB into the MBR of the second disk (/dev/sdb). Everything installs fine. I reboot, Windows loads automatically, I install EasyBCD and add an entry for Ubuntu into the Windows Boot Manager, assigning the type as GRUB2. Rebooting the system now shows dual booting options for both Windows and Ubuntu. The problem is: while I can use Ubuntu fine, when I try to boot into Windows it just gives me a black screen, and after a little while the fans start running crazy. If I restart the computer I will sometimes get the message that my system was put into hibernation mode because the temperature got too high (90C), which I presume is connected with the fans going crazy. I have linked the output from the Boot Info Script below; any suggestions on how to fix this issue would be greatly appreciated! UPDATED SCRIPT OUTPUT Boot Info Script output: http://paste.ubuntu.com/682152/

    Read the article

  • Platform Builder: Removing the Version Information from the Desktop

    - by Bruce Eitman
    The question of how to remove the version information from the desktop has been around for a long time. It came up again today. The question is about the string displayed on the desktop that looks like one of these, depending on the OS version: Windows Embedded CE v6.00 (Build xxxx on xxxx) Microsoft Windows CE v5.00 (Build xxxx on xxxx) Microsoft Windows CE .NET v4.20 (Build xxxx on xxxx) I have looked into this in the past, but never really had a definitive answer. I have an answer now. The short answer is that the version information is displayed if the code is built without SHIP_BUILD defined.  I have to be honest, I have given this answer in the newsgroups in the past, but I still had questions. My questions have come from different build machines giving different results. I have noticed that some engineers’ workstations would have the version information displayed, while others did not. I always stopped short of spending time investigating further because our release build machines never resulted in the version information being displayed. But we do not typically define SHIP_BUILD for our releases, because our customers want or need the debug output. So today I dug further into the question. The answer is actually quite simple. Microsoft builds the retail shell libraries with SHIP_BUILD defined and releases the libraries with Platform Builder. Normally the source code does not need to be built during Sysgen, so the libraries that Microsoft delivered are linked to create the Explorer shell. So typically the Explorer shell displays the version information for debug builds, but does not for retail builds. The trouble comes when the source code is forced to be rebuilt for a retail build. This might happen if an engineer uses “Build and Sysgen” or builds the Public\Shell folder from the command line with the clean flag. I am not sure if Build and Sysgen will cause the problem or not – I have never used Build and Sysgen and I strongly advise against using it (see Platform Builder: Don’t use Build and Sysgen) Copyright © 2010 – Bruce Eitman All Rights Reserved
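    To make the mechanism concrete, here is a minimal sketch of the kind of compile-time gate described above. SHIP_BUILD is the real build flag; everything else (the function name, the printf, the placeholder build values) is a hypothetical stand-in, not the actual Windows CE shell source:

        /* Hypothetical sketch of SHIP_BUILD gating; not the real shell code.
         * In Platform Builder the flag comes from the build environment
         * (set SHIP_BUILD=1), so the same source yields different binaries. */
        #include <stdio.h>

        static void draw_desktop(void)
        {
        #ifndef SHIP_BUILD
            /* Built without SHIP_BUILD: paint the version string on the
             * desktop. The version and build values here are placeholders. */
            printf("Windows Embedded CE v6.00 (Build %d on %s)\n",
                   1234, "BUILDPC");
        #endif
            /* ... paint the rest of the desktop ... */
        }

        int main(void)
        {
            draw_desktop();   /* with -DSHIP_BUILD the string disappears */
            return 0;
        }

    Compiling once with and once without -DSHIP_BUILD reproduces the behavior in miniature: the SHIP_BUILD build stays silent, while the non-SHIP_BUILD build prints the version line, which mirrors the difference between the retail libraries Microsoft ships and a locally rebuilt shell without SHIP_BUILD defined.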

    Read the article

  • Wordpress Multisite- How to restrict a non-wordpress page only to one domain

    - by yibe
    I am running three blogs from a WordPress multisite. Currently, I am developing a separate, non-WordPress application. I want to link this application with blog1. I put the application in a subfolder of my root directory (the WordPress multisite is installed in the root). I linked the application in the menu of blog1. So far so good. But it is possible to access the application from all three blogs. I want the application to be accessible only from blog1-domain/my-app/, but when I tried blog2-domain/my-app, it just opened normally. In general, the application is accessible from all three domains followed by /my-app. I don't think it is proper to go this way. So, what can I do to restrict my app to be accessible only from blog1-domain/my-app? Or am I not using a good blog structure? I am open to all kinds of suggestions. Thank you for your help.

    Read the article

  • scheme vs common lisp: war stories

    - by SuperElectric
    There is no shortage of vague "Scheme vs Common Lisp" questions on StackOverflow, so I want to make this one more focused. The question is for people who have coded in both languages: While coding in Scheme, what specific elements of your Common Lisp coding experience did you miss most? Or, inversely, while coding in Common Lisp, what did you miss from coding in Scheme? I don't necessarily mean just language features. The following are all valid things to miss, as far as the question is concerned: Specific libraries. Specific features of development environments like SLIME, DrRacket, etc. Features of particular implementations, like Gambit's ability to write blocks of C code directly into your Scheme source. And of course, language features. Examples of the sort of answers I'm hoping for: "I was trying to implement X in Common Lisp, and if I had Scheme's first-class continuations, I totally would've just done Y, but instead I had to do Z, which was more of a pain." "Scripting the build process in my Scheme project got increasingly painful as my source tree grew and I linked in more and more C libraries. For my next project, I moved back to Common Lisp." "I have a large existing C++ codebase, and for me, being able to embed C++ calls directly in my Gambit Scheme code was totally worth any shortcomings that Scheme may have vs Common Lisp, even including lack of SWIG support." So, I'm hoping for war stories, rather than general sentiments like "Scheme is a simpler language" etc.

    Read the article

  • Scheme vs Common Lisp: war stories

    - by SuperElectric
    There is no shortage of vague "Scheme vs Common Lisp" questions on both StackOverflow and this site, so I want to make this one more focused. The question is for people who have coded in both languages: While coding in Scheme, what specific elements of your Common Lisp coding experience did you miss most? Or, inversely, while coding in Common Lisp, what did you miss from coding in Scheme? I don't necessarily mean just language features. The following are all valid things to miss, as far as the question is concerned: Specific libraries. Specific features of development environments like SLIME, DrRacket, etc. Features of particular implementations, like Gambit's ability to write blocks of C code directly into your Scheme source. And of course, language features. Examples of the sort of answers I'm hoping for: "I was trying to implement X in Common Lisp, and if I had Scheme's first-class continuations, I totally would've just done Y, but instead I had to do Z, which was more of a pain." "Scripting the build process in my Scheme project got increasingly painful as my source tree grew and I linked in more and more C libraries. For my next project, I moved back to Common Lisp." "I have a large existing C++ codebase, and for me, being able to embed C++ calls directly in my Gambit Scheme code was totally worth any shortcomings that Scheme may have vs Common Lisp, even including lack of SWIG support." So, I'm hoping for war stories, rather than general sentiments like "Scheme is a simpler language" etc.

    Read the article

< Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >