Search Results

Search found 9147 results on 366 pages for 'big smile'.


  • SQL SERVER – Fix: Error: 147 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference

    - by pinaldave
    Everybody was a beginner once, and I always like to get involved in questions from beginners. There is a big difference between a question from a beginner and a question from an advanced user. I have noticed that when an advanced user gets an error, they usually need just a small hint to resolve the problem. When a beginner gets an error, however, he or she sometimes sits on it for a long time, with no idea how to solve the problem and no idea what the product is capable of. I recently received a very novice-level question. When I read it, I quickly saw where the user was stuck. When I replied with the solution, he wrote a long email explaining how he had not been able to solve the problem, and thanked me multiple times. This whole exchange inspired me to write this quick blog post. I have modified the user's question to match the code with AdventureWorks and simplified it so it contains only the core content I want to discuss.

    Problem Statement: Find all the details of SalesOrderHeader for the latest ShipDate.

    He came up with the following T-SQL query:

    ```sql
    SELECT *
    FROM [Sales].[SalesOrderHeader]
    WHERE ShipDate = MAX(ShipDate)
    GO
    ```

    When he executed the script above, it gave him the following error:

    Msg 147, Level 15, State 1, Line 3
    An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.

    He was not able to resolve this problem, even though the solution is hinted at in the error message itself. Due to lack of experience he came up with another version of the query, based on the error message:

    ```sql
    SELECT *
    FROM [Sales].[SalesOrderHeader]
    HAVING ShipDate = MAX(ShipDate)
    GO
    ```

    When he ran this query, it produced another error:

    Msg 8121, Level 16, State 1, Line 3
    Column 'Sales.SalesOrderHeader.ShipDate' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause.

    What he actually wanted was all the SalesOrderHeader rows for sales shipped on the last day. Based on the problem statement, the right solution is the following, which does not generate an error:

    ```sql
    SELECT *
    FROM [Sales].[SalesOrderHeader]
    WHERE ShipDate = (SELECT MAX(ShipDate) FROM [Sales].[SalesOrderHeader])
    ```

    Well, that's it! Very simple. With SQL Server there are always multiple solutions to a single problem. Is there any other solution to the problem stated? Please share in the comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
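    As one possible answer to the closing question (an editorial sketch, not from the original post), TOP (1) WITH TIES can return the same rows without a subquery; it assumes the same AdventureWorks table used above:

    ```sql
    -- Sketch of an alternative solution (not from the original post).
    -- ORDER BY ShipDate DESC puts the latest date first, and WITH TIES
    -- keeps every row that shares that latest ShipDate.
    SELECT TOP (1) WITH TIES *
    FROM [Sales].[SalesOrderHeader]
    ORDER BY ShipDate DESC;
    ```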

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I could probably squeeze quite a lot more out of MySQL, but it seems we're clinging to the wrong tool for the task just because we know it well, and eventually we will hit a wall (as has happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this:

    - A user can send a gift to other users.
    - Each gift has a list of recipients, or is public, in which case it is limited by a number or an expiration date.
    - For each gift sent we generate a token that uniquely identifies that gift.
    - For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.).
    - A user can request to see all of his currently pending gifts.
    - A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
    - We require the ability to "dump" or "ignore" expired gifts (gifts x days old are considered expired).

    There are some other requirements, but I believe the above covers the essentials. How would I go about modelling that using HBase or Cassandra?

    Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept it for longer, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the operations are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset at the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL store, since it's very hard to find examples of such designs out there. Brainlag, I'm not trying to come off as rude. I actually appreciate a lot that you are the only one who even bothered to respond, but I see it over and over again: people ask questions, and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please; the question is about NoSQL.

    Read the article

  • I owe you an explanation

    - by Blueberry Coder
    Welcome to my blog! I am Frédéric Desbiens, a new member of the ADF Product Management team. I joined Oracle only a few weeks ago. My boss is Grant Ronald, and I have the privilege to work in the same team as Susan Duncan, Frank Nimphius, Lynn Munsinger and Chris Muir. I share with them a passion for all things Java and ADF. With this blog, I hope to help you be more successful with our products, whether you are a customer or a partner. You may have heard of me before. Maybe you have my book on your bookshelf, or maybe we met at a conference. I went to JavaOne, ODTUG Kaleidoscope and Oracle OpenWorld in the past, when I worked for a major consulting firm. I will spare you all the details of my career; you can have a look at my LinkedIn profile if you are curious about my past. Usually, my posts will be of a technical nature, and will focus on Oracle ADF and Oracle JDeveloper. SOA and portals have always been two topics of interest for me, however, and I will write about them. Over time, you will probably get acquainted with my « strategic » side as well. I devour history books, and have always had a tendency to look at the big picture. I will probably not resist the temptation of mixing IT and history, but this will be occasional, I promise! At this point, I owe you an explanation about the title of the blog. I am French-Canadian, and wanted to evoke my roots in an obvious yet unobtrusive way. I was born in Chicoutimi, which is one of the main cities of the Saguenay-Lac-Saint-Jean region. Traditionally, a large part of the wild blueberry production of the province of Québec comes from there. A common nickname for the inhabitants is thus Les Bleuets, « The Blueberries » in English. I hope to see you around. You can also follow me on Twitter at @BlueberryCoder.

    Read the article

  • Getting Started Plugging into the "Find in Projects" Dialog

    - by Geertjan
    In case you missed it amidst all the code in yesterday's blog entry, the "Find in Projects" dialog is now pluggable. I think that's really cool. The code yesterday gives you a complete example, but let's break it down a bit and deconstruct down to a very simple hello world scenario. We'll end up with as many extra tabs in the "Find in Projects" dialog as we need, for example, three in this case. And clicking on any of those extra tabs will, in this simple example, simply show us this: Once we have that, we'll be able to continue adding small bits of code over the next few blog entries until we have something more useful. So, in this blog entry, you'll literally be able to display "Hello World" within a new tab in the "Find in Projects" dialog:

    ```java
    import javax.swing.JComponent;
    import javax.swing.JLabel;
    import org.netbeans.spi.search.provider.SearchComposition;
    import org.netbeans.spi.search.provider.SearchProvider;
    import org.netbeans.spi.search.provider.SearchProvider.Presenter;
    import org.openide.NotificationLineSupport;
    import org.openide.util.lookup.ServiceProvider;

    @ServiceProvider(service = SearchProvider.class)
    public class ExampleSearchProvider1 extends SearchProvider {

        @Override
        public Presenter createPresenter(boolean replaceMode) {
            return new ExampleSearchPresenter(this);
        }

        @Override
        public boolean isReplaceSupported() {
            return false;
        }

        @Override
        public boolean isEnabled() {
            return true;
        }

        @Override
        public String getTitle() {
            return "Demo Extension 1";
        }

        public class ExampleSearchPresenter extends SearchProvider.Presenter {

            private ExampleSearchPresenter(ExampleSearchProvider1 sp) {
                super(sp, true);
            }

            @Override
            public JComponent getForm() {
                return new JLabel("Hello World");
            }

            @Override
            public SearchComposition composeSearch() {
                return null;
            }

            @Override
            public boolean isUsable(NotificationLineSupport nls) {
                return true;
            }
        }
    }
    ```

    That's it, not much code, works fine in NetBeans IDE 7.2 Beta, and is easier to digest than the big chunk from yesterday. If you make three classes like the above in a NetBeans module, and you install it, you'll have three new tabs in the "Find in Projects" dialog. The only required dependencies are Dialogs API, Lookup API, and Search in Projects API. Read the javadoc linked above and then in next blog entries we'll continue to build out something like the sample you saw in yesterday's blog entry.

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:

    ```sql
    CREATE TABLE [dbo].[lq_ActivityLog](
        [ID] [bigint] IDENTITY(1,1) NOT NULL,
        [PlacementID] [int] NOT NULL,
        [CreativeID] [int] NOT NULL,
        [PublisherID] [int] NOT NULL,
        [CountryCode] [nvarchar](10) NOT NULL,
        [RequestedZoneID] [int] NOT NULL,
        [AboveFold] [int] NOT NULL,
        [Period] [datetime] NOT NULL,
        [Clicks] [int] NOT NULL,
        [Impressions] [int] NOT NULL,
        CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED
        (
            [Period] ASC,
            [PlacementID] ASC,
            [CreativeID] ASC,
            [PublisherID] ASC,
            [RequestedZoneID] ASC,
            [AboveFold] ASC,
            [CountryCode] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    ```

    And now some assumptions and additional information:

    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard code (e.g. US) and there is already a country table with an integer ID. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size: [size chart from the original post omitted]

    Design Goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
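    As an editorial sketch (not the author's follow-up post), the kinds of changes hinted at above might look like the following; the CountryID lookup column, the narrower data types, the dropped IDENTITY column and the compression option are assumptions, not the author's actual solution:

    ```sql
    -- Hypothetical refactoring sketch. Assumes a dbo.Country lookup table with a
    -- small integer key (mentioned in the assumptions above), and assumes the
    -- unused bigint IDENTITY column is not referenced by anything else.
    CREATE TABLE [dbo].[lq_ActivityLog](
        [PlacementID]     [int]      NOT NULL,
        [CreativeID]      [int]      NOT NULL,
        [PublisherID]     [int]      NOT NULL,
        [CountryID]       [smallint] NOT NULL,  -- replaces nvarchar(10) CountryCode; < 300 countries
        [RequestedZoneID] [int]      NOT NULL,
        [AboveFold]       [smallint] NOT NULL,  -- only needs to hold -1, 0 and 1
        [Period]          [date]     NOT NULL,  -- only a date is ever stored
        [Clicks]          [int]      NOT NULL,
        [Impressions]     [int]      NOT NULL,
        CONSTRAINT [PK_lq_ActivityLog] PRIMARY KEY CLUSTERED
        (
            [Period] ASC,
            [PlacementID] ASC,
            [CreativeID] ASC,
            [PublisherID] ASC,
            [RequestedZoneID] ASC,
            [AboveFold] ASC,
            [CountryID] ASC
        )
        WITH (DATA_COMPRESSION = PAGE)  -- optional, Enterprise-only; trades CPU for space
    ) ON [PRIMARY];
    ```

    Whether each change is safe depends on details the post doesn't give (for example, whether anything reads the ID column), so treat this as a starting point for the comments the author asked for.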

    Read the article

  • CES 2011–Microsoft Keynote Impressions

    - by guybarrette
    Microsoft has been kicking off CES for a number of years by delivering a keynote on the evening of the event's first day. This year, SteveB talked about Xbox, Kinect, new Windows 7 laptops, Surface 2 and Windows vNext running on the ARM architecture. Some of the designs of the new laptops shown are quite amazing. One has dual screens with no physical keyboard: the image is split between both screens, and a software keyboard appears when you place your 10 fingers on the lower screen. Another, from Samsung, has a sliding keyboard somewhat like numerous cell phones have. What I found most amazing is that Intel was able to miniaturize a full Intel architecture (CPU, motherboard, memory) into a tiny form factor. Imagine having the power of a full PC running .NET apps in a Zune/iPod form factor! They also showed V2 of the Surface device. This one is called the Samsung SUR40 for Surface PC. It's much sleeker, and it will likely lose the BAT (Big Ass Table) moniker. More info here. SteveB announced that Windows vNext will run on ARM chips. I'm intrigued by this announcement (you can read about it here) and I have many questions:

    - In the past, ARM devices were slow; what now makes the ARM architecture able to run Windows?
    - ARM is 32-bit only, I think.
    - Does this mean that Intel wasn't able to provide such a lightweight architecture, or simply that they weren't interested?
    - From what I understand, apps would need to be recompiled for ARM. Will we need to do that from an ARM PC, or could it be done natively on Intel or on an Intel PC running in an ARM VM? VS 2012? Ahhhh, smells like a cool PDC is coming up.

    Clearly it looks like PCs have enough power for most of us right now and that the race is now about miniaturization, power consumption and battery life. You can watch the Microsoft CES 2011 keynote here.

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor, and consequently on a world format. If I were to handle the game "world" being created just as a layered set of structures, either in top or side views, it would be considerably simple to do most things. But since this editor is meant for 3rd parties, I have no clue how big the worlds people will want to make are, and I need to keep in mind that eventually it will become simply too much to check, handle and compare things that are happening completely away from the player position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I hoped to get answered, for some hints on the topic.

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks with equal sizes, or would you let the user subdivide areas by taste, with irregularly sized rectangles?
    2. In the case of an even grid, would you use some kind of block/chunk neighbouring system to check when the player crosses the limit, or just put all the blocks in a simple array?
    3. Since a region is a different data structure than its owner "game world", when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?
    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children, and replicate in each layer node a node per region? Or the opposite: the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just keep the region logic outside the graph completely (compatible with the first suggestion in Q.3)?
    5. When I say virtually infinite worlds, I mean it, of course, under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it is sane to think beyond that? I think it is OK to stick to this limit, since it will never be reached so easily.
    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem here is that I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region which can be teleported to by a warp within a radius of the player?

    Sorry for the huge question; any answers are helpful!

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly, and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the east coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event. Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it's somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and that they want a well laid out conference. What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it's important to have industry-leading speakers, MVPs etc. at the conference, only questions about Microsoft speakers. I know survey writing is very difficult without biasing the answers one way or another. There was also no choice to show people's preference: would people prefer Microsoft speakers, or the summit to be held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers to the SQLCAT and CSS teams; surely that indicates another issue, a lack of understanding of what these teams do. All in all, it is clear that people showed they want an event outside of Seattle, but don't want PASS putting money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting on certain questions, which has prioritised them over the huge desire for a PASS summit outside of Seattle. Let's see where we will be in 2013, or maybe they will rethink 2012, who knows.

    Read the article

  • Designing a completely new database/GUI solution for my company

    - by user1277304
    I'm no expert when it comes to everything Visual Studio 2010 and utilizing SQL Server 2008. I'm sure some of the personal projects I've built for my own use would get laughed off the face of the planet, but SQL CE has been the solution I was looking for for those home-type projects. And they work, flawlessly. Now I feel it's time to step up to the big league. I want to develop a complete, unified and module-based solution for the company that I'm working for. We're still using stuff from the 80s, for goodness' sake! I use Excel and query the ancient database on my own because I can't stand the GUI. Nothing against people of age, but the IDE our programmers are using is from the stone age, and they use APL of all things with it. I've yet to see a radio button control anywhere in the GUI where it would make sense. Anyway, I want to do this right from the ground up. I'm by no means a newbie when it comes to programming in .NET 2010; however, I want the entire solution to be done professionally. I want version control, test projects, project flow, SQL 2008 integration and all the bells and whistles that come with that. I know for a fact that if we had something like that running, not only would development costs and time be slashed fourfold, but the possibilities for expansion and performance would skyrocket. (Between the GUI and our DB engine, it can only use ONE CORE! ONE! It's 2012, for goodness' sake!) Our business is growing and our current ancient solution just can't keep up, and I'd hate to see our business go down in flames because our programmer is stuck in the 80s and refuses to use anything current. So I ask you guys, the experts and know-it-alls: where do I start? Are there any gems of good books out there in the haystack of all the "for dummies" type of deals? I already have several people backing me in this endeavour, and while it may seem brash to just usurp the current programmers, I'm doing this for the company as a whole. Thank you guys for your time.

    Read the article

  • Can working exclusively with niche apps or tech hurt your career in software development? How to get out of the cycle? [closed]

    - by Keoma
    I'm finding myself in a bit of a pickle. I've been at a pretty comfortable IT group for almost a decade. I got my start here working on web development, mostly CRUD, but have demonstrated the ability to figure out more complex problems. I'm not a rock star, but I have received many compliments on my programming aptitude, and technologists and architects have commented on my ability to pick things up (for example, I recently learned a very popular web framework that shall remain nameless since I don’t want to be identified). My problem is that, over time, my responsibilities have been shifting towards work such as support or ‘development’ with some rather niche products (afraid to mention here due to potential for being identified). Some of this work, if it includes anything resembling coding, is very menial scripting in languages such as Powershell or VBScript. The vast majority of the time, however, a typical day consists of going back and forth with the product’s vendor support to send them logs and apply configuration changes or patches they recommend. I’m basically starved for some actual software development. However, even though I’m more than capable of doing that development work (and actually do a much better job at it than anything else), our boss is more interested in the kind of work I mentioned above, her reasoning being that since no one else in the organization wants to do it, it must mean job security. This has been going on for close to 3 years, and the only reason I have held on is on the promise that we would eventually get more development projects assigned to us. Well, that turned out not to be true at all. A recent talk with the boss has just made it more explicitly clear, as she told me in no uncertain terms that it’s very likely that development work (web or otherwise) would go to another group. The reason given to me is that our we don’t have enough resources in our group to handle that. So now I find myself in the position that I either have to stay in what has essentially become a dead end IT job that is tied to the fortunes of a niche stack of apps, or try to find a position that will be better for my long term career. My problem (is it a problem?), however, is that compared to others, my development projects in the last three years are very sparse in number. To compound things, projects using the latest and most popular frameworks, amount to the big fat number of just one—with no work of that kind in the foreseeable future. I am very concerned that this sparseness in my resume is a deficit, and that it will hurt my chances of landing a different job. I’m also wondering how much it will hurt me, and whether that can be ameliorated with hobby projects of my own. I guess I’m looking for opinions. Thank you very much for reading.

    Read the article

  • Actor and Sprite, who should own these properties?

    - by Gerardo Marset
    I'm writing sort of a 2D game engine to make the process of creating games easier. It has two classes, Actor and Sprite. Actor is used for interactive elements (the player, enemies, bullets, a menu, an invisible instance that controls score, etc.) and Sprite is used for animated (or not) images with transparency (or not). An actor may have an assigned sprite that represents it on the screen, which may change during the game. E.g. in a top-down action game you may have an actor with a sprite of a little guy that changes when attacking, walking, and facing different directions, etc. Currently the actor has x and y properties (its coordinates on the screen), while the sprite has an index property (the number of the frame currently being shown by the sprite). Since the sprite doesn't know which actor it belongs to (or if it belongs to an actor at all), the actor must pass its x and y coordinates when drawing the sprite. Also, since an actor may reset its sprite each frame (and usually does), the sprite's index property must be passed from the old to the new sprite, like so (pseudocode):

    ```
    function change_sprite(new_sprite)
        old_index = my.sprite.index
        my.sprite = new_sprite()
        my.sprite.index = old_index % my.sprite.frames
    end
    ```

    I always thought this was kind of cumbersome, but it was never a big problem. Now I have decided to add support for more properties: namely a property to draw the sprite rotated, a property to draw it flipped, a property to draw it stretched, etc. These should probably belong to the sprite and not the actor, but if they do, the actor would have to pass them from the old to the new sprite each time it changes... On the other hand, if they belonged to the actor, the actor would have to pass each property to the sprite when drawing it (since the sprite doesn't know which actor it belongs to, and it shouldn't, since sprites aren't only meant to be used by actors, really). Another option I thought of would be having an extra class that owns all these properties (plus index, x and y) and links an actor with a sprite, but that doesn't come without drawbacks. So, what should I do with all these properties? Thanks!

    Read the article

  • Investment scheme for a PC game project

    - by Alex Kamen
    Good day everyone, I am working on a PC game project that has 3 phases planned, micro, macro and mmo versions [if confused, see a brief description at the bottom]. I have found a potential investor for the micro version of the game, but naturally, he requested a detailed plan of how the game will pay back. And the problem is that micro version itself is not supposed to be monetized much, other than some ads and limited in-game currency utilization. The idea is that with this combat demo already at hand, it should be possible to get a really large enough investment (millions of dollars) and use it to pay back the initial small one (thousands of dollars) and take the project into macro phase, which will really make profit. This way, everybody is going to win, provided that I can deliver the end-product. Yet while I am confident of that both the conception of the macro and the real game-play of the micro versions are going to be appealing, I don’t know how to obtain any guarantee of that I will be able to get funded once I have the prototype ready. And without that, I won’t receive the funds for the prototype in the first place! To summarize, my question is: how to figure out my future possibilities of getting funded once I have combat demo out, basically “whom to write to and what”. Ideally, I would like some sort of a preliminary agreement with a game publisher, something that would basically state “If the developer provides the product in time and in quality corresponding to the specifications given, the publisher guarantees to allocate funds for distribution and further development, thereby acquiring the right to X part of all future profits”. Does this sound sane? It’s just that I don’t want to sell all of my rights out straight away by taking a big outside investment while the project is in such early stage. I would appreciate if you would share your thoughts on this kind of scheme, and be sure to ask questions as I am sure I must have forgotten to mention a ton of important things, like the fact that initial funds are going to be spent on outsourcing (living in Siberia is really just great). [here’s a brief outline of what each version will feature] [micro] 1) turn based tactical combat rules 2) character development 3) arena/tournament system [macro] 4) ai-ruled dynamic interactive worlds 5) global map adventuring 6) strategic rpg + god simulator gameplay [mmo] 7) Persistent worlds system 8) Social structures system (“guilds/clans”) 9) god-simulation on the mmo scale P.S. Obviously, these features are incremental, so that mmo version has all 9.

    Read the article

  • How to move a website and domain name without experiencing downtime for emails or site?

    - by user4842
    Okay, I have a pretty complex problem, so I'll get right to it. I'm a designer who built a new website for my client. Their old site is hosted at GoDaddy, as well as their email. Problem is, the guy who built the original site decided to put the original domain name and hosting under HIS personal GoDaddy account. Well, that turned out to be a bad move for several reasons. Here's how it's all tied together. The original domain name, www.domainoriginal.com, was actually purchased at Network Solutions. The original web designer pointed the nameservers from Network Solutions to his GoDaddy account, where the email and hosting is setup. The new domain name, www.domainnew.com, was purchased under a new and separate GoDaddy account belonging to the company, and the new website was built under a 3rd party platform (Big Commerce). So, the www.domainnew.com is already pointed to the new website using A records at new GoDaddy account. All is fine there. However, they still need www.domainoriginal.com to point to the NEW website as well. (The old one can simply be deleted, it is NOT important). AND, they want to keep their old email addresses intact and working as well, but under the NEW GoDaddy account. Obviously, I have no DNS control at Network Solutions, and I have no idea what kind of control I have at GoDaddy under the old account because the web designer will not let me see inside his account. But, he and GoDaddy both tell me nothing can be done other than to repoint the nameservers to Network Solutions, and then repoint the A record to my new website, www.domainnew.com, and point the MX Records to GoDaddy. I'm told the downtime would be 24-48 hours if I do this. Ideally, we'd like to do a domain name transfer and get www.domainoriginal.com in the new GoDaddy account created by the company. But, I'm told this could take up to 7 days. Does this mean the site and email will be down for 7 days? And any emails sent during this time, would they be lost forever? If I do this, how long could I expect the site and email to go down? And, will the emails be permanently lost? I've gotten different answers from everybody at GoDaddy so I kind of don't trust them anymore... Any help would be greatly appreciated Thanks, Tyson
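    For illustration only (the actual host names, IP addresses and mail servers in this situation are unknown), the end state being described, the old domain's A record pointing at the new site while MX records keep mail where it is, looks something like this in zone-file form:

    ```
    ; Hypothetical zone entries for domainoriginal.com -- every value below
    ; is a placeholder, not taken from the question.
    @      IN  A     203.0.113.10              ; IP of the new storefront
    www    IN  A     203.0.113.10
    @      IN  MX 10 mail.godaddy-mx.example.  ; existing mail host stays as-is
    ```

    Lowering the TTL on these records a day or two before making the change shortens the window during which resolvers keep serving the old values.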

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices? The classic answer: write a web service layer Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different? The better answer: the iSql SDK Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API. OSQL in 25 lines of code As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video. Beta out now - your chance to give us your suggestions! We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Goodbye, Spreadsheets and Hello Modern ERP

    - by Christine Randle
    By: Steve Cox, Vice President, Oracle Accelerate for Midsize Companies     Signs of the resurging economy continue to sprout, with green shoots rising across different sectors and industries. With the economy on the rebound, businesses are increasing their investment in technology to keep up with growth and evolving demands; as proof, Gartner recently increased its worldwide IT spending forecast for 2012 to $3.6 trillion, anticipating a 3 percent increase from 2011 spending.   One of the segments most reliant on technology to catapult growth is midsize companies – established businesses leveraging every competitive efficiency and advantage to compete with much larger enterprises. We find that to compete against the big guys, they need to create an internal technology infrastructure to fuel that growth. Goodbye, spreadsheets and hello modern ERP.   While many businesses postponed upgrading or replacing financial and HR management systems during the recession, now some have started dusting off RFPs and revisiting technology options. Years ago, midsize organizations used spreadsheet-based systems and processes to manage employees, customers, partners, products and revenue. We’ve found that as companies scale up, they are apt to avoid heavily customizing their existing systems, and instead are more prone to standardize on a modern, enterprise-class ERP system.   Modern ERP platforms enable growing companies to immediately address the most pressing challenges – accounting, talent management, customer retention, et. al. Midsize companies implement these systems and processes to help them earn more, go public or expand globally.   And today, choice is a primary factor when selecting an ERP solution. Businesses have more deployment options now than ever before, depending on their unique structures and needs. Whether the preference is on demand, cloud, hosted or on premise, a modular, scalable deployment is available to meet the need.   With modern ERP systems, business that once struggled to do more with fewer resources have access to the same quality tools as larger competitors. By adopting top tier ERP systems tailored to individual business needs, midsize companies can support business operations while creating an enterprise system that seamlessly scales up to fuel future growth. Meaning that the ERP decision that your company makes today, will have legs to serve your business for years to come.

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
    I regularly visit client sites and review their SQL Server configurations. I come across all kinds of strange settings. I've been thinking about a way to aggregate people's configurations and see what's common and what's unique. I used to do that with polls on SQLTeam.com. I think we can find out more interesting things if we look at combinations of settings in relation to size and volume. I've been working on an application for another project that is similar. It will be fairly easy to use that code for this. I can have something up and running in a few days, if people are interested in it. I admit that I often come up with ideas that just don't make sense. This may be one of them. One of your biggest concerns has to be how secure your data is. My solution is not to store anything identifying. The instance name and database names can both be "anonymized", and I don't store the machine name or IP address or anything to do with logins. Some of the questions I'm curious about are:

    - At what size database does the Enterprise Edition become prevalent?
    - Given the total size of the databases, how much RAM is common?
    - How many people have multiple data files? At what size does that become prevalent?
    - How common is database mirroring? Replication? Log shipping?
    - How common is full recovery mode? At what data size does it become prevalent?

    I think those are all questions that are easy to answer -- with the right data. The big question is whether or not people will share their SQL Server configurations. I understand that organizations in regulated or high-security environments can't participate. But I think that leaves many, many people that can. Are you willing to share your configuration and learn about others? I have a simple sign-up form here. It's actually a mailing list signup that also captures your edition, number of servers and largest database. The list will only be used for this project. Is your SQL Server configured correctly? Do you wonder what the next step is as your data grows? Take a second and sign up.
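    Not part of the original post, but a rough sketch of how one might gather the same facts for their own servers using standard catalog views (edition, recovery model, data file count and total size per database):

    ```sql
    -- Self-audit sketch along the lines of the questions above.
    SELECT SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion;

    SELECT d.name,
           d.recovery_model_desc,
           COUNT(CASE WHEN mf.type_desc = 'ROWS' THEN 1 END) AS DataFiles,
           SUM(mf.size) * 8 / 1024 AS TotalSizeMB   -- size is stored in 8 KB pages
    FROM sys.databases d
    JOIN sys.master_files mf ON mf.database_id = d.database_id
    GROUP BY d.name, d.recovery_model_desc
    ORDER BY TotalSizeMB DESC;
    ```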

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I've been meaning to write for a while; good that it fits with this month's T-SQL Tuesday, hosted by Joey D'Antoni (@jdanton).

    Ever since I got into databases, I've been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I've also spent a long time as a developer, and appreciate that databases don't exactly fit within the stuff I learned in my first year of uni, particularly the "Algorithms and Data Structures" subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps that's why I'm a fan of database internals, of indexes, latches, execution plans, and so on: the developer in me wants to be reassured that we're getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, which was obviously going to be much faster than the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I'd looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn't exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton, its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you'd've made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it's durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using Bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert. This is data the way that developers do it when they're coding for performance, the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don't support every feature that regular SQL tables do, this is still an excellent direction that has been taken.

    @rob_farley
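    To make the description above concrete, here is a minimal sketch of what a Hekaton table declaration looks like in SQL Server 2014; the table and column names are invented, and the database is assumed to already contain a MEMORY_OPTIMIZED_DATA filegroup:

    ```sql
    -- Hypothetical memory-optimized table (SQL Server 2014).
    CREATE TABLE dbo.ShoppingCart
    (
        CartId     INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerId INT       NOT NULL,
        CreatedUtc DATETIME2 NOT NULL,
        INDEX ix_CustomerId NONCLUSTERED (CustomerId)  -- range index, implemented as a Bw-tree
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    ```

    The hash index backs point lookups on CartId, while DURABILITY = SCHEMA_AND_DATA gives the small transaction-log write described above; SCHEMA_ONLY would skip even that, at the cost of losing rows on restart.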

    Read the article

  • Am I deluding myself? Business analyst transition to programmer

    - by Ryan
    Current job: Working as the lead business analyst for a Big 4 firm, leading a team of developers and testers working on a large scale re-platforming project (4 onshore dev, 4 offshore devs, several onshore/offshore testers). Also work in a similar capacity on other smaller scale projects. Extent of my role: Gathering/writing out requirements, creating functional specifications, designing the UI (basically mapping out all front-end aspects of the system), working closely with devs to communicate/clarify requirements and come up with solutions when we hit roadblocks, writing test cases (and doing much of the testing), working with senior management and key stakeholders, managing beta testers, creating user guides and leading training sessions, providing key technical support. I also write quite a few macros in Excel using VBA (several of my macros are now used across the entire firm, so there are maybe around 1000 people using them) and use SQL on a daily basis, both on the SQL compact files the program relies on, our SQL Server data and any Access databases I create. The developers feel that I am quite good in this role because I understand a lot about programming, inherent system limitations, structure of the databases, etc so it's easier for me to communicate ideas and come up with suggestions when we face problems. What really interests me is developing software. I do a fair amount of programming in VBA and have been wanting to learn C# for awhile (the dev team uses C# - I review code occasionally for my own sake but have not had any practical experience using it). I'm interested in not just the business process but also the technical side of things, so the traditional BA role doesn't really whet my appetite for the kind of stuff I want to do. Right now I have a few small projects that managers have given me and I'm finding new ways to do them (like building custom Access applications), so there's a bit here and there to keep me interested. My question is this: what I would like to do is create custom Excel or Access applications for small businesses as a freelance business (working as a one-man shop; maybe having an occasional contractor depending on a project's complexity). This would obviously start out as a part-time venture while I have a day job, but eventually become a full-time job. Am I deluding myself to thinking I can go from BA/part-time VBA programmer to making a full-time go of a freelance business (where I would be starting out just writing custom Excel/Access apps in VBA)? Or is this type of thing not usually attempted until someone gains years of full-time programming experience? And is there even a market for these types of applications amongst small businesses (and maybe medium-sized) businesses?

    Read the article

  • Setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year. Hardware specifications A relative brought up a newer computer soon afterward: Operating system: Windows XP Asus K8N motherboard (with broken on-board sound; getting a sound card) Athlon 64? processor (don't remember the clock speed) 512 MB RAM Hope the graphics card works... Replacement sound card will be one of: Ensoniq ES1370 AudioPCI Diamond Monster Sound MX300 (Aureal chipset) Sound Blaster Audigy 2 SE Peripherals HP Scanjet 3400c scanner (USB connected) HP LaserJet multi-function printer (parallel port connected, and printing works with a PCL driver) Same serial mouse as old computer Question I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions. I have only one day to decide whether to use Ubuntu or buy Windows: A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken? Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP? Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95. Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)? Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire. Is there any "Large Fonts" option for those with poor vision? The lack of it would be a big disadvantage. BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • Independent Research on 1500 Companies Reveals Challenges in Performance Visibility – Part 1

    - by ndwyouell
    At the end of May I was joined by Professor Andy Neely of Cambridge University on a webinar, with an audience of over 700, to discuss the results of this extensive study which covered 13 countries and nearly every commercial and industrial sector.  What stunned both of us was not so much the number listening but the 100 questions they asked in just 1 hour.  This certainly represents a record in my experience and for those that organized the webinar. So what was all the fuss about?  Well, to begin with this was a pretty big sample and it represented organizations with over $100m sales across the USA, Europe, Africa and the Middle East. It also delivered some pretty interesting results across a wide range of EPM subjects such as profitability, planning and reporting.  Let’s look at some of those findings. We kicked off with profitability, one of the key factors in driving performance, or that is what you would think, but in fact 82% of our respondents said they did not have complete visibility into the profitability of their organization. 91% of these went further to say that, not surprisingly, this lack of knowledge into the profitability has implications with over half citing 3 or more implications.  Implications cited included misallocated resources, revenue opportunities not maximized, erroneous decisions made and impaired financial performance.  Quite a list of implications, especially given the difficult economic circumstances many organizations are operating in at this time. So why is this?  Well other results in the study point to some of the potential reasons.  Firstly 59% of respondents that use spreadsheets use them for monitoring profitability and 93% of all managers responding to the study use spreadsheets to gather and analyze information.  This is an enormous proportion given the problems with using spreadsheets based performance management systems that have been widely talked about for many years.  For profitability analysis this is particularly important when you consider the typical requirement will be to allocate cost and revenue across 6+ dimensions based on many different allocation methods.  Not something that can be done easily in spreadsheets plus it gets to be a nightmare once you want to change allocations, run different scenarios and then change the basis of your planning and budgeting! It is no wonder so many organizations have challenges in performance visibility. My next blog will look at the fragmented nature of many organizations’ planning.  In the meantime if you want to read the complete report on the research go to: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=10077790&src=7038701&Act=29

    Read the article

  • Traditional POS is Dead

    - by David Dorf
    Traditional POS is dead -- I've heard that one before. Here's an excerpt from Joe Skorupa's blog over at RIS where he relayed ten trends that were presented at NRF. 7. Mobile POS signals death of traditional POS. Shoppers don't love self-checkout, but they prefer it to long queues or dealing with associates. Fixed POS is expensive and bulky. Mobile POS frees floor space for other purposes and converts associates from being cashiers to being sales assistants that provide new levels of customer service and incremental basket sales. In addition to unplugging the POS, new alternatives are starting to take hold - thin client, POS as a service, and replacing POS software with e-commerce platforms. I'll grant that in some situations for some retailers there might be an opportunity to to ditch the traditional POS, but for the majority of retailers that's just not practical. Take it from a guy that had to wake up at 3am after every Thanksgiving to monitor POS systems across the US on Black Friday. If a retailer's website goes down on Black Friday, they will take a significant hit. If a retailer's chain-wide POS system goes down on Black Friday, that retailer will cease to exist. Mobile POS works great for Apple because the majority of purchases are one or two big-ticket items that don't involve cash. There's still a traditional POS in every store to fall back on (its just hidden). Try this at home: Choose your favorite e-commerce site and add an item to the cart while timing how long it takes. Now multiply that by 15 to represent the 15 items you might buy at store like Target. The user interface isn't optimized for bulk purchases, and that's how it should be. The webstore and POS are designed for different purposes. Self-checkout is a great addition to POS and so is mobile checkout. But they add capabilities to POS, not replace it. Centralized architectures, even those based in the cloud, are quite viable as long as there's resiliency in the registers. You cannot assume perfect access to the network, so a POS must always be able to sell regardless of connectivity. Clearly the different selling channels should be sharing common functionality. Things like calculating tax, accepting coupons, and processing electronic payments can be shared, usually through a service-oriented architecture. This lowers costs and providers greater consistency, both of which help retailers. On paper these technologies look really good and we should continue to push boundaries, but I'm not ready to call the patient dead just yet.

    Read the article

  • Help with Strategy-game AI

    - by f20k
    Hi, I am developing a strategy-game AI (think: Final Fantasy Tactics), and I am having trouble coming up with the design of the AI. My main problem is determining which is the optimal thing for it to do. First let me describe the priority of the actions I would like the AI to take:

    1. Kill the nearest player unit
    2. Fulfill the primary directive (kill all player units, kill a target unit, survive for x turns)
    3. Heal an ally unit / cast a buffer

    Now the AI can do the following in its turn:

    - Move - {Attack / Ability / Item} (either attack or ability or item)
    - {Attack / Ability / Item} - Move
    - Move closer (if targets are not in range)
    - {Attack / Ability / Item} (if move is not available)

    Notes:

    - Abilities have various ranges / effects / costs. Each AI unit has maybe 5-10 abilities to choose from.
    - The AI will prioritize killing over safety unless its directive is to survive for x turns. It also doesn't care much about ability cost. While a player may want to save a big spell for later, the AI will most likely use it asap.
    - Movement is on a (hex) grid.
    - Number of player units: 3-6; number of AI units: 3-7 or more, probably max 10.
    - AI and player take turns controlling ONE unit, instead of all at the same time.
    - Platform is Android (if the program doesn't respond after some time, there will be a popup saying to Force Quit or Wait, which looks really bad!).

    Now come the questions: The best ability to use would obviously be the one that hits the most targets for the most damage. But since each ability has a different range, I won't know if targets are in range without exploring each possible place I can move to. One solution would be to go through each possible place to move to and determine the optimal attack at that location, which gives me a list of optimal moves for each location. Then choose the optimal move out of the list and execute it. But this will take a lot of CPU time. Is there a better solution? My current idea is to move as close as possible towards the closest, largest group of people, and determine the optimal attack/ability from there. I think this would be a lot less work for the CPU and still allow for wide-range attacks. It's sub-optimal, but the AI will still seem 'smart'.

    Other notes/questions: Am I over-thinking/over-complicating it? Better solution? I am open to all sorts of suggestions. I have taken a look at the spell-casting question, but it doesn't take into account the movement, so perhaps use that algorithm for each possible move location? The top answer mentioned it wasn't great for area-of-effect and group fights, so maybe it requires more tweaking? Please, if you mention a graph/tree, let me know basically how to use it. E.g. a node means an ability, level corresponds to damage, then search for the deepest node.

    Read the article

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets that thread run something else while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure. Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda in to it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:

    ```csharp
    m_BuildControl.FilterEnabledForBuilding(
        projects,
        enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
            enabledProjects,
            newDirtyProjects =>
            {
                // Mark any currently broken projects as dirty
                newDirtyProjects.UnionWith(m_BrokenProjects);
                // Copy what we found into the set of dirty things
                m_DirtyProjects = newDirtyProjects;
                RunSomeBuilds();
            }));
    ```

    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding? C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:

    ```csharp
    var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
    var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);
    // Mark any currently broken projects as dirty
    newDirtyProjects.UnionWith(m_BrokenProjects);
    // Copy what we found into the set of dirty things
    m_DirtyProjects = newDirtyProjects;
    RunSomeBuilds();
    ```

    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns, and can go back to making the UI responsive, or whatever else threads do in their spare time. This is actually a massive simplification, and if you're interested in all the gory details, and the speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.

    Read the article

  • BIDS Helper 1.6 Beta Release (now with SQL 2012 support!)

    - by Darren Gosbell
    The beta for BIDS Helper 1.6 was just released. We have not updated the version notification just yet as we would like to get some feedback on people's experiences with the SQL 2012 version. So if you are using SQL 2012, go grab it and let us know how you go (you can post a comment on this blog post or on the BIDS Helper site itself). This is the first release that supports SQL 2012 and consequently also the first release that runs in Visual Studio 2010. A big thanks to Greg Galloway for doing the bulk of the work on this release.

    Please note that if you are doing an xcopy deploy you will need to unblock the files you download or you will get a cryptic error message. This appears to be caused by a security update to either Visual Studio or the .Net framework – the xcopy deploy instructions have been updated to show you how to do this.

    Below are the notes from the release page.

    ======

    This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new:
    - Analysis Services Tabular Smart Diff
    - Tabular Actions Editor
    - Tabular HideMemberIf
    - Tabular Pre-Build

    Fixes and Updates:
    - The Unused Datasets feature for Reporting Services now accounts for new features in Reporting Services 2008 R2 like Lookups and new features in Reporting Services 2012.
    - SSIS: emit an informational message when a variable has an expression defined and EvaluateAsExpression = False
    - SSAS: roles reports points to wrong server
    - SSIS: Variable Copy / Move broken in v1.5
    - "Unused DataSets Report" not showing up in Context menu on VS2005 if Solution Folders used
    - SSAS Tabular: Create a UI for managing actions
    - SSAS Tabular: Smart Diff improvements for new schema and Tabular models
    - SSIS: Copy/Move Variable Erroring due to custom Control Flow item Icon
    - SSIS Performance Visualization Index out of range
    - Fixing bugs in AggManager when aggregation design IDs don't match names

    The exe downloads are a self extracting installer, the zip downloads allow for an xcopy deploy. Make sure to note the updated xcopy deploy instructions for SQL Server 2012.

    Read the article

  • Small hiccup with VMware Player after upgrading to Ubuntu 12.04

    The upgrade process

    Finally, it was time to upgrade to a new LTS version of Ubuntu - 12.04, aka Precise Pangolin. I scheduled the weekend for this task and, despite the nickname of Mauritius (Cyber Island), it took roughly 6 hours to download nearly 2,400 packages. No problem in general, as I have spare machines to work on, and it was the weekend anyway. All went very smoothly and only a few packages required manual attention due to local modifications in the configuration. With the new kernel 3.2.0-24 it was necessary to reboot the system, and compared to the last upgrade, I got my graphical login as expected.

    Compilation of VMware Player 4.x fails

    A quick test of the installed applications - Firefox, Thunderbird, Chromium, Skype, CrossOver, etc. - reveals that everything is fine in general. Firing up VMware Player displays the known kernel module dialog that requires compiling the modules for the newly booted kernel. Usually this isn't a big issue, but this time I was confronted with the situation that vmnet didn't compile as expected ("Failed to compile module vmnet"). Luckily, this issue is already well known - usually reported as "Failed to compile module vmmon" - and it was very easy and quick to find the solution to this problem. In the VMware Communities there are several forum threads related to this topic, and VMware provides the necessary patch file for Workstation 8.0.2 and Player 4.0.2. In case you are still on Workstation 7.x or Player 3.x, there is another patch file available.

    After the download, extract the file like so:

        tar -xzvf vmware802fixlinux320.tar.gz

    and run the patch script as super-user:

        sudo ./patch-modules_3.2.0.sh

    This will alter the existing installation and source files of VMware Player on your machine. As a last step, which isn't described in many other resources, you have to restart the vmware service, or, for the faint-hearted, just reboot your system:

        sudo service vmware restart

    This will load the newly created kernel modules, and after that VMware Player will start as usual.

    Summary

    Upgrading any derivative of Ubuntu, in my case Xubuntu, is quick and easy, but it might hold some surprises from time to time. Nonetheless, it is absolutely worth going for it. Currently, this patch for VMware is the only obstacle I have had to face, and my system feels and looks better than before. Happy upgrade!

    Resources

    I used the following links based on Google search results:
    http://communities.vmware.com/message/1902218#1902218
    http://weltall.heliohost.org/wordpress/2012/01/26/vmware-workstation-8-0-2-player-4-0-2-fix-for-linux-kernel-3-2-and-3-3/

    Update on VMware Player 4.0.3

    Please continue reading my follow-up article in case you have upgraded to either VMware Workstation 8.0.3 or VMware Player 4.0.3.

    Read the article
