Search Results

Search found 9558 results on 383 pages for 'share this'.


  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something an RDBMS was not originally intended to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do.

    But there’s a distinct danger here. In nature, when a population shares too many of the same traits, a situation that exploits a weakness shared by that population can cause a complete collapse. The same is true of not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not.

    So take some time today to learn something new. The way I do this is to select a given problem and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I’m not suggesting any of these architectures is the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will exercise the “little grey cells” and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • kubuntu: wireless manager can't find any networks

    - by timuçin
    I just installed Kubuntu 12.10 and, as I knew my wireless driver would not be recognized, I installed it manually through here. I suppose I shall mention that I also had to do this. The driver worked just fine until I restarted my system. Since then I can't even start over. My wireless card is a Realtek 8723. The only lead I have is this piece of log:

        11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> wpa_supplicant started
        11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0) supports 4 scan SSIDs
        11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0): supplicant interface state: starting -> ready
        11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0): device state change: unavailable -> disconnected (reason 'supplicant-available') [20 30 42]
        11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <warn> Trying to remove a non-existant call id.
        11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0): supplicant interface state: ready -> inactive
        11/14/12 07:35:21 AM timucin-W150ER NetworkManager[989] <info> (wlan0) supports 4 scan SSIDs

    I feel really bad that I have been trying so many things, and even while I am trying them I get a problem at every step. I am really frustrated. You can see the time in the logs; I didn't get up early. Just needed to share my feelings. Even the ethernet cable I have to use is too short, for God's sake – I have had to sit in this massively uncomfortable posture for hours.

    Edit: I discovered something interesting. When I boot into Windows I see that my wireless manager says "disconnected" now. However, when I start Ubuntu after Windows and run modprobe rtl8723e, wireless works just fine. Then again, when I restart Ubuntu I am back where I started. So now I have to start Windows, restart my machine, and do the modprobe thing to see wireless networks.
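    Edit 2: If the modprobe workaround keeps working, my plan is to make it permanent by loading the module at boot – something like the line below, assuming the module name really is rtl8723e:

        echo rtl8723e | sudo tee -a /etc/modules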

    Read the article

  • So Much Happening at Devoxx

    - by Tori Wieldt
    Devoxx, the premier Java conference in Europe, has been sold out for a while. The organizers (thanks Stephan and crew!) cap the attendance to make sure all attendees have a great experience, and that speaks volumes about their priorities. The speakers, hackathons, labs, and networking are all first class. The Oracle Technology Network will be there, and if you were smart/lucky enough to get a ticket, come find us and join the fun:

    IoT Hack Fest - Build fun and creative Internet of Things (IoT) applications with Java Embedded, Raspberry Pi and Leap Motion on the University Days (Monday and Tuesday). Learn from top experts Yara & Vinicius Senger and Geert Bevin at two Raspberry Pi & Leap Motion hands-on labs and hacking sessions. Bring your computer; training and equipment will be provided. Devoxx will also host an Internet of Things shop on the exhibition floor where attendees can purchase Arduino, Raspberry Pi and robot starter kits. Bring your IoT wish list!

    Video Interviews - Yolande Poirier and I will be interviewing members of the Java community at the back of the Expo hall on Wednesday and Thursday. Videos are posted on Parleys and YouTube/Java. We have a few slots left, so contact me (you can DM @Java) if you want to share your insights or a cool new tip or trick with the rest of the developer community. (No commercials, no fluff. Keep it techie and keep it real.)

    Oracle Keynote - On Wednesday morning Mark Reinhold, Chief Java Platform Architect, and Brian Goetz, Java Language Architect, will provide an update on Java 8 and beyond.

    Oracle Booth - Drop by the Oracle booth to see old and new friends. We'll have Java in Action demos and the experts to explain them and answer your questions. We are raffling off Raspberry Pis each day, so be sure to get your badge scanned. We'll have beer in the booth each evening. Look for @Java in her lab coat.

    See you at Devoxx!

    Read the article

  • Using all Ten IO slots on a 7420

    - by user12620172
    So I had the opportunity recently to actually use up all ten slots in a clustered 7420 system. This actually uses 20 slots, or 22 if you count the Clustron cards. I thought it was interesting enough to share here. This is at one of my clients here in southern California. You can see the picture below.

    We have four SAS HBAs instead of the usual two. This is because we wanted to split up the back-end traffic for different workloads. We have a set of disk trays coming from two SAS cards for nothing but Exadata backups. Then, we have a different set of disk trays coming off of the other two SAS cards for non-Exadata workloads, such as regular user file storage. We have two InfiniBand cards which allow us to do a full mesh directly into the back of the nearby production Exadata, specifically for fast backups and restores over IB. You can see a third IB card here, which is going to be connected to a non-production Exadata for slower backups and restores from it. The 10Gig card is for client connectivity, allowing other, non-Exadata Oracle databases to make use of the many snapshots and clones that can now be created using the RMAN copies from the original production database coming off the Exadata. This allows a good number of test and development Oracle databases to use these clones without affecting performance of the Exadata at all. We also have a couple of FC HBAs, both for NDMP backups to an Oracle/StorageTek tape library and also for FC clients to come in and use some storage on the 7420.

    Now, if you are adding more cards to your 7420, be aware of which cards you can place in which slots. See the bottom graphic just below the photo. Note that the slots are numbered 0-4 for the first five cards, then the "C" slot, which is the dedicated cluster card (called the Clustron), and then another five slots numbered 5-9. Some rules for the slots:

    - Slots 1 & 8 are automatically populated with the two default SAS cards. The only other slots you can add SAS cards to are 2 & 7.
    - Slots 0 and 9 can only hold FC cards. Nothing else. So if you have four SAS cards, you are now down to only four more slots for your 10Gig and IB cards. Be sure not to waste one of these slots on an FC card, which can go into 0 or 9 instead.
    - If at all possible, slots should be populated in this order: 9, 0, 7, 2, 6, 3, 5, 4
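    Just for fun, here is a little sketch (mine, not an official tool) that encodes those placement rules, so you can sanity-check a proposed layout before ordering cards:

        # Hypothetical checker for the 7420 slot rules described above.
        PREFERRED_ORDER = [9, 0, 7, 2, 6, 3, 5, 4]  # slots 1 & 8 come pre-populated with SAS

        def check_layout(layout):
            """layout: dict of slot number (0-9) -> card type, e.g. {0: "FC", 2: "SAS"}."""
            errors = []
            for slot, card in sorted(layout.items()):
                if slot in (0, 9) and card != "FC":
                    errors.append("Slot %d can only hold FC cards, not %s" % (slot, card))
                if card == "SAS" and slot not in (1, 2, 7, 8):
                    errors.append("SAS cards only go in slots 1, 2, 7 or 8, not %d" % slot)
            return errors

        # Example: the layout from this post - 4 SAS, 2 FC, 3 IB, 1 10Gig.
        layout = {1: "SAS", 8: "SAS", 2: "SAS", 7: "SAS",
                  0: "FC", 9: "FC", 6: "IB", 3: "IB", 5: "IB", 4: "10Gig"}
        print(check_layout(layout))  # an empty list means the layout is legal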

    Read the article

  • "Expecting A Different Result?" (2 of 3 in 'No Customer Left Behind' Series)

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    Many companies already have some type of customer experience initiative in process, or one that could be framed as such. The challenge is that the initiatives too often are started in a department silo, don't have the right level of executive sponsorship, or have been initiated without the necessary insight and strategic business alignment. You can't keep doing the same things, give it a customer experience name, and expect a different result. You can't continue to just compete on price or features - that is not sustainable in commoditized markets. And ultimately, investing in technology alone doesn't solve customer experience problems; it just adds to the complexity of them.

    You need a customer experience strategy and an approach for executing a customer-centric worldview within your business. To develop this, you must take an outside-in journey on how your customers are interacting with your business to establish a benchmark of your customers' experiences. Then you must get cross-functional alignment on what you are trying to achieve, near, mid, and long term. Your execution of that strategy should be based on a customer experience approach:

    Understand your customer: You need to capture the insights across interactions, channels (including social), and personas to better understand whom to serve, how to serve them, and when to serve them. Not all experiences or customers are equal, so leverage this insight to understand the strategic business objectives you need to address. Then determine which experiences can be improved immediately and which over time to get the result you need.

    Empower your ecosystem: You need to align your front-line employees with your strategy and give them the power, insight, and tools that allow them to cultivate a culture around strengthening the relationships with your customers. You also need to provide the transparency, access, and collaboration that enable your customers and partners to self-serve and self-solve and to share with ease.

    Adapt your business: You need to enable the discipline of agility within your organization and infrastructure so that you can innovate, tailor, and personalize experiences. This needs to be done both reactively from insight and proactively in real time so you can stay ahead of shifting market trends and evolving consumer behaviors.

    No longer will the old approaches provide the same returns. To compete, differentiate, and win in a world where the customer has the power, you must execute a strategy that is sure to deliver a better brand experience for your customers.

    Note: This is Part 2 in a three-part series. Part 1 is here. Stop back for Part 3 on November 28.

    Read the article

  • Collaborative work (small team) - Best practices

    - by LEM01
    I'm currently working in a very small team of programmers (2-3) and I'm looking for advice and best practices on how to organize our work. We're all working on the same application, using PHP. Right now we're each kind of working our own way.

    Today's situation:

    - A list of items that each dev has to work on, set once a week. What has to be done is defined at a high functional level (e.g. build the search engine for this product).
    - We commit/merge our individual branches (git) every week before the next meeting.
    - No real dev rules, no code review.
    - No tests written (ouch).

    Problems faced:

    - Code quality issues: discovering someone else's code is sometimes tough (inlining, variable/function/class names, spaces, comments...).
    - Changes in already existing classes (impact on someone else's work).
    - Responsibility of each dev is unclear: after getting someone else's code and discovering something messy, should I make the change? Should he make the change? How do we plan those things?

    What I'm looking for: basically, I'm looking into structuring the way we develop things in order to avoid frustration and improve overall quality.

    - How do you define coding standards (naming conventions, code rules...)?
    - Do you use any validation script to make sure code is valid before committing? (I've sketched the kind of thing I mean in the edit below.)
    - Do you think that defining an architect role in the team is needed? Someone who would actually define what has to be developed during the next phase, by defining interfaces or class descriptions that have to be written. (Does it make sense in such a small team?)

    Today we're losing time understanding what others did or tried to do, and we're also losing time in discussions like "You should have done it that way! Why is this class doing that and not this? Shouldn't we have an embedded class rather than this set of data?". I'm looking for a work process, maybe with more defined responsibilities, in order to improve our performance. If you have experience, advice, best practices or anything to share that we could benefit from, it will be much appreciated! Thanks a lot for your time!
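    Edit: to make the validation-script question concrete, this is roughly the kind of pre-commit hook I have in mind (just a sketch, assuming php -l is a good-enough check for us; it would live in .git/hooks/pre-commit):

        #!/usr/bin/env python
        # Sketch of a git pre-commit hook: refuse the commit if any staged
        # PHP file fails a syntax check. Note: php -l checks the file in the
        # working tree, not the staged copy - good enough for a small team.
        import subprocess
        import sys

        staged = subprocess.check_output(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"]
        ).decode().splitlines()

        ok = True
        for path in staged:
            if path.endswith(".php") and subprocess.call(["php", "-l", path]) != 0:
                ok = False

        sys.exit(0 if ok else 1)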

    Read the article

  • How can I tweak this A* search pathfinding algorithm to handle different terrain movement values?

    - by user422318
    I'm creating a 2D map-based action game with interaction design similar to Diablo II. In other words, the player clicks around a map to move their character. I just finished player movement and am moving on to pathfinding. In the game, enemies should charge the player's character. There are also five different terrain types that give different movement bonuses. I want the AI to take advantage of these terrain bonuses as they try to reach the player. I was told to check out the A* search algorithm (http://en.wikipedia.org/wiki/A*_search_algorithm). I'm doing this game in HTML5 and JavaScript, and found a version in JavaScript: http://www.briangrinstead.com/blog/astar-search-algorithm-in-javascript

    I'm trying to figure out how to tweak it, though. Below are my ideas about what I need to change. What else do I need to worry about?

    - When I create a graph, I will need to initialize the 2D array I pass in based on a traversal of the map that corresponds to the different terrain types.
    - In graph.js: the "GraphNodeType" definition needs to be modified to handle the five terrain types. There will be no walls.
    - In astar.js: the g and h scoring will need to be modified. How should I do this? (There's a rough sketch of what I mean in the edit below.)
    - In astar.js: isWall() should probably be removed. My game doesn't have walls.
    - In astar.js: I'm not sure what this is. I think it indicates a node that isn't valid to be processed. When would this happen, though?

    At a high level, how do I change this algorithm from "oh, is there a wall there?" to "will this terrain get me to the player faster than the terrain around me?"

    Because of time, I'm also debating reusing my Bresenham algorithm for the enemies. Unfortunately, the different terrain movement bonuses won't be used by the AI, which will make the game suck. :/ I'd really like to have this in for the prototype, but I'm not a developer by trade nor am I a computer scientist. :D If you know of any code that does what I'm looking for, please share! Sanity-check tips for this are also appreciated.
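    Edit: to show the kind of change I mean (a sketch in Python rather than a patch to astar.js, and with made-up terrain names and costs), the wall test disappears, the g score grows by a per-terrain cost of entering each node, and h is scaled down by the cheapest terrain so it never overestimates:

        import heapq

        # Hypothetical costs: lower = terrain with a movement bonus.
        TERRAIN_COST = {"road": 0.5, "grass": 1.0, "sand": 1.5, "swamp": 2.5}

        def astar(grid, start, goal):
            """grid: 2D list of terrain names; start, goal: (x, y) tuples."""
            min_cost = min(TERRAIN_COST.values())

            def h(node):  # scaled Manhattan distance, so h stays admissible
                return min_cost * (abs(node[0] - goal[0]) + abs(node[1] - goal[1]))

            open_heap = [(h(start), start)]
            g = {start: 0.0}
            came_from = {}
            while open_heap:
                _, cur = heapq.heappop(open_heap)
                if cur == goal:
                    path = [cur]
                    while cur in came_from:
                        cur = came_from[cur]
                        path.append(cur)
                    return path[::-1]
                x, y = cur
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nxt[1] < len(grid) and 0 <= nxt[0] < len(grid[0]):
                        # g grows by the cost of *entering* the neighbor's terrain
                        new_g = g[cur] + TERRAIN_COST[grid[nxt[1]][nxt[0]]]
                        if new_g < g.get(nxt, float("inf")):
                            g[nxt] = new_g
                            came_from[nxt] = cur
                            heapq.heappush(open_heap, (new_g + h(nxt), nxt))
            return None  # goal unreachable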

    Read the article

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
    The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2.

    MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as:

    - A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7.
    - Support for new MySQL 5.7.4 SQL syntax and configuration options.
    - Metadata Locks View, which shows the locks connections are blocked or waiting on.
    - MySQL Fabric cluster connectivity - browse, view status, and connect to any MySQL instance in a Fabric cluster.
    - MS Access Migration Wizard - easily move to MySQL databases.

    Other significant usability improvements were made, aiming to raise productivity for advanced and new users:

    - Direct shortcut buttons to commonly used features in the schema tree.
    - Improved results handling: columns have better auto-sizing and their widths are saved, fonts can be customized, and results can be "pinned" to persist viewing data.
    - A convenient Run SQL Script command to directly execute SQL scripts, without loading them first.
    - Database Modeling has been updated to allow changes to the formatting of note objects, and attached SQL scripts can now be included in forward engineering and synchronization scripts.
    - Integrated Visual Explain within the result set panel, with Visual Explain drill-down for large to very large explain plans.
    - Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance.
    - And much more.

    The list of provided binaries was updated, and MySQL Workbench binaries are now available for:

    - Windows 7 or newer
    - Mac OS X Lion or newer
    - Ubuntu 12.04 LTS and Ubuntu 14.04
    - Fedora 20
    - Oracle Linux 6.5
    - Oracle Linux 7
    - Sources for building in other Linux distributions

    For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html

    For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151

    Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04, or sources, from: http://dev.mysql.com/downloads/tools/workbench/

    On behalf of the MySQL Workbench and the MySQL/Oracle RE Team.

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All-around awesome tool, but not the only gadget in your toolbox.

    I’m stepping down from my SQL Developer pulpit today and standing up on my philosophical soapbox. I’m frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I’m MORE than happy to do. But I’m not looking to simply change the way people interact with Oracle Database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour?

    If you have defined a business process around a specific tool, what happens when that tool ‘goes away’? Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you ‘can’t load your data anymore because XYZ’ isn’t valid when you could easily do that same task via SQL*Loader, Create Table As Selects, or nine other different mechanisms. Sometimes change brings opportunity for improvement in the process. Don’t be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the ‘old tool’ doesn’t always make sense.

    Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a ‘proper’ server process with SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don’t let old habits blind you to new solutions and possibilities.

    Of course I’m not going to sit here and say that our tools aren’t deficient in some areas or can’t be improved upon. But I bet if we work together we can find something that’s not only better for the business, but is also better for you. What do you ‘miss’ since you’ve started using SQL Developer as your primary Oracle database tool? I’d love to start a thread here and share ideas on how we can better serve you and your organization’s needs. The end solution might not look exactly like what you had in mind starting out, but I had no idea I’d be a Product Manager when I started college either.

    What can you no longer ‘do’ since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?

    Read the article

  • Adding Facebook Open Graph Tags to an MVC Application

    - by amaniar
    If you have any kind of share functionality within your application, it’s good practice to add the basic Facebook Open Graph tags to the header of all pages. For an MVC application this can be as simple as adding these tags to the head section of the Layout file:

        <head>
            <title>@ViewBag.Title</title>
            <meta property="og:title" content="@ViewBag.FacebookTitle" />
            <meta property="og:type" content="website" />
            <meta property="og:url" content="@ViewBag.FacebookUrl" />
            <meta property="og:image" content="@ViewBag.FacebookImage" />
            <meta property="og:site_name" content="Site Name" />
            <meta property="og:description" content="@ViewBag.FacebookDescription" />
        </head>

    These ViewBag properties can then be populated from any action:

        public ActionResult MyAction()
        {
            ViewBag.FacebookDescription = "My Actions Description";
            ViewBag.FacebookUrl = "My Full Url";
            ViewBag.FacebookTitle = "My Actions Title";
            ViewBag.FacebookImage = "My Actions Social Image";
            ....
        }

    You might want to populate these ViewBag properties with default values when the actions don’t populate them. This can be done in two places:

    1. In the Layout itself (check the ViewBag properties and set them if they are empty):

        @{
            ViewBag.FacebookTitle = ViewBag.FacebookTitle ?? "My Default Title";
            ViewBag.FacebookUrl = ViewBag.FacebookUrl ?? HttpContext.Current.Request.RawUrl;
            ViewBag.FacebookImage = ViewBag.FacebookImage ?? "http://www.mysite.com/images/logo_main.png";
            ViewBag.FacebookDescription = ViewBag.FacebookDescription ?? "My Default Description";
        }

    2. Create an action filter and add it to all controllers or your base controller:

        public class FacebookActionFilterAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var viewBag = filterContext.Controller.ViewBag;
                viewBag.FacebookDescription = "My Actions Description";
                viewBag.FacebookUrl = "My Full Url";
                viewBag.FacebookTitle = "My Actions Title";
                viewBag.FacebookImage = "My Actions Social Image";
                base.OnActionExecuting(filterContext);
            }
        }

    Add the attribute to your BaseController:

        [FacebookActionFilter]
        public class HomeController : Controller
        {
            ....
        }

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000, "A Survey of Software Reuse Repositories":

        Defense Software Repository System (DSRS)

        The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects...

        ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts...

        The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies.

        [14] DSRS - Defense Technology for Adaptable, Reliable Systems. URL: http://ssed1.ims.disa.mil/srp/dsrspage.html
        [15] STARS - Software Technology for Adaptable, Reliable Systems. URL: http://www.stars.ballston.paramax.com/index.html
        [16] D. E. Perry and S. S. Popovitch, "Inquire: Predicate-based use and reuse," in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993.

    Is DSRS dead, and were there any post-mortem reports on it? Are there other, more recent US government initiatives or reports on software reuse?

    Read the article

  • Why is Java the lingua franca at so many institutions?

    - by Billy ONeal
    EDIT: This question at first seems to be bashing Java, and I guess at this point it is a bit. However, the bigger point I am trying to make is: why is any one single language chosen as the one end-all, be-all solution to all problems? Java happens to be the one that's used, so that's the one I had to beat on here, but I'm not intentionally ripping Java a new one :)

    I don't like Java in most academic settings. I'm not saying the language itself is bad -- it has several extremely desirable aspects, most importantly the ability to run without recompilation on most any platform. Nothing wrong with using the language for Your Next App (TM). (Not something I would personally do, but that's more because I have less experience with it, rather than its design being poor.)

    I think it is a waste that high-level CS courses are taught using Java as a language. Too many of my co-students cannot program worth a damn, because they don't know how to work in a non-garbage-collected world. They don't fundamentally understand the machines they are programming for. When someone can work outside of a garbage-collected world, they can work inside of one, but not vice versa. GC is a tool, not a crutch. But the way it is used to teach computer science students is as a crutch.

    Computer science should not teach an entire suite of courses tailored to a single language. Students leave with the idea that all good design is idiomatic Java design, and that Object-Oriented Design is the ONE TRUE WAY THAT IS THE ONLY WAY THINGS CAN BE DONE. Other languages, at least one of them not being a garbage-collected language, should be used in teaching, in order to give the graduate a better understanding of the machines. It is an embarrassment that somebody with a PhD in CS from a respected institution cannot program their way out of a paper bag. What's worse is that when I talk to those CS professors who actually do understand how things operate, they share feelings like this -- that we're doing a disservice to our students by doing everything in Java. (Note that the above would be the same if I replaced it with any other language; generally, using a single language is the problem, not Java itself.)

    In total, I feel I can no longer respect any kind of degree at all -- when I can't see those around me able to program their way out of fizzbuzz problems. Why/how did it get to be this way?

    Read the article

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new ways of doing things confusing.

    I guess my first question is: does anyone know some figures about the percentage of gamers that can run each version of OpenGL? What's the market share like? 2.x, 3.x, 4.x... I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they say a minimum of a GeForce 8 series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supports up to a certain version, or OS X only supports 3.2, or all kinds of other things. Overall, I'm just confused as to how much support there is for various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews that mention the 4th edition is better and that the 5th edition doesn't properly teach the new features, or something? Basically, even within the learning material I'm seeing discrepancies, and I just don't even know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read from various articles and reviews that you want to "stay away from deprecated features like glBegin(), glEnd()", yet a lot of books and tutorials I've seen use that method. I've seen people saying that, basically, the new way of doing stuff is more complicated yet the old way is bad.

    Just a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well because, as far as I understand, that's only in 4.x? [Just btw, my desktop supports 4.2.]

    Read the article

  • "You are missing the following 32-bit libraries, and Steam may not run: libc.so.6" The common fixes don't work,

    - by M_Steam_User
    So I know this is a problem that has been asked around a lot, but I've tried a bunch of solutions with no success. I'm running Ubuntu 12.04 (64-bit), and I just installed it yesterday. This is my first time working with Linux. The error is:

        You are missing the following 32-bit libraries, and Steam may not run: libc.so.6

    Things I've tried: first, I had downloaded Steam from the Steam website. I uninstalled it, and tried again from the Ubuntu Software Centre.

        sudo apt-get update
        sudo apt-get install ia32-libs
        sudo apt-get upgrade

    This installed a bunch of the 32-bit libraries, but did not fix the issue. This seems like the major fix for most people. The direct approach of sudo apt-get install libc.so.6 returns this:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package libc.so.6
        E: Couldn't find any package by regex 'libc.so.6'

    I guess libc.so.6 isn't a package, just a single file or something? I also tried:

        gksudo gedit /etc/ld.so.conf.d/steam.conf

    and added these two lines (the second one was already in the file, but I copied it over anyway):

        /usr/lib32
        /usr/lib/i386-linux-gnu/mesa

    Then executed:

        sudo ldconfig

    But nothing seemed to happen; Steam still doesn't work. So I feel like it is more likely that I have the library and Steam isn't looking in the right place. One thing I've seen is people usually reference /usr/local/lib/ for your library locations. However, I can't find where to cd into /usr/ - it isn't in my home folder. If /usr/ is the home folder, there is only a /.local folder, which only has /share, no lib anywhere. Sorry for my Linux ignorance. I appreciate any help; I honestly have no idea how to confirm I have the library and point Steam to it, or if that is even the right thing to do.

    Edit: Tried this, not entirely sure what it means:

        ~$ ls -l /lib32/libc*
        -rwxr-xr-x 1 root root 1721832 Sep 30 11:06 /lib32/libc-2.15.so
        -rw-r--r-- 1 root root  185928 Sep 30 11:06 /lib32/libcidn-2.15.so
        lrwxrwxrwx 1 root root      15 Sep 30 11:06 /lib32/libcidn.so.1 -> libcidn-2.15.so
        -rw-r--r-- 1 root root   34316 Sep 30 11:06 /lib32/libcrypt-2.15.so
        lrwxrwxrwx 1 root root      16 Sep 30 11:06 /lib32/libcrypt.so.1 -> libcrypt-2.15.so
        lrwxrwxrwx 1 root root      12 Sep 30 11:06 /lib32/libc.so.6 -> libc-2.15.so
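    Edit 2: The next thing I plan to check (assuming I'm reading the situation right) is whether the 32-bit libc actually shows up in the dynamic linker's cache, since that is presumably where Steam looks:

        ldconfig -p | grep libc.so.6

    If no /lib32 entry shows up there, that would at least confirm the library isn't visible to the loader.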

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages of my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments and how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies), I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some DBA regular meetings. I see the ways the product succeeds, and I see when it fails.

    Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it produce the same level of work as the shops that don’t. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I’ve found (surprise, surprise) is that they have fewer production problems.

    OK, that might seem obvious – but I’ve actually tracked it, and those places that do the testing and follow best practices really do save stress, time and trouble from that effort. We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re actually already spending the same amount of time that you would spend in doing the testing; you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • How do I prevent ISPs from killing downloads of files in mid-transfer?

    - by Gorchestopher H
    I run a small website with a few users and low traffic, mostly to share personal mp3 files with a small community. Depending on their ISP, my users can't always download or stream larger files - by larger I mean larger than 1 MB. Essentially the host either stops sending, or the client stops receiving: one of the links along the connection chain simply ends its connection before the transfer completes. Traceroute shows no connection issues. There are no connection issues with short transfers that don't take more than a few seconds. It's these 10-second transfers that just end up ending.

    Just doing a straight download with a direct link can yield this error if you have the wrong ISP. Strangely enough, this is most common with users whose ISPs are essentially independent providers that buy service via a fiber link. Unfortunately these providers aren't very knowledgeable, are unable to do any testing, and insist it's a problem with the host. I have gotten my host to transfer my site to different servers of theirs, to the same effect. Nearly identical sites (affiliate sites, actually) experience no such issue.

    What can I be doing to further troubleshoot this matter? How can I prove that someone is dropping the ball, and identify who that party is? Can I do a 5 MB traceroute?

    EDIT: Maybe I can clear up some misconceptions with my question:

    - The files are not very large. They are simply over 2 MB.
    - The users do not have "slow" connections; they are at least 5 Mbps.
    - This "time out" happens very quickly, in the realm of 5 seconds, so I don't know if it's a timeout or not. The user often gets 1 or 2 MB in this chunk of time.
    - I have tried streaming with a Flash player. I have tried saving the target and forcing the download. I have tried allowing the browser to stream the file.
    - I have tried different browsers (FF, IE, Chrome).
    - Users are able to download identical files when on different hosts.
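    EDIT 2: One thing I've started putting together to quantify the failures is a little download probe (a rough sketch - the URL is a placeholder for one of the problem files) that records exactly how many bytes arrive before each transfer dies:

        import time
        import urllib.request

        URL = "http://example.com/test.mp3"  # placeholder for an affected file

        for attempt in range(5):
            received = 0
            start = time.time()
            try:
                with urllib.request.urlopen(URL, timeout=30) as resp:
                    while True:
                        chunk = resp.read(64 * 1024)
                        if not chunk:
                            break
                        received += len(chunk)
            except Exception as exc:  # dropped connections surface in several forms
                print("attempt %d died after %d bytes: %s" % (attempt, received, exc))
            else:
                print("attempt %d completed: %d bytes" % (attempt, received))
            print("  elapsed %.1f s" % (time.time() - start))

    If the transfers consistently die around the same byte count or the same elapsed time, that's something concrete to take to the ISP.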

    Read the article

  • Developing web sites that imitate desktop apps. How to fight that paradigm? [closed]

    - by user1598390
    Suppose there's a company where web sites/apps are designed to resemble desktop apps. They struggle to add:

    - Splash screens
    - Drop-down menus
    - Tab pages
    - Pages that don't grow downward with content; content is inside a scrollable area, so the page is of a fixed size, as if resembling the one-screen limitation of desktop apps
    - Modal windows, pop-ups, etc.
    - Tree views
    - Absolutely no access to content unless you log in first, even with non-sensitive content. After the splash screen disappears, you are presented with a login screen
    - No links - just simulated buttons
    - Fixed page size; you cannot open a link in another tab
    - A Print button that prints directly (not showing a printable page, so the user can't print via the browser's print command)
    - Progress bars for loading content, even when the browser indicates it with its own animation
    - Fonts and colors that emulate a desktop app made with Visual Basic, PowerBuilder, etc. Every app seems almost as if it were made in Visual Basic

    They reject these elements:

    - Breadcrumbs
    - Good old underlined links
    - Generated/dynamic navigation, usage-based suggestions
    - The ability to open links in multiple tabs
    - Pagination
    - Printable pages
    - The ability to produce a URL you can save or share that links to an item, like when you send someone the link to a specific StackExchange question. The only URL is the main one
    - The Back button

    To achieve this, tons of JavaScript code is needed - lots and lots of JavaScript and Ajax code for things not related to the business but to the necessity of hiding/showing that button, refreshing this listbox, greying out that label, etc. The complexity generated by forcing one paradigm into another means most lines of code are dedicated to maintaining the illusion of a desktop app.

    What is the best way to change this mindset, and make them embrace the web, and start producing modern web apps instead of desktop imitations?

    EDIT: These sites are intranet sites. Users hate these apps. They constantly whine about them, but they have to use them to do their daily work. These sites are in-house solutions; the end users have no choice but to use them. They are a "captive audience". Also, substitution will not happen because of high costs. But at least if that mindset is changed, new developments would be more web-like.

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced: a Microsoft SQL Azure Lab whose codename is Microsoft “Data Explorer”. According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interest you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), which can then be used by other applications. Nonetheless, we can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data

    To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an “SSIS on Azure” addressed to the power user. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment made on his post, I had a positive response to my first impression: “…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging.”

    The typical operations of the ETL phase (processing and organization of data in different formats) can be performed thanks to Data Explorer Mashup. The flexibility in the manipulation of information is given by the Data Explorer Formula Language, a formula-based, Excel-style language. Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx

    In light of this new project, there is no doubt about the intention of Microsoft to get closer and closer to the power user, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more power users in a company will implicitly mean having different data models representing the same reality. But this would inevitably lead to anarchical data management... What do you think about that?

    Read the article

  • EF Doesn't Like Same Named Tables

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2013/07/02/153327.aspx

    It's another week and another restriction imposed by the Entity Framework (EF). Don't get me wrong; I like EF, but I don't like how it restricts you in different ways. At this point you may be asking yourself the question: how can you have more than one table with the same name? The answer is to have tables in different schemas. I do this to partition the data based on the area of concern. It allows security to be assigned conveniently. A lot of people don't use schemas. I love them. But this article isn't about schemas.

    In this situation I have two tables:

        Contact.Person
        Employee.Person

    The first contains the basic, more public information such as the name. The second contains mostly HR-specific information. I then mapped these tables to two classes. I stuck to a Table per Class (TPC) mapping, because of problems I've had in the past implementing inheritance with EF. The following code gives you the basic contents of the classes:

        [Table("Person", Schema = "Employee")]
        public class Employee
        {
            ...
            public int PersonId { get; set; }

            [ForeignKey("PersonId")]
            public virtual Person Person { get; set; }
        }

        [Table("Person", Schema = "Contact")]
        public class Person
        {
            [Key]
            public int Id { get; set; }
            ...
        }

    This seemingly simple scenario just doesn't work. The problem occurs when you try to add a Person to the DbContext. You get an InvalidOperationException with the following text:

        The entity types 'Employee' and 'Person' cannot share table 'People' because they are not in the same type hierarchy or do not have a valid one to one foreign key relationship with matching primary keys between them.

    This is interesting for a couple of reasons. First, there is no People table in my database. Second, I have used the SetInitializer method to stop a database from being created, so it shouldn't be thinking about new tables.

    The solution to my problem was to change the name of my Employee.Person table. I decided to name it Employee.Employee. It's not ideal, but it gets me past the EF limitation. I hope this article will help someone else that has the same problem.

    Read the article

  • How to Export a Contact to and Import a Contact from a vCard (.vcf) File in Outlook 2013

    - by Lori Kaufman
    vCard is the abbreviation for Virtual Business Card and is the standard format (.vcf files) for electronic business cards. vCards allow you to create and share contact information over the internet, such as in email messages and instant messaging. You can also use vCards to move contact information from one email or personal information management program to another, as long as both programs support the .vcf file format. vCards can contain name and address information, as well as phone numbers, email addresses, URLs, images, and audio clips. We will show you how to export a contact to and import a contact from a vCard, or .vcf file, in Outlook.

    First, access the People section by clicking People at the bottom of the Outlook window. To view your contacts in business card format, click Business Card in the Current View section of the Home tab. Select a contact by clicking on the name bar at the top of the business card.

    To export the selected contact as a vCard, click the File tab. On the Account Information screen, click Save As in the list of options on the left. The Save As dialog box displays. By default, the name of the contact is used to name the .vcf file in the File name edit box. Change the name, if desired, select a location for the file, and click Save. The contact is saved as a .vcf file.

    To import a vCard, or .vcf file, into Outlook, simply double-click on the .vcf file. By default, .vcf files are automatically associated with Outlook, so the file is opened in Outlook as a Contact. Make any changes or additions to the contact in the contact editing window. To save the contact, click Save & Close in the Actions section of the Contact tab.

    NOTE: Notice that because this contact is new, the full contact editing window displays, rather than the Contact Card that displays when double-clicking on a contact. You can open the full contact editing window instead of the Contact Card when editing a contact or searching for a contact.

    The contact is added to the Contacts folder. You can add your contact information to a signature in business card format, and it will display as shown above in emails. We have covered how to create signatures and will be discussing more about signatures and business cards in Outlook.
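    By the way, if you are curious what is inside one of these files, a minimal .vcf is just plain text along these lines (a made-up contact for illustration; Outlook's own exports include more fields):

        BEGIN:VCARD
        VERSION:3.0
        N:Example;Jane;;;
        FN:Jane Example
        ORG:Example Corp
        TEL;TYPE=WORK:+1-555-0100
        EMAIL:jane@example.com
        END:VCARD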

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently we've just finished the first phase of our project. We used agile with fortnightly sprints. And whilst the application turned out well, we're now turning our eyes to some of the maintenance tasks.

    One maintenance task is that all of our documentation appears in the form of specs. These specs describe one or more stories and generally are a body of work which a few devs could knock over in a week. For development, that works really well - every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do.

    From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles - one could be describing a standard function, another could be describing parts of a workflow, another could be describing a particular screen... And now we have business rules about our application scattered across 120 documents. Looking for the document covering a particular business rule or function is quite hard, because you don't know which document has this information, and making a change request is equally hard, because once again we are unsure about which spec to change.

    So we have maybe a couple of weeks of lull before it's back to spec'ing out functionality for the next phase, but in this time I'd like to revisit our processes. I think the way we have worked so far, in terms of delivering fortnightly specs, works well. But we also need a way to manage our documentation so that our business rules for a given function/workflow are easy to locate and change. I have two ideas.

    One is that we compile all of our specs into a series of master specs, broken down into a few broad functional areas. The specs describe the sprint; the master specs describe the system. The problems I can see are: 1) our existing 120 specs are not all neatly divided into broad functional areas - some will require breaking up, merging, etc., which will take a lot of time; and 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work - and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign appropriate keywords to it, and then when we want to find the rules for a particular function, we search for that keyword. The problem I can see is that business rules are still scattered everywhere; keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • The term "interface" in C++

    - by Flexo
    Java makes a clear distinction between class and interface. (I believe C# does also, but I have no experience with it.) When writing C++, however, there is no language-enforced distinction between class and interface. Consequently I've always viewed interface as a workaround for the lack of multiple inheritance in Java. Making such a distinction feels arbitrary and meaningless in C++.

    I've always tended to go with the "write things in the most obvious way" approach, so if in C++ I've got what might be called an interface in Java, e.g.:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() = 0; // pure virtual destructor; still needs a definition elsewhere
        };

    and I then decided that most implementers of Foo wanted to share some common functionality, I would probably write:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() {}
        protected:
            // If it needs this to do its thing:
            int internalHelperThing(int);

            // Or if it doesn't need the this pointer:
            static int someOtherHelper(int);
        };

    which then makes this not an interface in the Java sense anymore. Instead, C++ has two important concepts, related to the same underlying inheritance problem:

    1. Virtual inheritance
    2. Classes with no member variables can occupy no extra space when used as a base: "Base class subobjects may have zero size" (Reference)

    Of those, I try to avoid #1 wherever possible - it's rare to encounter a scenario where that genuinely is the "cleanest" design. #2 is, however, a subtle but important difference between my understanding of the term "interface" and the C++ language features. As a result of this I currently (almost) never refer to things as "interfaces" in C++, and talk in terms of base classes and their sizes. I would say that in the context of C++, "interface" is a misnomer. It has come to my attention, though, that not many people make such a distinction.

    Do I stand to lose anything by allowing (e.g. protected) non-virtual functions to exist within an "interface" in C++? (My feeling is exactly the opposite - an interface is a more natural location for shared code.) Is the term "interface" meaningful in C++ - does it imply only pure virtual functions, or would it be fair to still call a C++ class with no member variables an interface?

    Read the article

  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice.

    The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released, what version it is, and so on. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (there are 2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered.

    I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are:

    - We're currently vulnerable: at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary.
    - We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on.
    - We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems).
    - There are only 15 developers, but we all do stuff differently; wouldn't it be better to have a standard company-wide approach we all have to follow?

    My questions are:

    1. Is it normal for a group of this size not to have source control?
    2. I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above?
    3. Are there any more reasons for source control that I could add to my arsenal?

    I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance.

    Read the article

  • Can't install Git

    - by davemc
    I'm following the tutorial below to install Git: https://help.github.com/articles/set-up-git

    However, when I get to the end, where I need to install the credential helper into the same directory where Git itself is installed, I get the following error:

        Davids-iMac:~ davidcavanagh$ which git
        /usr/bin/git
        Davids-iMac:~ davidcavanagh$ sudo mv git-credential-osxkeychain /usr/bin
        mv: rename git-credential-osxkeychain to /usr/bin/git-credential-osxkeychain: No such file or directory
        Davids-iMac:~ davidcavanagh$

    Edit: I am now getting the following error when I install Git and then run git -version:

        Davids-iMac:~ davidcavanagh$ git -version
        /usr/bin/git: line 1: syntax error near unexpected token `newline'
        /usr/bin/git: line 1: `<?xml version="1.0" encoding="UTF-8"?>'

    I was following this tutorial guide: https://help.github.com/articles/set-up-git

    I have also tried using Homebrew, and I get the following error when I do this:

        Davids-iMac:~ davidcavanagh$ ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
        ==> This script will install:
        /usr/local/bin/brew
        /usr/local/Library/...
        /usr/local/share/man/man1/brew.1

        Press ENTER to continue or any other key to abort
        ==> Downloading and Installing Homebrew...
        Failed during: git init -q

    Can anyone help? Thanks

    Read the article

  • cpan won't configure correctly on centos6, can't connect to internet

    - by dan
    I have CentOS 6 set up and have installed perl-CPAN. When I run cpan it takes me through the setup and ends by telling me it can't connect to the internet and to enter a mirror. I enter a mirror, but it still can't install the package. What am I doing wrong?

        If you're accessing the net via proxies, you can specify them in the CPAN configuration or via environment variables. The variable in the $CPAN::Config takes precedence.
        <ftp_proxy> Your ftp_proxy? []
        <http_proxy> Your http_proxy? []
        <no_proxy> Your no_proxy? []
        CPAN needs access to at least one CPAN mirror. As you did not allow me to connect to the internet you need to supply a valid CPAN URL now.
        Please enter the URL of your CPAN mirror
        CPAN needs access to at least one CPAN mirror. As you did not allow me to connect to the internet you need to supply a valid CPAN URL now.
        Please enter the URL of your CPAN mirror mirror.cc.columbia.edu::cpan
        Configuration does not allow connecting to the internet.
        Current set of CPAN URLs:
          mirror.cc.columbia.edu::cpan
        Enter another URL or RETURN to quit: []
        New urllist
          mirror.cc.columbia.edu::cpan
        Please remember to call 'o conf commit' to make the config permanent!

        cpan shell -- CPAN exploration and modules installation (v1.9402)
        Enter 'h' for help.

        cpan[1]> install File::Stat
        CPAN: Storable loaded ok (v2.20)
        LWP not available
        Warning: no success downloading '/root/.cpan/sources/authors/01mailrc.txt.gz.tmp918'. Giving up on it. at /usr/share/perl5/CPAN/Index.pm line 225
        ^CCaught SIGINT, trying to continue
        Warning: Cannot install File::Stat, don't know what it is.
        Try the command 'i /File::Stat/' to find objects with matching identifiers.
        cpan[2]>
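    Edit: One thing I'm now wondering about is the mirror itself - mirror.cc.columbia.edu::cpan looks like an rsync-style path rather than the http:// or ftp:// URL the urllist seems to expect. So maybe the fix is as simple as re-entering the mirror from the cpan shell, along these lines (just my next guess, not a confirmed fix):

        cpan[2]> o conf urllist push http://www.cpan.org/
        cpan[3]> o conf commit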

    Read the article
