Search Results

Search found 21061 results on 843 pages for 'build process'.

Page 345 of 843

  • Getting "boot error" when trying to boot from USB

    - by Jon Ball
    I want to try out Ubuntu, so I followed the instructions for installing Ubuntu onto a USB drive. I downloaded the .iso file, then used the Pendrivelinux three-part process to make the USB drive bootable. I can see what looks like a full list of files on the USB drive (including the wubi.exe application and the syslinux folder). When I try to restart the computer with the USB drive in, I get the Dell start-up screen, and then a black screen with "Boot Error" in the top right-hand corner. Setup options (default) are to boot from Removable Device, then Hard Disk. The USB drive is brand new, straight out of the packet. Computer: Dell Inspiron 530S. BIOS: 1.0.13. OS: Windows Vista Home Edition. USB: EMTEC 8 GB, formatted to FAT32. I've tried some of the tips in other help topics (holding down the CTRL key while restarting, removing all other USB devices). I tried to reformat the USB drive to something other than FAT32, but my only other options were NTFS or exFAT (not FAT16, which was suggested in another topic).

    Read the article

  • In centralized version control, is it always good to update often?

    - by janos
    Assuming that: You are in a team developing some software. Your team is using centralized version control in the development process. You are working on a new feature which will surely take several days to complete, and you won't be able to commit before that because it would break the build. Your team members commit something every day that affects some of the files you're working with for your fancy new feature. Since this is centralized version control, you will have to update your local checkout at some point: at least once right before committing the new feature. If you update only once right before your commit, then there might be a lot of conflicts due to the many other changes by your teammates, which could be a world of pain to resolve all at once. Or, you could update often, and even if there are a few conflicts to resolve day by day, it should be easier to do, little by little. Can we say that it is always a good idea to update often?

    Read the article

  • Can/should one record unstructured suggestions and feedback in an issue tracker?

    - by Ian Mackinnon
    I'd like to advocate the use of issue-tracking software within an organisation that currently does not use it. But there's one aspect of their situation for which I'm unsure of what to suggest: their projects frequently receive informal verbal feedback or casual comments in meetings or in passing from a wide group of interested parties, and all this information needs to be recorded. Most of these messages are noise, but they're vital to record and share with developers for two reasons: Good suggestions often come out of this process. It can be necessary to have evidence of clients' comments when they forget previous instructions or change their mind. Is this the sort of information that should be stored in an issue-tracking system, or kept apart in a separate solution? Are there issue-tracking systems that have particularly good support for this sort of unstructured information?

    Read the article

  • How to show other characters in an online 2D RPG

    - by Loligans
    I have Player 1 and Player 2. I am using JSON to send and retrieve player data between the client and the server, but when another player logs in and is in the same map, how would I send that data to both players, so the graphics engine can show 2 players on the map? About my game: it is a 2D tile-based RPG; maps are 24x15 tiles; it is real-time action; it should stay playable anywhere between 10-150 ping; players interact with each other when in the same map and can see each other moving around; the game world is persistent, and is saved when the server shuts down. Right now the server sends each player only their own information, inside a JSON object. Here is an example of what I am talking about: if you notice, there are 2 separate characters in 2 separate clients, but they are running on the same server. I am trying to get them to show up on both clients, but I don't know how I should accomplish this. Should I send it as an added value in the JSON object? Also, what is the name of this process, so I can look it up and find more info on it?
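
    The usual name for this is state replication (often discussed together with "interest management"): the server broadcasts a snapshot of every player sharing the map, rather than sending each client only its own record. Below is a minimal Python sketch of that idea, under the assumption of a hypothetical send(player_id, text) transport; all names are illustrative, not from the question.

        import json

        players = {
            "player1": {"map": "town", "x": 5, "y": 9, "sprite": "hero"},
            "player2": {"map": "town", "x": 7, "y": 3, "sprite": "mage"},
        }

        def snapshot_for(map_id):
            # Collect the public state of everyone currently in this map.
            return {
                "type": "map_state",
                "players": [
                    {"id": pid, "x": p["x"], "y": p["y"], "sprite": p["sprite"]}
                    for pid, p in players.items()
                    if p["map"] == map_id
                ],
            }

        def broadcast(map_id, send):
            # Every client in the map receives the same snapshot, so each
            # graphics engine can draw both characters.
            message = json.dumps(snapshot_for(map_id))
            for pid, p in players.items():
                if p["map"] == map_id:
                    send(pid, message)

        if __name__ == "__main__":
            broadcast("town", lambda pid, msg: print(pid, "<-", msg))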

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved. Background Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.) Workload management Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In effect, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.) Prior art There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck? Actual workload management That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service levels vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective. Instrumenting non-instrumented applications There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust. Summary After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success in meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
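
    The core loop the post describes can be sketched in a few lines: hold the objective constant and vary the resource. This Python fragment is purely illustrative; measure_response_time() and set_cpu_shares() are hypothetical stand-ins for a DTrace-style probe and a resource-manager call, not real APIs.

        import random

        OBJECTIVE_MS = 100.0   # the externally meaningful service level
        shares = 100           # the resource knob we are allowed to turn

        def measure_response_time():
            # Stand-in for a probe marking transaction start and end.
            return random.uniform(50, 200)

        def set_cpu_shares(n):
            print("cpu shares ->", n)

        def control_step():
            global shares
            rt = measure_response_time()
            if rt > OBJECTIVE_MS * 1.1:     # missing the objective: add resource
                shares = min(shares + 10, 1000)
                set_cpu_shares(shares)
            elif rt < OBJECTIVE_MS * 0.9:   # comfortably ahead: give some back
                shares = max(shares - 10, 10)
                set_cpu_shares(shares)

        for _ in range(5):
            control_step()

    A real manager would also check which resource (CPU, RAM, I/O) is actually the bottleneck before turning the knob, as the post stresses.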

    Read the article

  • Wiki based requirements engineering tool

    - by Shanon
    Hi, I'm looking to build a wiki-based tool that helps/aids in the requirements engineering process. More specifically, I am hoping to end up with a tool that helps inexperienced users easily create and design requirements documents on a wiki platform. I was wondering if there are any wikis/wiki platforms that either already do this, are easily extensible for this purpose, or would be worth looking at. For instance, one of the features I was hoping to add would be to add structure to a document so that information is filled out in a standardised manner. Another idea I was looking at was to somehow create relationships between different types of documents (for example, a goal diagram evolves into / helps in the development of the class diagram). So far I have come across Foswiki, which claims to be fully customisable...but I'm not sure what that means and what I can really do with it. Any input on Foswiki is also highly appreciated.

    Read the article

  • How to derive Euler angles from a matrix or quaternion?

    - by KlashnikovKid
    I'm currently working on steering behavior for my AI and just hit a little mathematical bump. I'm in the process of writing an align function, which basically tries to match the agent's orientation with a target orientation. I've got good source material for implementing this behavior, but it uses Euler angles to calculate the rotational delta, acceleration, and so on. This is nice; however, I store orientation as a quaternion, and the math library I'm using doesn't provide any functionality for deriving the Euler angles. If it helps, I also have rotation matrices at my disposal. What would be the best way to decompose the quaternion or rotation matrix to get the Euler information? I found one source for decomposing the matrix, but I'm not quite getting the correct results. I'm thinking it may be a difference in the column/row ordering of my matrices, but then again, math isn't my strong point. http://nghiaho.com/?page_id=846
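
    For the quaternion route, the standard yaw-pitch-roll (ZYX) decomposition for a unit quaternion (w, x, y, z) is short enough to write by hand. A sketch in Python; note that axis conventions differ between engines, so the ordering here is an assumption to check against yours, and pitch degenerates (gimbal lock) at +/-90 degrees.

        import math

        def quat_to_euler(w, x, y, z):
            # Assumes a normalized quaternion; returns radians.
            roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
            # Clamp to dodge NaN from rounding error near the poles.
            s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
            pitch = math.asin(s)
            yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
            return roll, pitch, yaw

        print(quat_to_euler(1.0, 0.0, 0.0, 0.0))  # identity -> (0.0, 0.0, 0.0)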

    Read the article

  • Prevent shutdown when rsnapshot is running

    - by highsciguy
    Since a shutdown during an rsnapshot run will lead to inconsistent/partial backups, I wonder how to delay the system shutdown while rsnapshot is active. The task is complicated by the fact that I need a solution which is compatible with non-expert users, i.e. I need to reliably tell the user that they need to wait until the process is finished and not do a hard reset. Once the process is finished, the shutdown should continue. A possible solution could be to replace the action of the window manager's (mostly KDE) shutdown/restart/hibernate buttons with a script which first checks if rsync is active and shows a message if it is. But I do not know if this is possible in KDE.
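
    One possible shape for such a wrapper script, sketched in Python. The pgrep checks are standard; the kdialog popup and the qdbus logout call are assumptions about a KDE setup and should be adapted to the actual desktop version.

        import subprocess
        import sys

        def backup_running():
            # pgrep exits with 0 when a matching process exists.
            return (subprocess.call(["pgrep", "-x", "rsync"]) == 0
                    or subprocess.call(["pgrep", "-f", "rsnapshot"]) == 0)

        if backup_running():
            subprocess.call(["kdialog", "--sorry",
                             "A backup is still running. Please wait for it "
                             "to finish before shutting down."])
            sys.exit(1)

        # No backup active: hand over to the normal KDE shutdown path
        # (KDE4-era D-Bus call; an assumption, adjust for your version).
        subprocess.call(["qdbus", "org.kde.ksmserver", "/KSMServer",
                         "logout", "0", "2", "2"])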

    Read the article

  • The Most Important Person Is the One that Keeps Your PC Running [Comic]

    - by The Geek
    Fixing people’s computers usually makes them appreciate you more, though this might be a little too far.

    Read the article

  • Taking the Plunge - or Dipping Your Toe - into the Fluffy IAM Cloud by Paul Dhanjal (Simeio Solutions)

    - by Greg Jensen
    In our last three posts, we’ve examined the revolution that’s occurring today in identity and access management (IAM). We looked at the business drivers behind the growth of cloud-based IAM, the shortcomings of the old, last-century IAM models, and the new opportunities that federation, identity hubs and other new cloud capabilities can provide by changing the way you interact with everyone who does business with you. In this, our final post in the series, we’ll cover the key things you, the enterprise architect, should keep in mind when considering moving IAM to the cloud. Invariably, what starts the consideration process is a burning business need: a compliance requirement, security vulnerability or belt-tightening edict. Many on the business side view IAM as the “silver bullet” – and for good reason. You can almost always devise a solution using some aspect of IAM. The most critical question to ask first when using IAM to address the business need is, simply: is my solution complete? Typically, “business” is not focused on the big picture. Understandably, they’re focused instead on the need at hand: Can we be HIPAA compliant in 6 months? Can we tighten our new hire, employee transfer and termination processes? What can we do to prevent another password breach? Can we reduce our service center costs by the end of next quarter? The business may not be focused on the complete set of services offered by IAM but rather a single aspect or two. But it is the job – indeed the duty – of the enterprise architect to ensure that all aspects are being met. It’s like remodeling a house but failing to consider the impact on the foundation, the furnace or the zoning or setback requirements. While the homeowners may not be thinking of such things, the architect, of course, must. At Simeio Solutions, the way we ensure that all aspects are being taken into account – to expose any gaps or weaknesses – is to assess our client’s IAM capabilities against a five-step maturity model ranging from “ad hoc” to “optimized.” The model we use is similar to Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. It’s based upon some simple criteria, which can provide a visual representation of how well our clients fare when evaluated against four core categories: Program Governance; Access Management (e.g., Single Sign-On); Identity and Access Governance (e.g., Identity Intelligence); and Enterprise Security (e.g., DLP and SIEM). Often our clients believe they have a solution with all the bases covered, but the model exposes the gaps or weaknesses. The gaps are ideal opportunities for the cloud to enter into the conversation. The complete process is straightforward: 1. Look at the big picture, not just the immediate need – what is our roadmap and how does this solution fit? 2. Determine where you stand with respect to the four core areas – what are the gaps? 3. Decide how to cover the gaps – what role can the cloud play? Returning to our home remodeling analogy, at some point, if gaps or weaknesses are discovered when evaluating the complete impact of the proposed remodel – if the existing foundation wouldn’t support the new addition, for example – the owners need to decide if it’s time to move to a new house instead of trying to remodel the old one. However, with IAM it’s not an either-or proposition – i.e., either move to the cloud or fix the existing infrastructure.
It’s possible to use new cloud technologies just to cover the gaps. Many of our clients start their migration to the cloud this way, dipping in a toe instead of taking the plunge all at once. Because our cloud services offering is based on the Oracle Identity and Access Management Suite, we can offer a tremendous amount of flexibility in this regard. The Oracle platform is not a collection of point solutions, but rather a complete, integrated, best-of-breed suite. Yet it’s not an all-or-nothing proposition. You can choose just the features and capabilities you need using a pay-as-you-go model, incrementally turning services on and off as needed. Better still, all the other capabilities are there, at the ready, whenever you need them. Spooling up these cloud-only services takes just a fraction of the time it would take a typical organization to deploy internally. SLAs in the cloud may be higher than on premises, too. And by using a suite of software that’s complete and integrated, you can dramatically lower cost and complexity. If your in-house solution cannot be migrated to the cloud, you might consider using hardware appliances such as Simeio’s Cloud Interceptor to extend your enterprise out into the network. You might also consider using Expert Managed Services. Cost is usually the key factor – not just development costs but also operational sustainment costs. Talent or resourcing issues often come into play when thinking about sustaining a program. Expert Managed Services such as those we offer at Simeio can address those concerns head on. In a cloud offering, identity and access services lend themselves to the new paradigms described in my previous posts. Most importantly, it allows us all to focus on what we’re meant to do – provide value, lower costs and increase security to our respective organizations. It’s that magic “silver bullet” that business knew you had all along. If you’d like to talk more, you can find us at simeiosolutions.com.

    Read the article

  • Rendering order in an Entity System

    - by Daedalus
    Say I use a basic ES approach, where inside Systems I hold lists of all entities that each System is required to process. How do I maintain such a list in the desired rendering order, e.g. for a dumb 2D RenderingSystem? I saw this discussion, and what they suggest is to do something like Z ordering - what I would probably do is just store a "layer" int in DrawableComponent and then, inside RenderingSystem, sort entities by that "layer" whenever the entity list for RenderingSystem changes. They also say we could just delete and recreate the entity whenever we want it on top, but that seems too inflexible to me. How is this problem usually solved?
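
    A sketch of the layer-sort idea in Python: the DrawableComponent carries an integer layer, and the RenderingSystem re-sorts its list only when a dirty flag is set, so changing an entity's layer is cheap and nothing needs to be deleted and recreated. Names are illustrative.

        from dataclasses import dataclass

        @dataclass
        class DrawableComponent:
            sprite: str
            layer: int = 0   # higher layers draw on top

        class RenderingSystem:
            def __init__(self):
                self._entities = []   # (entity_id, DrawableComponent) pairs
                self._dirty = False

            def add(self, entity_id, drawable):
                self._entities.append((entity_id, drawable))
                self._dirty = True

            def set_layer(self, entity_id, layer):
                for eid, drawable in self._entities:
                    if eid == entity_id:
                        drawable.layer = layer
                        self._dirty = True

            def draw(self):
                if self._dirty:
                    # Stable sort keeps insertion order within a layer.
                    self._entities.sort(key=lambda pair: pair[1].layer)
                    self._dirty = False
                for eid, d in self._entities:
                    print("draw", d.sprite, "(entity", eid, "- layer", d.layer, ")")

        rs = RenderingSystem()
        rs.add(1, DrawableComponent("player", layer=1))
        rs.add(2, DrawableComponent("tilemap", layer=0))
        rs.draw()   # tilemap first, player on top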

    Read the article

  • Looking for Non Hosted Audio & Video Podcasting Solution for Church Websites

    - by motboys
    I am looking for a solution that will do the following: the user uploads audio and/or video files with title, description, image, etc.; the solution embeds that info into ID3 tags; generates an RSS feed; and embeds the new content in our website, where it is searchable. This is for a couple of church websites I manage. I am looking for the ability to do the above with a sermon MP3 and also a video. At the moment we are doing it with multiple steps / people involved, and I want to automate the process. I can't seem to find a solution that does all of the above. Thank you!
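
    Two of the pieces (ID3 tagging and feed generation) are easy to script if a hosted product never turns up. A hedged Python sketch, assuming the mutagen library (pip install mutagen) for tags; the RSS item here is a bare-bones illustration, not a complete podcast feed.

        from xml.sax.saxutils import escape
        from mutagen.easyid3 import EasyID3

        def tag_mp3(path, title, artist):
            tags = EasyID3(path)   # raises ID3NoHeaderError if the file has no tag yet
            tags["title"] = title
            tags["artist"] = artist
            tags.save()

        def rss_item(title, description, url):
            return ("<item>"
                    "<title>%s</title>"
                    "<description>%s</description>"
                    "<enclosure url=\"%s\" type=\"audio/mpeg\"/>"
                    "</item>") % (escape(title), escape(description), escape(url))

        # tag_mp3("sermon.mp3", "Sunday Sermon", "Example Church")
        # print(rss_item("Sunday Sermon", "Week 12", "https://example.com/sermon.mp3"))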

    Read the article

  • Using RegEx's in Multi-Channel Funnels in Google Analytics

    - by Rob H
    For some reason, I can't get my multi-channel funnel, which utilizes RegExes in the path steps, to function - it keeps coming back with no data. There are a few variables which may be holding things up, but I can't figure out the origin of the problem, nor a solution. Here's the situation: the funnel is tracking conversions, defined as when a user completes 4 steps to sign up; steps are not "required"; the default URL is set to https://example.com; there is a 302 redirect set up on our site that leads from http://example.com to https://example.com; within the funnel, steps switch from non-secure pages (unless the browser is set to secure browsing) to secure pages once the user moves from the landing page to the second page of the sign-up process (after the account placeholder has been created); the URL at that point contains the publisher number as a variable within (but not at the end of) the URL. My RegExes are all properly written, as tested on rubular.com.
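
    It can help to sanity-check the step patterns locally before blaming the funnel, keeping in mind that Google Analytics matches funnel steps against the request path rather than the full URL, so the scheme and host should not appear in the patterns. The paths below are hypothetical examples, not taken from the question.

        import re

        steps = [
            r"^/signup$",               # step 1: landing page
            r"^/signup/account/\d+$",   # step 2: publisher number inside the path
            r"^/signup/profile/\d+$",   # step 3
            r"^/signup/confirm/\d+$",   # step 4
        ]

        visited = ["/signup", "/signup/account/12345",
                   "/signup/profile/12345", "/signup/confirm/12345"]

        for pattern, path in zip(steps, visited):
            print(pattern, "matches", path, "->", bool(re.search(pattern, path)))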

    Read the article

  • Help building pygtk application

    - by umpirsky
    This is my application. It is created with Quickly. I would like to package it for Ubuntu now. I tried to package it with Quickly, but that failed. At first, I was trying to install it using setup.py, but the code is only copied into the Python lib dir; no icon and no .desktop file are installed. Then I tried to follow this guide, but I ended up with a package without an icon, and it was of bad quality; most important of all, it does not use setup.py, and it was pretty complicated. I would like to automate the packaging process. Can someone point me in the right direction, with some examples of existing apps that have automated packaging, etc.? Thanks in advance.
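
    For the missing icon and .desktop file specifically, setup.py can install them via data_files; this is a minimal sketch with assumed file names. (Quickly-generated projects normally rely on python-distutils-extra to do this automatically, so checking that setup.py imports DistUtilsExtra.auto is worth a look too.)

        from distutils.core import setup

        setup(
            name="myapp",
            version="0.1",
            packages=["myapp"],
            scripts=["bin/myapp"],
            data_files=[
                ("share/applications", ["data/myapp.desktop"]),
                ("share/icons/hicolor/48x48/apps", ["data/myapp.png"]),
            ],
        )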

    Read the article

  • How to create an auto-grader in and for Python

    - by recluze
    I'm trying to create an auto-grader for one of my beginning programming courses for Python. From my online search, I've come to know that it is effectively a unit test framework that tests the student's code rather than production code, but I'm not really sure how to structure the flow of the program. Can anyone please provide a strategy for submission of code by students and automating the whole process of marking? For instance, how would the student code be submitted and then stored/structured on disk, and how would the grades be stored/reported? I'm only looking for a broad strategy and will try on my own to fill in the blanks. (I asked this on stackoverflow.com initially but it was considered off-topic and I was advised to ask here.)
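
    One broad strategy: collect one directory per student (however submission happens - upload form, VCS, email), import each student's module, run a fixed unittest suite against it, and write the tallies to a CSV gradebook. A minimal Python sketch, with the directory layout and the assignment's add() function as assumptions.

        import csv
        import importlib.util
        import pathlib
        import unittest

        def load_submission(py_file):
            # Import submissions/<student>/solution.py as a module object.
            spec = importlib.util.spec_from_file_location("solution", py_file)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            return module

        def make_suite(solution):
            class Tests(unittest.TestCase):
                def test_add(self):
                    self.assertEqual(solution.add(2, 3), 5)
            return unittest.defaultTestLoader.loadTestsFromTestCase(Tests)

        with open("grades.csv", "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["student", "passed", "failed"])
            for sub in pathlib.Path("submissions").glob("*/solution.py"):
                result = unittest.TextTestRunner(verbosity=0).run(
                    make_suite(load_submission(sub)))
                failed = len(result.failures) + len(result.errors)
                writer.writerow([sub.parent.name, result.testsRun - failed, failed])

    A real grader would also sandbox the student code (time limits, separate process) before trusting it enough to import.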

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests. Stubs Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems: In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up or even developed yet for the BizTalk developers to work with. By the time you open the target location to see the data BizTalk has sent, it may have been swept away. If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later. Learning how to use the source\target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort. By the time the data is visible in a UI it may have undergone further transformations. In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have “cleaned up” your data. It is harder to write BizUnit tests that clean up the data\logs after each test run. What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams. There may be licensing costs for setting up an instance of the external system. The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to: validate the data received; and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project. I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works. Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk. Tests Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. This will ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides much the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the behaviour-driven development approach described by Mike Stephenson delivers the following benefits: source: http://biztalkbddsample.codeplex.com – Video 1.
Requirements can be easily defined using Given/When/Then; requirements are close to the code, so they are easier to manage as features and scenarios; requirements are defined in domain language; the feature files can be used as part of the documentation; the documentation is accurate to the build of code and can be published with a release; the scenarios effectively document the behaviour without being excessive; the scenarios are maintained with the code; there's an abstraction between the intention and implementation of tests, making them easier to understand; and the requirements drive the testing. These same tests can also be used to drive load testing, as described here. If you don't do this ... If you don't follow the above "stubbed integration tests" approach, the developer will need to manually trigger the tests. This has the following risks: developers are unlikely to check all the scenarios, and all the expected conditions, each time; after the developer leaves, these manual test steps may be lost (what test scenarios are there? what test messages did they use for each scenario?); and there is no mechanism to prove adequate test coverage. A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then it carries the following risks: it moves the tests downstream, so problems will be found later in the process; testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.; and these automated tests may also get in the way of manual tests run on these environments.
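
    BizUnit itself is .NET-specific, but the shape of a stubbed integration test is language-neutral: stand up a throwaway stub in place of the real target, push a message through, and assert on what the stub received. A Python sketch of that shape, purely as an illustration (in a real BizTalk suite the integration layer, not the test itself, would deliver the message).

        import http.server
        import threading
        import unittest
        import urllib.request

        received = []

        class StubTarget(http.server.BaseHTTPRequestHandler):
            def do_POST(self):
                body = self.rfile.read(int(self.headers["Content-Length"]))
                received.append(body)          # recorded for later assertions
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"<ack/>")    # canned response, like a generic stub

            def log_message(self, *args):      # keep test output quiet
                pass

        class OrderFeedTest(unittest.TestCase):
            def test_message_reaches_target(self):
                server = http.server.HTTPServer(("127.0.0.1", 0), StubTarget)
                threading.Thread(target=server.serve_forever, daemon=True).start()
                url = "http://127.0.0.1:%d/" % server.server_port
                urllib.request.urlopen(url, data=b"<order id='1'/>")
                self.assertIn(b"<order id='1'/>", received)
                server.shutdown()

        if __name__ == "__main__":
            unittest.main()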

    Read the article

  • Returning Images from ASP.NET Web API

    - by bipinjoshi
    Sometimes you need to save and retrieve image data in SQL Server as part of Web API functionality. A common approach is to save images as physical image files on the web server and then store the image URL in a SQL Server database. However, at times you need to store image data directly in a SQL Server database rather than the image URL. When dealing with the latter scenario, you need to read images from the database and then return this image data from your Web API. This article shows the steps involved in this process. http://www.bipinjoshi.net/articles/4b9922c3-0982-4e8f-812c-488ff4dbd507.aspx
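
    The article's code is ASP.NET; purely to show the same idea in another stack, here is a minimal Python/Flask sketch that reads image bytes from a SQLite BLOB column and returns them with the stored content type. The table layout (images: id, mime, data) is an assumption.

        import sqlite3
        from flask import Flask, Response, abort

        app = Flask(__name__)

        @app.route("/images/<int:image_id>")
        def get_image(image_id):
            row = sqlite3.connect("app.db").execute(
                "SELECT mime, data FROM images WHERE id = ?", (image_id,)
            ).fetchone()
            if row is None:
                abort(404)
            mime, data = row
            return Response(data, mimetype=mime)   # e.g. image/png

        if __name__ == "__main__":
            app.run()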

    Read the article

  • Can't install gimp-plugin-registry

    - by Uri Herrera
    I tried to install the Ubuntu Studio Graphics meta-package; however, it didn't install correctly. The package gimp-plugin-registry just won't install; I tried the one in the Software Center and the one from the WebUpd8 PPA, and neither works. The following NEW packages will be installed: gimp-plugin-registry 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/1395kB of archives. After this operation, 3592kB of additional disk space will be used. (Reading database ... 402557 files and directories currently installed.) Unpacking gimp-plugin-registry (from .../gimp-plugin-registry_3.2-1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/gimp-plugin-registry_3.2-1_i386.deb (--unpack): trying to overwrite '/usr/lib/gimp/2.0/plug-ins/file-xmc', which is also in package gimp 2.7.3-2010110501~mm dpkg-deb: subprocess paste killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/gimp-plugin-registry_3.2-1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Unity Launcher and Menu Bar disappeared in 14.04

    - by Justice2497
    The menu bar and Unity launcher have disappeared in Ubuntu 14.04. I also cannot access the terminal via Ctrl + Alt + T. I have tried every solution for this issue that I can find, from re-installing Unity and Compiz to resetting Unity and everything else. I've gone into the Compiz settings manager and reactivated Unity. Nothing has worked. When trying to reset Unity, or do anything with it really, I get this message: (process:6332): dconf-WARNING **: failed to commit changes to dconf: Could not connect: Connection refused EDIT: Installed 14.04 a day ago. The issue happened after a reboot, after installing VirtualBox.

    Read the article

  • Do PHP-FPM (and other PHP handlers) need execute permissions on the PHP files they're serving?

    - by Andrew Cheong
    I read in a post at Server Fault that PHP-FPM needs execute permissions. However, the answer in When creating a website, what permissions and directory structure? only grants read and write permissions to PHP-FPM. Maybe I don't quite understand how PHP handlers (or CGI in general) work, but the two claims seem contradictory to me. As I understand, when Apache / Nginx gets a request for foobar.php, it "passes" the file to an appropriate handler. That is, I imagine it's as if www-root (or apache or whomever the webserver's running as) were to run some command, /usr/sbin/php-fpm foobar.php Actually, no, that's naive, I just realized. PHP-FPM must be a running instance (if it's to be performant, and cache, etc.), so probably PHP-FPM is just being told, "Hey, quick, process this file for me!" In either case, I don't see why execute permissions are necessary. It's not like the webserver needs to literally execute the file, i.e. ./foobar.php Is the Server Fault answer simply mistaken?
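
    The distinction is easy to demonstrate: an interpreter only needs to read a script, while direct execution (the ./foobar.php style) needs the execute bit. A small self-contained demo on a Unix-like system, with Python standing in for PHP:

        import os
        import subprocess
        import sys
        import tempfile

        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write("print('handled')\n")
            script = f.name

        os.chmod(script, 0o644)   # readable, but no execute bit

        # Works: the interpreter opens and reads the file, as PHP-FPM does.
        subprocess.run([sys.executable, script], check=True)

        try:
            # Fails: executing the file directly requires +x.
            subprocess.run([script], check=True)
        except PermissionError:
            print("direct execution refused without the execute bit")

        os.unlink(script)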

    Read the article

  • A bacon- (and module-) saving PowerShell incident

    - by AaronBertrand
    Earlier today I made a big goof. I opened a module in Notepad, intending to use it as the basis for a new module. I was in the process of using "File > Save As" when my phone rang just at the precise instant that, for some reason, made me click on "File > Save" by mistake. After hitting Ctrl+Z 30 times to try to get the old version of the module back, I remembered that Notepad has never had more than one level of Undo. Back when I was coding ASP by hand, I was very well aware of this, but I...(read more)

    Read the article

  • Webinar: Riding the Fence or Planning the Upgrade to 11gR2?

    - by Greg Jensen
    Is your organization riding the identity and access fence, unable to decide if you are ready to upgrade? Are you unsure what the technical and business value gains are in upgrading to Oracle's 11gR2? Or are you planning for the upgrade and just unsure of what to expect? In this webinar, experts from Oracle and AmerIndia will discuss the new features of 11gR2, the latest market trends, and how IAM transforms organizations. In addition, the planning and implementation strategy of the upgrade process will be discussed. The presenters will also share success stories and highlight challenges faced by organizations belonging to different verticals, and how Oracle’s solutions and AmerIndia’s services addressed those challenges. Topics include: Market trends and 11gR2; Planning an upgrade; Approach and implementation strategy; Success stories. Registration is now open for this webinar, which runs December 5th from 2pm - 3pm EST.

    Read the article

  • Game engine like Unity 3D that allows me to use .NET code

    - by Pking
    I've been looking at Unity 3D for developing a 3D PC game, and I really like the scene editor and how it simplifies the process of constructing 3D scenes and managing assets, animations, transitions, etc. However, I don't want to restrict myself to using Unity 3D scripts for handling every bit of game logic in the game. E.g., if I want to construct an RPG dialogue system, I don't want to do it with Unity 3D scripts - I'd like to use C#/.NET. Also, I might want to use e.g. Windows Azure and SQL Azure as a backend, and use 3rd-party .NET libraries such as Reactive Extensions, etc. Is there a .NET engine out there that helps me with asset loading, animations, physics, transitions, etc., and has a scene editor, but allows me to plug it into a Visual Studio .NET project? Thanks

    Read the article

  • Testing To Prevent Cascading Bugs

    - by jfrankcarr
    Yesterday, Twitter was hit with a "Cascading Bug" as described in this blog post: A “cascading bug” is a bug with an effect that isn’t confined to a particular software element, but rather its effect “cascades” into other elements as well. I've seen this kind of bug, on a smaller scale of course, on some projects I've worked on. They can be difficult to identify in dev/test environments, even within a test driven development environment. My questions are... What are some strategies you use, beyond the basic TDD and standard regression testing, to identify and prevent the potential trouble points that might only occur in the production environment? Does the presence of such problems indicate a breakdown in the software development process or simply a by-product of complex software systems?

    Read the article

  • VS2010 crashes when opening a vsp generated using VS 2012

    - by Tarun Arora
    I recently profiled some web applications using Visual Studio 2012; a vsp (Visual Studio Profile) file was generated as a result of the profiling session. I could successfully open the vsp file in Visual Studio 2012 as expected, but when I tried to open the vsp file in Visual Studio 2010, the VS2010 IDE crashed. As a responsible citizen I raised bug # 762202 on the Microsoft Connect site using the Microsoft Visual Studio 2012 Feedback Client. Note – In case you didn’t already know, a VSP generated in Visual Studio 2012 is not backward compatible. Please refer below for the steps to reproduce the issue and the resolution of the Connect bug. 1. Behaviour and Steps to Reproduce the Issue Description I have generated a vsp file by using the Visual Studio 2012 standalone profiler. When I try to open the vsp file in Visual Studio 2010, the IDE crashes. I understand that a vsp generated using VS 2012 cannot be opened in VS 2010, but the IDE crashing is not the behaviour I would expect to see. Steps to Reproduce the Issue 1. Pick up the standalone profiler from the VS 2012 installation media. The folder has both x64 and x86 installers; since the machine I am using is 64-bit, I have installed the x64 version of the standalone profiler. 2. I have configured the system path by setting the 'environment variable' path to where the profiler is installed. In my case this is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools 3. Created a new environment variable _NT_SYMBOL_PATH and set its value to CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*HTTP://MSDL.MICROSOFT.COM/DOWNLOAD/SYMBOLS;\\FOO\BUILD1234 4. Open up CMD as an administrator and run 'VSPerfASPNETCmd /tip http://localhost:56180/ /o:C:\Temp\SampleEISK.vsp' 5. This generates the following message on the cmd: Microsoft (R) VSPerf ASP.NET Command, Version 11.0.0.0 Copyright (C) Microsoft Corporation. All rights reserved. Configuring and attaching to ASP.NET process. Please wait. Setting up profiling environment. Starting monitor. Launching ASP.NET service. Attaching Monitor to process. Launching Internet Explorer. The profiler is attached to ASP.net. Please run your application scenario now. Press Enter to stop data collection... 6. I perform certain actions and then come back to the cmd and hit Enter to shut down the profiling. Once I do this, the following message is written to the cmd: Press Enter to stop data collection... Profiling now shut down. Report file "C:\Temp\SampleEISK.vsp" was generated. Running VsPerfReport, packing symbols into the .VSP. Shutting down profiling and restarting ASP.NET. Please wait. Restarting w3wp.exe. 7. I look in the C:\Temp folder and I can see the SampleEISK.vsp file generated. I can successfully open this file in Visual Studio 2012. 8. When I try to open the vsp file in VS 2010, the VS 2010 IDE crashes. Kaboooom! What I would expect to happen I expect to receive a message "VS 2010 does not support the vsp file generated by VS 2012". What actually happened The VS 2010 IDE crashed. 2. Resolution This is a valid bug! However, there isn't much value in releasing a hotfix for this issue. Refer below to the resolution provided by the Visual Studio Profiler Team. Thank you for taking the time to report this issue. We completely agree that Visual Studio 2010 should not crash. However in this particular case this is not a bug we are going to retroactively release a fix to 2010 for at this point.
Given that a fix would not unblock the scenario of opening a 2012-created file in Visual Studio 2010, and there is not an active update channel for Visual Studio 2010 other than manually locating and installing hotfixes, we will not be fixing this particular issue. Best Regards, Visual Studio Profiler Team. Though it would be great to improve the behaviour, this is not a defect that would stop you from progressing in any way. It's important to note, however, that VSP files generated by Visual Studio 2012 are not backward compatible, so you should refrain from opening these files in Visual Studio 2010.

    Read the article
