Search Results

Search found 65464 results on 2619 pages for 'web based backup'.


  • How can I get HTML to link to a browser (or system) specified URL?

    - by MrHatken
    Hi All, I'd like to be able to create a "HTML link" that the user can click on and be taken to an URL (location) specified either in the browser (preferences?) or system environment. Is this possible? Any suggestions on how to do it please? For example, it may look something like this (or alternatively it could be a clickable image or even a submit button): "Click here to go to your preferred news site." When the user clicks on "here" the browser would go to a location specified not in the HTML but somehow in the browser (preferences?) or some system environment variable (OS specific etc.) Of course, the user would have to set up this preference or environment variable (or have some local application or better Web page that could set it - when approved by the user). This is sort of like most OS these days allow you to set "preferred app" for image processing or playing media. I would like to set preferred Web sites for certain tasks. Thanks for any suggestions. Hopefully with Javascript and modern browsers and perhaps HTML 5 something like this is possible. Cheers, Ashley.
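
    There is no native HTML hook for a browser- or OS-level "preferred site" setting, so this kind of thing usually falls back to script. A rough sketch only - the localStorage key name and the URLs below are invented, and the preference still has to be stored beforehand by a page the user approves:

      <a href="#" id="news-link">Click here to go to your preferred news site.</a>
      <script>
        // A settings page the user trusts would store the preference first, e.g.:
        // localStorage.setItem('preferredNewsSite', 'https://news.example.com');
        document.getElementById('news-link').addEventListener('click', function (e) {
          e.preventDefault();
          var url = localStorage.getItem('preferredNewsSite');
          // Fall back to a default when no preference has been stored yet.
          window.location.href = url || 'https://news.example.com';
        });
      </script>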

    Read the article

  • ASP.NET Web Optimization - confusion about loading order

    - by Ciel
    Using the ASP.NET Web Optimization framework, I am attempting to load some JavaScript files. It works, except that I am running into a peculiar situation with either the loading order, the loading speed, or the execution; I cannot figure out which. Basically, I am using the Ace code editor for JavaScript, and I also want to include its autocompletion package. This requires two files: /ace.js and /ext-language_tools.js. This isn't an issue if I load both files the normal way (with <script> tags); it works fine. But when I try to use the Web Optimization bundles, something seems to go wrong. Trying this out... bundles.Add(new ScriptBundle("~/bundles/js").Include("~/js/ace.js").Include("~/js/ext-language_tools.js")); and then in the view... @Scripts.Render("~/bundles/js") I get the error "ace is not defined". This means that ace.js hasn't run, or hasn't loaded, because if I break it apart into two bundles, it starts working: bundles.Add(new ScriptBundle("~/bundles/js").Include("~/js/ace.js")); bundles.Add(new ScriptBundle("~/bundles/js/language_tools").Include("~/js/ext-language_tools.js")); Can anyone explain why it behaves this way?
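
    For reference, a single bundle emits its files in the order they are included, so a registration along these lines (file paths taken from the question; the BundleConfig class name is the usual convention rather than the poster's code) is the pattern the framework expects. It is a sketch for comparison, not a diagnosis of the ordering problem described above:

      using System.Web.Optimization;

      public class BundleConfig
      {
          public static void RegisterBundles(BundleCollection bundles)
          {
              // Files listed in one bundle are rendered in include order,
              // so ace.js is loaded before ext-language_tools.js.
              bundles.Add(new ScriptBundle("~/bundles/js")
                  .Include("~/js/ace.js",
                           "~/js/ext-language_tools.js"));
          }
      }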

    Read the article

  • ASP.NET Web Service returning XML result and nodevalue is always null

    - by kburnsmt
    I have an ASP.NET web service which returns an XmlDocument. The web service is called from a Firefox extension using XMLHttpRequest: var serviceRequest = new XMLHttpRequest(); serviceRequest.setRequestHeader("Content-Type", "text/xml; charset=utf-8"); I consume the result using responseXML. So far so good. But when I iterate through the XML and retrieve nodeValue, nodeValue is always null. When I check the nodeType, it is type 1 (Node.ELEMENT_NODE == 1), and the documentation for Node.nodeValue states that all nodes of type Element return null. In my web service I have created a string containing the XML, e.g. xml = "<Name>Hank</Name>", and I then create the XmlDocument: XmlDocument doc = new XmlDocument(); doc.LoadXml(xml); I know I can specify the node type using CreateNode. But when I am just building the XML by appending string values, is there a way to get a Text node so that Node.nodeValue will return the content of the text node?
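
    On the server side, one way to guarantee the value really is a child text node is to build the document explicitly instead of concatenating strings; this is a sketch with example element names, not the poster's schema:

      using System.Xml;

      static class XmlSketch
      {
          public static XmlDocument BuildDocument()
          {
              var doc = new XmlDocument();
              XmlElement root = doc.CreateElement("Author");
              XmlElement name = doc.CreateElement("Name");

              // Wrap the string in an explicit text node so DOM clients
              // see a child of nodeType Text under the element.
              name.AppendChild(doc.CreateTextNode("Hank"));
              root.AppendChild(name);
              doc.AppendChild(root);

              return doc;   // doc.OuterXml is "<Author><Name>Hank</Name></Author>"
          }
      }

    On the client side the text is then reached through the element's child, e.g. node.firstChild.nodeValue (or textContent), since nodeValue of the element node itself is defined to be null.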

    Read the article

  • Consuming a HTTPS Web Service in .Net 3.5 Web Project

    - by Chris M
    I'm trying to consume a webservice that ONLY runs on HTTPS but using the "add service" method in VS or using the WSDL to generate a code file leaves me with a web service that states its http... <wsdl:service name="OGServ"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">XML Web Services element of OGServ Gateway</wsdl:documentation> <wsdl:port name="OGServSoap" binding="tns:OGServSoap"> <soap:address location="http://ogserv.domain.co.uk/ogwsrv/og.asmx" /> </wsdl:port> <wsdl:port name="OGServSoap12" binding="tns:OGServSoap12"> <soap12:address location="http://ogserv.domain.co.uk/ogwsrv/og.asmx" /> </wsdl:port> </wsdl:service> Would this be the reason that even when I change the app.config (generated by the add-service) endpoint address to https it says it was expecting HTTP? The error: EC.Tests.OGGatewayLayerTest (TestFixtureSetUp): System.ArgumentException : The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via
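
    A common cause of that exception is the generated basicHttpBinding still using its default (HTTP) security mode; switching the client binding to transport security usually lines it up with an https:// address. A config sketch only, with placeholder binding, endpoint and contract names rather than the ones generated for this particular service:

      <system.serviceModel>
        <bindings>
          <basicHttpBinding>
            <binding name="OGServSoapHttps">
              <!-- Transport security = SSL/TLS, which matches an https:// address -->
              <security mode="Transport" />
            </binding>
          </basicHttpBinding>
        </bindings>
        <client>
          <endpoint address="https://ogserv.domain.co.uk/ogwsrv/og.asmx"
                    binding="basicHttpBinding"
                    bindingConfiguration="OGServSoapHttps"
                    contract="OGServ.OGServSoap"
                    name="OGServSoap" />
        </client>
      </system.serviceModel>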

    Read the article

  • Remove php extension from URL on Windows hosting account using web.config

    - by asprin
    I've searched before asking this question. The answered ones were related to Linux hosting account and the ones with Windows hosting account didn't match what I was looking for. As you might have guessed, I've a Windows shared hosting account with godaddy. My aim was to remove the '.php' extension from the url. After researching I found that .htaccess would do exactly what I want. But I also found that .htaccess doesn't work in Windows environment and that I'll need a web.config file to do the same task. Now I know there are modules through which the code can be generated, but the problem is I don't know how to get them installed on my hosting account. I don't want to go through the process of contacting the people over at godaddy and hence I'm looking to solve this on my own. What I'm looking for is a web.config equivalent of .htaccess This is what I'm trying to achieve: Current URL : www.abcdef.com/contact.php Desired URL : www.abcdef.com/contact Any help would be greatly appreciated. Thanks, Nisar.
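
    On IIS 7+ shared hosting this is usually done with a URL Rewrite rule in web.config; whether the URL Rewrite module is actually enabled on a given GoDaddy plan is an assumption, and the rule name below is arbitrary:

      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="RemovePhpExtension" stopProcessing="true">
                <match url="^(.*)$" />
                <conditions>
                  <!-- rewrite only when the extensionless path is not a real file
                       or folder but the .php version does exist -->
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  <add input="{REQUEST_FILENAME}.php" matchType="IsFile" />
                </conditions>
                <action type="Rewrite" url="{R:1}.php" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    With such a rule in place, requests for www.abcdef.com/contact are served by contact.php while the visible URLs stay extensionless.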

    Read the article

  • Suitable web framework for the following scenario

    - by Paralife
    I have the following scenario: I have a view in an Oracle server and all Iwant is to show that view in a web browser, along with an input field or two for basic filtering. No users, no authentication, just this view maybe with a column or two linking to a second page for master detail viewing. The children are just string descriptions of the columns of the master that contain IDs. No inserts or updates. The question is which is the JAVA based web framework of choice that can accomplish the above in the minimum amount of code lines code time(subjective but also kind of objective if someone has expirience with more than one or two frameworks) configuration effort deployment effort and requirements. dependencies and mem footprint Also: 6. Oracle APEX is not an option. 3,4 and 5 are maybe the same in the sense that they are everything except the functionality coding. I want something that I can compile, deploy by just FTPing to the database host, run and forget. (e.g. For the deployment aspect, Hudson way comes in mind (java -jar hudson.war and that's all)). Also: 3,4 have priority over 1 and 2. (Explanation with a rant: I dont mind coding a lot as long as it is application code and not "why the fuck do we still use javascript over http for everything" code) Thanks.

    Read the article

  • How to get an error message out of my webpage

    - by Vaccano
    I have a web page that, when I run it on a remote computer, gives me the message saying that remote errors cannot be viewed. When I go to view it on my web server machine, I get a message saying: "Internet Explorer cannot display the webpage. Most likely causes: You are not connected to the Internet. The website is encountering problems. There might be a typing error in the address. What you can try: Check your Internet connection. Try visiting another website to make sure you are connected. Retype the address. Go back to the previous page." I can get to Google fine, so it is not the internet connection, but this message gives me nothing to work on. How can I get more info as to why my page is not working? I tried going to IIS Manager, right-clicking on the site and selecting Browse, but my site is an HTTPS site so that does not work. Any ideas would be great.
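
    If the site is ASP.NET on IIS 7 or later, the usual first step is to stop hiding the real error: turn custom errors off and ask IIS for detailed error pages (and check the Windows event log on the server). A temporary web.config sketch, assuming the host allows these settings to be overridden:

      <configuration>
        <system.web>
          <!-- show the real ASP.NET exception instead of the generic page -->
          <customErrors mode="Off" />
          <compilation debug="true" />
        </system.web>
        <system.webServer>
          <!-- show detailed IIS error pages (status code, module, error code) -->
          <httpErrors errorMode="Detailed" />
        </system.webServer>
      </configuration>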

    Read the article

  • NHibernate / ORM - Child Update over Web Service

    - by tyndall
    What is the correct way to UPDATE a child object with NHibernate without having to "wake" the parent object? Let's say you would like to avoid this because the parent object is large or expensive to instantiate. Let's assume the classes are called Author (parent) and Book (child), and we are still trying to avoid instantiating Author. Book comes back over a web service as XML and gets deserialized back into a CLR object. Book has an AuthorId property which allows this to happen, but it also has an Author property. The problem comes when you try to SaveOrUpdate() the Book: the author_id in the database gets wiped out because Author was null when the object was deserialized. This seems like it would be a common problem. What is the workaround? Also, if you do instantiate the Author, it has a Books property, and the book you are trying to update is already one of these books (List<Book>). We have also run into the "a different object with the same identifier value was already associated with the session" problem. What is the standard process to update a child over a web service?
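
    One workaround often suggested for this shape of problem is to re-associate the deserialized Book with an Author proxy via ISession.Load(), which preserves the foreign key without pulling the parent from the database, and to reattach with Merge(). A sketch only - the session handling and mapping details are assumptions based on the post:

      using NHibernate;

      static class BookUpdater
      {
          public static void UpdateBook(ISessionFactory sessionFactory, Book book)
          {
              using (ISession session = sessionFactory.OpenSession())
              using (ITransaction tx = session.BeginTransaction())
              {
                  // Load() returns an uninitialized proxy - no SELECT is issued for
                  // the Author, but author_id is preserved when the Book is written.
                  book.Author = session.Load<Author>(book.AuthorId);

                  // Merge() copies the detached state onto the session's own instance,
                  // which also avoids the "different object with the same identifier
                  // value was already associated with the session" error.
                  session.Merge(book);
                  tx.Commit();
              }
          }
      }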

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. The increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this subject, and I want to point all of you looking for further information to the same posts. SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2 SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3 SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to leave this question open to all of you and see what the community thinks: is there any other way than creating a covering index or included index to remove this expensive key lookup? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
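
    For readers who want to see the covering-index approach the question takes as its baseline, a sketch against a hypothetical table looks like this; the key lookup disappears once every column the query touches is either a key column or an INCLUDEd column of one index:

      -- hypothetical table and columns, for illustration only
      CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
      ON dbo.Orders (CustomerID)
      INCLUDE (OrderDate, TotalDue);

      -- this query can now be answered from the index alone, with no key lookup
      SELECT OrderDate, TotalDue
      FROM dbo.Orders
      WHERE CustomerID = 42;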

    Read the article

  • SQL SERVER – Installing AdventureWorks for SQL Server 2011

    - by pinaldave
    I just began with SQL Server 2011 Denali CTP1. The very first thing I realized is that there is no AdventureWorks sample database available for Denali. I quickly searched online and reached the Microsoft documentation, which provides information on how to install (restore) AdventureWorks for SQL Server 2011 Denali. Download AdventureWorks from here, then run the following script (replace the path to the mdf file): CREATE DATABASE AdventureWorks2008R2 ON (FILENAME = 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_Data.mdf') FOR ATTACH_REBUILD_LOG; When you run the above script it will give you the following messages and you are DONE! File activation failure. The physical file name "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2008R2_Log.ldf" may be incorrect. New log file 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_log.ldf' was created. Converting database 'AdventureWorks2008R2' from version 679 to the current version 684. Database 'AdventureWorks2008R2' running the upgrade step from version 679 to version 680. Database 'AdventureWorks2008R2' running the upgrade step from version 680 to version 681. Database 'AdventureWorks2008R2' running the upgrade step from version 681 to version 682. Database 'AdventureWorks2008R2' running the upgrade step from version 682 to version 683. Database 'AdventureWorks2008R2' running the upgrade step from version 683 to version 684. I will soon write about my experience with Denali. However, SQL Server Management Studio has started to look more like Visual Studio. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • rsync'd a folder, folder doesn't show up, but free disk space decreased

    - by Patrick
    I am currently trying to switch from Mac to a Windows/Ubuntu dual boot (on 2 separate internal HDDs), but ran into some trouble restoring my documents. I am not sure all the information below is necessary, but if I knew how to solve it, I wouldn't ask it here. Before buying this laptop I backed up my Mac to an external HDD with Carbon Copy Cloner. I wanted to put these files in my user folder on my Windows HDD, but I could not do that from inside Windows (the backup is on the Mac's HFS+ format), so I used rsync from inside Ubuntu to copy the documents from the external HDD to the Windows partition. It seemed like it went okay, but from inside Windows (and later also Ubuntu) the folder doesn't show up. My free HDD space, however, has decreased by about 200 GB (the size of the backup) when looking at the disk properties (from inside Windows and Ubuntu). rsync command I used: rsync -av /media/patrick/Toshiba\ 1.5T/Users/patrickvandenberg/ /media/patrick/Windows8_OS/Users/Patrick/MacBackup/ Folder does not exist: patrick@patrick-Lenovo-IdeaPad-Y410P:~$ cd /media/patrick/Windows8_OS/Users/Patrick/MacBackup bash: cd: /media/patrick/Windows8_OS/Users/Patrick/MacBackup: No such file or directory Size of disk: patrick@patrick-Lenovo-IdeaPad-Y410P:~$ du -hs /media/patrick/Windows8_OS/ 195G /media/patrick/Windows8_OS/ Size of disk according to Disk properties: http://i.stack.imgur.com/OteMX.png (not enough rep to insert the image)

    Read the article

  • Keeping multiple root directories in a single partition

    - by intuited
    I'm working out a partition scheme for a new install. I'd like to keep the root filesystem fairly small and static, so that I can use LVM snapshots to do backups without having to allocate a ton of space for the snapshot. However, I'd also like to keep the number of total partitions small. Even with LVM, there's inevitably some wasted space and it's still annoying and vaguely dangerous to allocate more. So there seem to be a couple of different options: Have the partition that will contain bulky, variable files, like /srv, /var, and /home, be the root partition, and arrange for the core system state — /etc, /usr, /lib, etc. — to live in a second partition. These files can (I think) be backed up using a different backup scheme, and I don't think LVM snapshots will be necessary for them. The opposite: putting the big variable directories on the second partition, and having the essential system directories live on the root FS. Either of these options require that certain directories be pointers of some variety to subdirectories of a second partition. I'm aware of two different ways to do this: symlinks and bind-mounts. Is one better than the other for this purpose? Is there another option? Do any of the various Ubuntu installation media/strategies support this style of partition layout?
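
    For the bind-mount variant, the glue is a handful of /etc/fstab entries; the device names and mount points below are examples only, not a recommendation for a particular layout:

      # second (LVM) partition holding the bulky, variable data, mounted once...
      /dev/mapper/vg0-bulk   /mnt/bulk   ext4   defaults   0  2
      # ...and exposed at the usual paths via bind mounts
      /mnt/bulk/home         /home       none   bind       0  0
      /mnt/bulk/srv          /srv        none   bind       0  0
      /mnt/bulk/var          /var        none   bind       0  0

    Compared with symlinks, bind mounts keep the paths canonical for software that resolves links, at the cost of one fstab line each, and the normal fstab pass mounts them before services start.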

    Read the article

  • How do i make my existing ubuntu in a bootable installation CD? I tried remastersys but fails with 11.10

    - by YumYumYum
    I need to install 10 PCs with identical setup and hardware, so I was trying remastersys, but it's failing. How can I resolve this, or use something else to achieve it? Updating the remastersys.log: cat: /home/remastersys/remastersys/tmpusers: No such file or directory Cleaning up the install icon from the user desktops Removing the ubiquity frontend as it has been included and is not needed on the normal system Calculating the installed filesystem size for the installer Removing remastersys-firstboot from system startup Removing any system startup links for /etc/init.d/remastersys-firstboot ... /etc/rc0.d/K20remastersys-firstboot /etc/rc1.d/K20remastersys-firstboot /etc/rc2.d/S20remastersys-firstboot /etc/rc3.d/S20remastersys-firstboot /etc/rc4.d/S20remastersys-firstboot /etc/rc5.d/S20remastersys-firstboot /etc/rc6.d/K20remastersys-firstboot Making disk compatible with Ubuntu Startup Disk Creator. Creating md5sum.txt for the livecd/dvd Creating /var/tmp/custom.iso in /home/remastersys/remastersys The iso was not created. There was a problem. Exiting Follow up: 1) I am unhappy that nothing exists for recovering/backing up 11.10. 2) Anyway, I had to do it. 3) I did not use the popular Clonezilla because it does not offer me an iso. 4) I downloaded: http://clonezilla-sysresccd.hellug.gr/download.html a) created a bootable CD from that ISO, b) booted and followed these steps: http://clonezilla-sysresccd.hellug.gr/restore.html c) got an iso file with everything on it, including boot loaders. 5) Then on another system I used the same CD to restore my image. It worked perfectly.

    Read the article

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
    SQL Server 2012 has many interesting features, but the most talked-about feature is AlwaysOn. Performance tuning is always a hot topic; I see a lot of need for it and a lot of business around it. However, many times when people talk about performance tuning they think of it as either query tuning, performance tuning, or server tuning. All are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if a performance tuning expert analyzes the workload and realizes that there are plenty of reports as well as read-only queries on the server, they can certainly consider alternate options. If the read-only data is not required in real time, or the business can accept data that is delayed a bit, it makes sense to divide the workload. A secondary replica of the original data which can serve all the read-only queries and reports is a good idea in most cases where there is plenty of workload that does not depend on real-time data. SQL Server 2012 has introduced the AlwaysOn feature, which fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently announced a white paper on exactly this subject. I recommend reading it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download white paper: AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc. to be able to record whether the item was related to Research and Development and, if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don’t use, to hold a small set of custom codes which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: we are also using Visual Studio 2012.) Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don’t appear to be answers at all given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do: Go here and get the “TFS Service Credential Viewer”. Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else. In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”: Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.VersionControl.Client.dll Microsoft.TeamFoundation.VersionControl.Common.dll Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll Microsoft.TeamFoundation.WorkItemTracking.Common.dll If hosting this in Internet Information Server, you will need to enable 32 bit support for the application pool this app runs under. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below: <appSettings> <!-- Add reference to TFS Client Cache --> <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" /> </appSettings> With all that in place, you can write the following code: var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}"); var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token); var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds); currentCollection.EnsureAuthenticated(); In the above code, note that the URL contains “defaultcollection” at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance.
In addition, make sure the service user account and password that were generated in the first step are substituted in here. Note: if something is not right, the EnsureAuthenticated() call will throw an exception with a message saying you are not authorised. If you forget the “defaultcollection” on the URL, it will still fail, but with a message saying you are not authorised; that is, a similar but different exception message. And that is it. You can then query the collection using something like: var service = currentCollection.GetService<WorkItemStore>(); var proj = service.Projects[0]; var allQueries = proj.StoredQueries; for (int qcnt = 0; qcnt < allQueries.Count; qcnt++) { var query = allQueries[qcnt]; var queryDesc = string.Format("Query found named: {0}", query.Name); } You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard, since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.

    Read the article

  • Grid pathfinding with a lot of entities

    - by Vee
    I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn-based and tile-based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding. Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in screenshot), there can be doors that open and close and change the level's layout. Impressive. I've tried implementing pathfinding in my clone. First attempt was making every roach find a path to a yellow guy every turn, using a breadth-first search algorithm. Obviously incredibly slow with more than a single roach, and would get exponentially slower with more than a single yellow guy. Second attempt was mas making every yellow guy generate a pathmap (still breadth-first search) every time he moved. Worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. Last attempt was implementing JPS (jump point search). Every entity would individually calculate a path to its target. Fast, but with a limited number of entities. Having less than half the entities in the screenshot would make the game slow. And also, I had to get the "closest" enemy by calculating distance, not shortest path. I've asked on the DROD forums how they did it, and a user replied that it was breadth-first search. The game is open source, and I took a look at the source code, but it's C++ (I'm using C#) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. And I believe that DROD generates global pathmaps, somehow, but I can't understand how every entity find the best individual path to other entities that move every turn. What's the trick? This is a reply I just got on the DROD forums: Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room: One to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is "move towards nearest enemy/friendly"... just mark every tile with the number of moves it takes to the nearest target and have the entity chose the move that takes it to the tile with the lowest number. To be honest, I don't understand it that well.
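
    The reply's "one pathmap for the whole room" idea is a multi-source breadth-first flood fill: seed the queue with every target at once, and every tile ends up labelled with its distance to the nearest target, in O(tiles) per turn regardless of how many entities there are. A rough C# sketch - the grid representation is an assumption, not taken from either game:

      using System.Collections.Generic;

      static class PathMaps
      {
          public static int[,] BuildPathMap(bool[,] walkable, IEnumerable<(int X, int Y)> targets)
          {
              int w = walkable.GetLength(0), h = walkable.GetLength(1);
              var dist = new int[w, h];
              for (int x = 0; x < w; x++)
                  for (int y = 0; y < h; y++)
                      dist[x, y] = int.MaxValue;

              var queue = new Queue<(int X, int Y)>();
              foreach (var t in targets) { dist[t.X, t.Y] = 0; queue.Enqueue(t); }

              int[] dx = { 1, -1, 0, 0, 1, 1, -1, -1 };   // 8-way movement
              int[] dy = { 0, 0, 1, -1, 1, -1, 1, -1 };

              while (queue.Count > 0)
              {
                  var (cx, cy) = queue.Dequeue();
                  for (int i = 0; i < dx.Length; i++)
                  {
                      int nx = cx + dx[i], ny = cy + dy[i];
                      if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                      if (!walkable[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
                      dist[nx, ny] = dist[cx, cy] + 1;
                      queue.Enqueue((nx, ny));
                  }
              }
              return dist;
          }
      }

    Two such maps per turn (one seeded with the roaches, one with the yellow guys) are enough: each entity simply steps onto the neighbouring tile with the lowest value in the map for its own target type.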

    Read the article

  • How to read default key value with dconf or gsettings?

    - by Zta
    I would like to know the default value of a dconf/gsettings key. My question is a followup of the question below: Where can I get a list of SCHEMA / PATH / KEY to use with gsettings? What I'm trying to do, so create a script that reads all my personal preferences so I can back them up and restore them. I plan to iterate though all keys, like the script above, see what keys have been changed from their default value, and make a note of these, that can be restored later. I see that the dconf-editor display the keys' default value, but I'd very much like to script this. Also, I don't see how parsing the schemas /usr/share/glib-2.0/schemas/ can be automated. Maybe someone can help? gsettings get-default|list-defaults would be nice =) (Geesh, it was much easier in the old days where you just kept your ~/.somethingrc in subversion ... =\ Based on the answer given below, I've updated the script to print schema, key, key's data type, default value, and actual value: #!/bin/bash for schema in $(gsettings list-schemas | sort); do for key in $(gsettings list-keys $schema | sort); do type="$(gsettings range $schema $key | tr "\n" " ")" default="$(XDG_CONFIG_HOME=/tmp/ gsettings get $schema $key | tr "\n" " ")" value="$(gsettings get $schema $key | tr "\n" " ")" echo "$schema :: $key :: $type :: $default :: $value" done done This workaround basically covers what I need. I'll continue working on the backup scrip from here.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved. Background Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation. If you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles), dedicated CPU allocation in which a number of CPUs are exclusively dedicated to an application's use. You can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example. And, capped CPU in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.) Workload management Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience) that's not really workload management. With resource management one controls the resources, and hope that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80% Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we to often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.) Prior art There are some operating environments that take a stab about workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource whose needed to remove the bottleneck? Actual workload management That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust it as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. 
If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective. Instrumenting non-instrumented applications There's one little problem here: how do I measure application performance in a way relating to a service level. I don't want to do it based on internal resources like number of CPU seconds it received per minute - We need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility let's me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit" so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust. Summary After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resource constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.

    Read the article

  • rotate player based off of joystick

    - by pengume
    Hey everyone I have this game that i am making in android and I have a touch screen joystick that moves the player around based on the joysticks position. I cant figure out how to also get the player to rotate at the same angle of the joystick. so when the joystick is to the left the players bitmap is rotated to the left as well. Maybe someone here has some sample code I could look at here is the joysticks class that I am using. `public class GameControls implements OnTouchListener { public float initx = DroidzActivity.screenWidth - 45; //255; // 320 og 425 public float inity = DroidzActivity.screenHeight - 45;//425; // 480 og 267 public Point _touchingPoint = new Point( DroidzActivity.screenWidth - 45, DroidzActivity.screenHeight - 45); public Point _pointerPosition = new Point(DroidzActivity.screenWidth - 100, DroidzActivity.screenHeight - 100); // ogx 220 ogy 150 private Boolean _dragging = false; private boolean attackMode = false; @Override public boolean onTouch(View v, MotionEvent event) { update(event); return true; } private MotionEvent lastEvent; public boolean ControlDragged; private static double angle; public void update(MotionEvent event) { if (event == null && lastEvent == null) { return; } else if (event == null && lastEvent != null) { event = lastEvent; } else { lastEvent = event; } // drag drop if (event.getAction() == MotionEvent.ACTION_DOWN) { if ((int) event.getX() > 0 && (int) event.getX() < 50 && (int) event.getY() > DroidzActivity.screenHeight - 160 && (int) event.getY() < DroidzActivity.screenHeight - 0) { setAttackMode(true); } else { _dragging = true; } } else if (event.getAction() == MotionEvent.ACTION_UP) { if(isAttackMode()){ setAttackMode(false); } _dragging = false; } if (_dragging) { ControlDragged = true; // get the pos _touchingPoint.x = (int) event.getX(); _touchingPoint.y = (int) event.getY(); // Log.d("GameControls", "x = " + _touchingPoint.x + " y = " //+ _touchingPoint.y); // bound to a box if (_touchingPoint.x < DroidzActivity.screenWidth - 75) { // og 400 _touchingPoint.x = DroidzActivity.screenWidth - 75; } if (_touchingPoint.x > DroidzActivity.screenWidth - 15) {// og 450 _touchingPoint.x = DroidzActivity.screenWidth - 15; } if (_touchingPoint.y < DroidzActivity.screenHeight - 75) {// og 240 _touchingPoint.y = DroidzActivity.screenHeight - 75; } if (_touchingPoint.y > DroidzActivity.screenHeight - 15) {// og 290 _touchingPoint.y = DroidzActivity.screenHeight - 15; } // get the angle setAngle(Math.atan2(_touchingPoint.y - inity, _touchingPoint.x - initx) / (Math.PI / 180)); // Move the ninja in proportion to how far // the joystick is dragged from its center _pointerPosition.y += Math.sin(getAngle() * (Math.PI / 180)) * (_touchingPoint.x / 70); // og 180 70 _pointerPosition.x += Math.cos(getAngle() * (Math.PI / 180)) * (_touchingPoint.x / 70); // make the pointer go thru if (_pointerPosition.x > DroidzActivity.screenWidth) { _pointerPosition.x = 0; } if (_pointerPosition.x < 0) { _pointerPosition.x = DroidzActivity.screenWidth; } if (_pointerPosition.y > DroidzActivity.screenHeight) { _pointerPosition.y = 0; } if (_pointerPosition.y < 0) { _pointerPosition.y = DroidzActivity.screenHeight; } } else if (!_dragging) { ControlDragged = false; // Snap back to center when the joystick is released _touchingPoint.x = (int) initx; _touchingPoint.y = (int) inity; // shaft.alpha = 0; } } public void setAttackMode(boolean attackMode) { this.attackMode = attackMode; } public boolean isAttackMode() { return attackMode; } public void setAngle(double angle) { 
this.angle = angle; } public static double getAngle() { return angle; } }` I should also note that the player has animations based on when he is moving or attacking. EDIT: I got the angle and am rotating the sprite around in the correct angle however it rotates on the wrong spot. My sprite is one giant bitmap that gets cut into four pieces and only one shown at a time to animate walking. here is the code I am using to rotate him right now. ` public void draw(Canvas canvas,int pointerX, int pointerY) { Matrix m; if (setRotation){ // canvas.save(); m = new Matrix(); m.reset(); // spriteWidth and spriteHeight are for just the current frame showed //m.setTranslate(spriteWidth / 2, spriteHeight / 2); //get and set rotation for ninja based off of joystick m.preRotate((float) GameControls.getRotation()); //create the rotated bitmap flipedSprite = Bitmap.createBitmap(bitmap , 0, 0,bitmap.getWidth(),bitmap.getHeight() , m, true); //set new bitmap to rotated ninja setBitmap(flipedSprite); setRotation = false; // canvas.restore(); Log.d("Ninja View", "angle of rotation= " +(float) GameControls.getRotation()); } ` And then the draw method // create the destination rectangle for the ninjas current animation frame // pointerX and pointerY are from the joystick moving the ninja around destRect = new Rect(pointerX, pointerY, pointerX + spriteWidth, pointerY + spriteHeight); canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);
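
    A frequent reason for "rotates on the wrong spot" in code like this is that the bitmap is rotated about its own origin rather than the current frame's centre. One alternative sketch rotates the canvas around the centre of the destination rectangle at draw time instead of rebuilding the bitmap; the field and method names (bitmap, spriteWidth, spriteHeight, getSourceRect(), GameControls.getRotation()) are taken from the post, the rest of the draw() signature is assumed:

      public void draw(Canvas canvas, int pointerX, int pointerY) {
          Rect destRect = new Rect(pointerX, pointerY,
                                   pointerX + spriteWidth, pointerY + spriteHeight);

          canvas.save();
          // pivot on the centre of the frame being drawn, not the bitmap's origin
          canvas.rotate((float) GameControls.getRotation(),
                        pointerX + spriteWidth / 2f,
                        pointerY + spriteHeight / 2f);
          canvas.drawBitmap(bitmap, getSourceRect(), destRect, null);
          canvas.restore();
      }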

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager: Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache. Use of TransactionMap interface via the included resource adapter. Use of the new ACID transaction framework, introduced in Coherence 3.6.   Each of these may have significant drawbacks for certain workloads. Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation. TransactionMap is generally used when Coherence acts as system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be either used to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being: The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects). If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). 
Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity). The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove(). The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of this is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature. In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.

    Read the article

  • Expression Web 4 - Master Page Error

    - by Eric J.
    I created an ASP.Net Web Application in VS 2010. That in turn creates an example Site.Master, Default.aspx, and several other example files. I then opened Default.aspx in Expression Web 4 and get the error message The Master Page file 'Site.Master' cannot be loaded. Default.aspx can still be displayed fine in VS 2010. Source.Master: <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.master.cs" Inherits="SampleWebApp.SiteMaster" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head runat="server"> <title></title> <link href="~/Styles/Site.css" rel="stylesheet" type="text/css" /> <asp:ContentPlaceHolder ID="HeadContent" runat="server"> </asp:ContentPlaceHolder> <style type="text/css"> .style1 { font-family: Tunga; } </style> </head> <body> <form runat="server"> <div class="page"> <div class="header"> <div class="title"> <h1 class="style1"> My Application Master Page</h1> </div> <div class="loginDisplay"> <asp:LoginView ID="HeadLoginView" runat="server" EnableViewState="false"> <AnonymousTemplate> [ <a href="~/Account/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a> ] </AnonymousTemplate> <LoggedInTemplate> Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" /></span>! [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect" LogoutText="Log Out" LogoutPageUrl="~/"/> ] </LoggedInTemplate> </asp:LoginView> </div> <div class="clear hideSkiplink"> <asp:Menu ID="NavigationMenu" runat="server" CssClass="menu" EnableViewState="false" IncludeStyleBlock="false" Orientation="Horizontal"> <Items> <asp:MenuItem NavigateUrl="~/Default.aspx" Text="Home"> <asp:MenuItem NavigateUrl="~/Home/NewItem.aspx" Text="New Item" Value="New Item"></asp:MenuItem> <asp:MenuItem NavigateUrl="~/Home/AnotherItem.aspx" Text="Another Item" Value="Another Item"></asp:MenuItem> </asp:MenuItem> <asp:MenuItem NavigateUrl="~/About.aspx" Text="About"/> <asp:MenuItem NavigateUrl="~/ContactUs.aspx" Text="ContactUs" Value="ContactUs"> </asp:MenuItem> </Items> </asp:Menu> </div> </div> <div class="main"> <asp:ContentPlaceHolder ID="MainContent" runat="server"/> </div> <div class="clear"> </div> </div> <div class="footer"> </div> </form> </body> </html> Default.aspx: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="SampleWebApp._Default" %> <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> <style type="text/css"> .style2 { color: #669900; } .style3 { background-color: #FFFFCC; } </style> </asp:Content> <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <h2> Welcome to MY page! </h2> <p> To learn more <span class="style2"><strong><em><span class="style3">about</span></em></strong></span> ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>. </p> <p> You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>. </p> </asp:Content> Any idea how to get the master page to work properly in Expression Web 4?

    Read the article

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    I've used rsync/ssh for some time to back up my shared host contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a password-less ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe it's since then) the backup won't work anymore. I get the following error on the host: rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32) ERROR: module is read only ...which I do not understand. Besides that, nothing changed that I know of on either the source or the destination that could be related to rsync or ssh, and I did check a few things; all seems to be alright: I can still connect through ssh from the host to my NAS with the right user, so ssh things like keys haven't changed. I also have the correct file permissions on the NAS (I checked, and also tried to create files and directories with the user used by rsync through ssh). I read here and there that the error means I have to ensure that my rsyncd.conf has read only = no in it, but as far as I know I never used rsyncd, never configured anything for it, and until now it worked like a charm. I use the following command to do the backup: rsync -ab --recursive \ --files-from="$FILES_FROM" \ --backup-dir=backup_$SUFFIX \ --delete \ --filter='protect backup_*' \ $WDIRECTORY/ \ remote_backup:$REMOTE_BACKUP/ So I'm stuck and really can't figure out what happened. Edit: As suggested in comments, I also tried passing commands to ssh (but not from inside an ssh session), which worked as expected, and also tried a single rsync command, which didn't work, failing just like the complete backup command. (sharedHost):hostuser:~ > touch test.txt (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt ERROR: module is read only rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8] rsync: connection unexpectedly closed (9 bytes received so far) [sender] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7] and (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt' (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt' ProoF
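
    For what it's worth, "ERROR: module is read only" is an rsync daemon message, which suggests the DSM update pointed the transfer at an rsyncd module on the NAS. If that is the case (an assumption), the module stanza in /etc/rsyncd.conf on the NAS would need to be writable; the module name, path and accounts below are examples only:

      # /etc/rsyncd.conf on the NAS
      [backups]
          path = /volume1/backups
          uid = hostuser
          gid = users
          read only = no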

    Read the article

  • Suggestions for splitting server roles amongst Hyper-V virtual servers / RAID6 or RAID10? / AppAssure

    - by Anon
    We have 2 Hyper-V hosts at present running 1 virtual server that was converted from a physical box running all roles. My plan is to split the roles over various virtual machines, upgrading to the latest software versions as I go, and use the backup server as a standby in case the main server fails. AppAssure backup software has a feature called Virtual Standby, so the VHD's can be ready to be fired up on the backup server if necessary. Off-site backups will be done via external USB drive for now. I'm just seeking some input/suggestions into how I'm planning to split the roles out amongst various virtual servers. Also, I'm curious how to setup the storage on the servers. We do not have any NAS's, SAN'S or any budget for this. What would the best RAID level be to use? I'm thinking either RAID6 (which is currently used) however I'm concerned about the write speeds, or RAID10 but again I'm worried that I can only lose 1 drive (from the same mirror) as opposed to any 2 with RAID6. I realise I have a hot swap for this, but what if a further drive fails during a rebuild? Is the write penalty of RAID6 worth the extra reliability over RAID10? Or will it be too slow with all the roles I am planning, therefore RAID10 is my only real option? The reason for the needed redundancy is I am the only technician and I'm not always on-site. Options I've considered: 1) 5 drives in RAID6 set, 200gb for host OS, rest for VM storage. 1 drive for hot swap - this is how it is currently setup 2) 4 drives in RAID10 set, 200gb for host OS, rest for VM storage. 2 drives for hot swap 3) 4 drives in RAID10 set for VM storage, 2 drives in RAID1 set for host OS. No drives for hot swap - While this is probably the best option with the amount of drives I have, I don't like the idea of having no hot swap 4) 3 drives in RAID6 set for VM storage, 2 drives in RAID1 set for host OS. 1 drive for hot swap All options give us enough storage capacity for our files, etc. We don't have any budget for extra drives or extra hot swap HD chassis for the servers. We have about 70 clients and about 150 users. MAIN SERVER Intel Xeon 5520 @ 2.27 GHz (2 processors) 16GB RAM 6 x 1TB Seagate Barracuda ES.2 Enterprise SATA drives Intel SRCSATAWB RAID controller Virtual machine workload using Hyper-V on Windows Server 2008 R2: DC01 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM DC02 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM Member Server - DHCP server, File server, Print server - 1GB RAM SCCM Member Server - 4GB RAM Third Party Software Member Server - A/V server, Ticketing software, etc - 4GB RAM Exchange 2007 - 4GB RAM - however we are probably migrating to a hosted solution, therefore freeing up resources BACKUP SERVER Intel Xeon E5410 @ 2.33GHz (2 processors) 16GB RAM 6 x 2TB WD RE4 SATA drives Intel SRCSASRB RAID controller Virtual machine workload using Hyper-V on Windows Server 2008 R2: AppAssure backup software - 8GB RAM

    Read the article

  • Is there a simple automatic backup system for Visual Studio projects?

    - by Jelly Amma
    Hello, I'm using Visual Studio 2008 Express and I would like Visual Studio (or perhaps an Add-in) to save my whole project to some sort of auto-incrementing archive or whatever would help me recover from disasters. I don't have much need for SVN or complex versioning systems. I'm just looking for something simple and lean. Any help would be much appreciated. Jenny PS : I looked into the built-in AutoRecover feature but it doesn't seem to save more than a few files.

    Read the article
