Search Results

Search found 46073 results on 1843 pages for 'rating system'.


  • How to configure 2 lan cards?

    - by Gurupal singh
    I have Ubuntu 14.04 installed on my system and use it as a server in my office, where 15 employees work. I have two LAN cards in the server: one takes input from the router, and the other connects the server to a switch, which in turn connects all my employees via LAN cables. Now, how can I configure both LAN cards? I want to block some sites and place some restrictions on the network. Please help me out; it's really a big problem for me. Thanks.

    Read the article

  • Can using higher rated battery damage device?

    - by Anwar Shah
    I need a battery for my Lenovo 3000 Y410 laptop. The installed battery has a voltage rating of 10.8 V, but the shop I've consulted only has a battery rated 11.1 V. Should I buy that battery? I mean, can using that battery damage my laptop? An answer with a good source from a hardware vendor would be very helpful. Please do not just post your opinion, because I do not see any difference between your opinion and mine. My opinion is also "it is bad", but I need a reliable source.

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation.
    A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme-performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra-fast InfiniBand switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes; a modified version of Solaris 11 in the ZFS Storage Appliance; a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells; another Linux derivative in the InfiniBand switches; etc. It has an InfiniBand data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex.
    The system is highly Engineered. But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with InfiniBand, myriad ways the components could be interconnected, myriad tunable settings, etc.
    But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer:
    Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query InfiniBand's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction.
    In several daily joint debugging sessions between the Solaris, InfiniBand, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases was just 10 days.
    Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams identified the root cause as a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.
    If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side followed by the final fix on the Solaris side highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • Passing javascript array of objects to WebService

    - by Yousef_Jadallah
    Hi folks. In this post I will illustrate how to pass an array of objects to a WebService and how to deal with it in your WebService. Suppose we have this javascript code:

    <script language="javascript" type="text/javascript">
    var people = new Array();

    function person(playerID, playerName, playerPPD) {
        this.PlayerID = playerID;
        this.PlayerName = playerName;
        this.PlayerPPD = parseFloat(playerPPD);
    }

    function saveSignup() {
        addSomeSampleInfo();
        WebService.SaveSignups(people, SucceededCallback);
    }

    function SucceededCallback(result, eventArgs) {
        var RsltElem = document.getElementById("divStatusMessage");
        RsltElem.innerHTML = result;
    }

    function OnError(error) {
        alert("Service Error: " + error.get_message());
    }

    function addSomeSampleInfo() {
        people = new Array();
        people[people.length++] = new person(123, "Person 1 Name", 10);
        people[people.length++] = new person(234, "Person 2 Name", 20);
        people[people.length++] = new person(345, "Person 3 Name", 10.5);
    }
    </script>

    people: the array that we want to send to the WebService.
    person: the function (constructor) that we use to create the objects in our array.
    SucceededCallback: the callback function invoked if the Web Service call succeeds.
    OnError: the error callback function, so any errors that occur when the Web Service is called will trigger this function.
    saveSignup: the function used to call the WebService method (SaveSignups); the first argument is the array we pass to the WebService and the second is the name of the callback function.
    Here is the body of the page:

    <body>
        <form id="form1" runat="server">
            <asp:ScriptManager ID="ScriptManager1" runat="server">
                <Services>
                    <asp:ServiceReference Path="WebService.asmx" />
                </Services>
            </asp:ScriptManager>
            <input type="button" id="btn1" onclick="saveSignup()" value="Click" />
            <div id="divStatusMessage">
            </div>
        </form>
    </body>

    The main thing is the ServiceReference and its path, "WebService.asmx"; this is the Web Service that we are using in this example. A web service will be used to receive the javascript array and handle it in our code:

    using System;
    using System.Web;
    using System.Web.Services;
    using System.Xml;
    using System.Web.Services.Protocols;
    using System.Web.Script.Services;
    using System.Data.SqlClient;
    using System.Collections.Generic;

    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [ScriptService]
    public class WebService : System.Web.Services.WebService
    {
        [WebMethod]
        public string SaveSignups(object[] values)
        {
            string strOutput = "";
            string PlayerID = "", PlayerName = "", PlayerPPD = "";
            foreach (object value in values)
            {
                // Each element of the array arrives as a dictionary of property name/value pairs.
                Dictionary<string, object> dicValues = (Dictionary<string, object>)value;
                PlayerID = dicValues["PlayerID"].ToString();
                PlayerName = dicValues["PlayerName"].ToString();
                PlayerPPD = dicValues["PlayerPPD"].ToString();
                strOutput += "PlayerID = " + PlayerID + ", PlayerName=" + PlayerName + ",PlayerPPD= " + PlayerPPD + "<br>";
            }
            return strOutput;
        }
    }

    First, I import the System.Collections.Generic namespace; we need it to use the Dictionary class. You can see in this code that the javascript objects are passed to an array of objects called values; we then take every separate object and cast it to Dictionary<string, object>. The Dictionary represents a collection of keys and values: in Dictionary<TKey, TValue>, TKey is the type of the keys in the dictionary and TValue is the type of the values.
    For more information about Dictionary, check this link: http://msdn.microsoft.com/en-us/library/xfhwa508(VS.80).aspx

    Now we can get the value of every element, because we have a mapping from a set of keys to a set of values. The keys in this example are PlayerID, PlayerName, and PlayerPPD, created from the original person object. Ultimately, this Web method returns the values as a string, but the main idea of this method is to show you how to deal with an array of objects, convert it to a Dictionary<string, object>, and get the values out of that Dictionary. Hope this helps,

    Read the article

  • WebCenter Customer Spotlight: Hitachi Data Systems

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Watch this Webcast to see a live demo of how HDS creates multilingual content for their 35+ regional websites.

    Solution Summary
    Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems, and to implement a centralized translation management system. HDS implemented a global web content management system based on Oracle WebCenter Content and integrated the Lingotek translation management system to manage their multilingual content. The implemented solution provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines.

    Company Overview
    Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. The company sells through direct and indirect channels in more than 170 countries and regions. Its customers include 50 percent of the Fortune 100 companies. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions.

    Business Challenges
    HDS has over 35 global websites, and the lack of global web capabilities led to inconsistent messaging, slower time to market, and a failure to address local language needs. There was extensive operational overhead due to manual and redundant processes. Translation efforts were superficial, inconsistent and wasteful, and the lack of translation automation tools discouraged localization. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems, and to implement a centralized translation management system.

    Solution Deployed
    HDS implemented a global web content management system based on Oracle WebCenter Content. The solution supports decentralized publishing for their 35+ global sites to address local market needs while ensuring editorial and brand review through embedded review processes. They integrated the Lingotek translation management system into Oracle WebCenter Content to manage their multilingual content.

    Business Results
    Provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines
    Enables end-to-end content lifecycle management across multiple languages
    Leverages translation memory for reuse and consistency
    Reduces time to market with a central repository of translated content

    Additional Information
    HDS Webcast
    Oracle WebCenter Content
    Lingotek website

    Read the article

  • Can't install Nuget or other extension to VS2012 on Win8

    - by VinnyG
    When I try to install any extension for Visual Studio Ultimate 2012 on my new installation of Windows 8, I get this exception:

    System.IO.FileNotFoundException: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
       at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
       at Microsoft.VisualStudio.Settings.ExternalSettingsManager.GetScopePaths(String applicationPath, String suffixOrName, String vsVersion, Boolean isLogged, Boolean isForIsolatedApplication)
       at Microsoft.VisualStudio.Settings.ExternalSettingsManager.CreateForApplication(String applicationPath)
       at VSIXInstaller.App.GetExtensionManager(SupportedVSSKU sku)
       at VSIXInstaller.App.GetExtensionManagerForApplicableSKU(SupportedVSSKU supportedSKU, IInstallableExtension installableExtension, List`1 applicableSKUs)
       at VSIXInstaller.App.InitializeInstall()
       at System.Threading.Tasks.Task.InnerInvoke()
       at System.Threading.Tasks.Task.Execute()

    I tried to repair VS, which did not work, and also tried to uninstall/reinstall and got the same problem. Does anybody have an idea?

    Read the article

  • Why ISO master (editor) does not read Windows images

    - by Jacek Blocki
    I have the following problem with the ISO Master software: I try to edit a Windows 7 ISO image:

    $ isomaster windows7.iso

    The file does open; unfortunately, all I get is a README with the message:

    This disc contains a "UDF" file system and requires an operating system that supports the ISO-13346 "UDF" file system specification.

    isomaster comes from the Ubuntu repository; I am using 12.04. The system has kernel support for UDF installed, and I can mount the above ISO (mount -o loop) and see its content read-only. Any idea how to fix it? Using a tool other than isomaster is also an option. Regards, Jacek

    Read the article

  • Staggered Isometric Map: Calculate map coordinates for point on screen

    - by Chris
    I know there are already a lot of resources about this, but I haven't found one that matches my coordinate system, and I'm having massive trouble adjusting any of those solutions to my needs. What I've learned is that the best way to do this is to use a transformation matrix. Implementing that is no problem, but I don't know in which way I have to transform the coordinate space. Here's an image that shows my coordinate system: How do I transform a point on screen to this coordinate system?
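    One common shortcut for staggered maps avoids the matrix entirely: a diamond tile is a circle under a metric that doubles dy, so the tile containing a screen point is simply the nearest tile centre under that metric. Below is a minimal C# sketch, assuming 64x32 tiles, rows spaced half a tile height apart, odd rows shifted right by half a tile width, and non-negative screen coordinates - all assumptions to adjust to your own coordinate system:

    using System;

    public static class StaggeredPicker
    {
        const int TileW = 64;
        const int TileH = 32;

        public static void ScreenToMap(int sx, int sy, out int col, out int row)
        {
            col = 0; row = 0;
            long best = long.MaxValue;
            int roughRow = sy / (TileH / 2);          // rows are TileH/2 apart

            // The containing diamond is within a couple of rows of the rough guess;
            // pick the candidate whose centre is nearest under the (dx, 2*dy) metric.
            for (int r = Math.Max(0, roughRow - 2); r <= roughRow + 2; r++)
            {
                int shift = (r % 2 != 0) ? TileW / 2 : 0;
                int c = (sx - shift) / TileW;         // nearest column in this row
                long dx = sx - (shift + c * TileW + TileW / 2);
                long dy = (sy - (r * (TileH / 2) + TileH / 2)) * 2L;
                long dist = dx * dx + dy * dy;
                if (dist < best) { best = dist; col = c; row = r; }
            }
        }
    }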

    Read the article

  • Task Manager does not show memory usage

    - by Robin
    I just noticed this yesterday. I selected different memory columns, and none of them worked; I've also tried showing processes from all users. I'm using Windows 7. It doesn't slow down my computer or do anything else; I just want to know why it happens and how to fix it. Could anyone help me with this? Thank you. I cannot post pix :( It looks like this - the Memory column only shows "K", without the actual number:

    Image Name     User Name    CPU    Memory (Private Working Set)    Description
    System         SYSTEM       01     K                               NT Kernel & System
    smss.exe       SYSTEM       00     K                               Windows Session Manager
    wininit.exe    SYSTEM       00     K                               Windows Start-Up Application

    It's pretty much the same as http://www.sevenforums.com/general-discussion/56891-my-task-manager-doesnt-show-ram-usage-each-program.html which is the only one I found on Google.

    Read the article

  • RPG Monster-Area, Spawn, Loot table Design

    - by daemonfire300
    I currently struggle with creating the database structure for my RPG. So far I have these tables:

    area (id)
    monster (id, area.id, monster.id, hp, attack, defense, name)
    item (id, some other values)
    loot (id = monster.id, item = item.id, chance)
    spawn (id = area.id, monster = monster.id, count)

    It is a browser-based game, like e.g. Castle Age. The player can move from area to area. If a player enters an area, the system spawns new monsters into the monster table, based on the area.id and using the spawn table data. If a player kills a monster, the system takes the monster.id, looks up the items via the loot table, and adds those items to the player's inventory. First, is this smart? Second, I need some kind of "monster_instance" table and "area_instance" table, since each player enters his very own "area" and does damage to his very own "monsters". Another approach would be adding the player.id to the monster table, so each monster spawned has its own "player", but I would still need to assign the monsters to an area, and I think this would overload the monster table if I put both the player.id and the area.id into it. What are your thoughts?

    Temporary solution:

    monster (id, attackDamage, defense, hp, exp, etc.)
    monster_instance (id, player.id, area_instance.id, hp, attackDamage, defense, monster.id, etc.)
    area (id, name, area.id access, restriction)
    area_instance (id, area.id, last_visited)
    spawn (id, area.id, monster.id)
    loot (id, monster.id, chance, amount, ?area.id?)

    An example system flow would be:

    Player enters area 1: the system creates an area_instance of type area.id = 1 and sets player.location to area.id = 1.
    If the player wants to battle monsters in the current area: the system fetches all spawn entries matching area.id == player.location and creates a new monster_instance for each spawn by fetching the corresponding monster base data from the monster table. If a monster is fetched more than once, it may be cached.
    If the player actually attacks a monster: the system updates the corresponding monster_instance; if the monster dies, the instance is removed after creating the loot.
    If the player leaves the area: area_instance.last_visited is set to NOW(); if the player doesn't return to the area within a certain amount of time, the area_instance, including all its monster_instances, is deleted.
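    For what it's worth, a minimal C# sketch of the loot step described above, with hypothetical names (LootEntry, LootRoller): when a monster_instance dies, each loot row is rolled independently against its chance.

    using System;
    using System.Collections.Generic;

    // Hypothetical in-memory mirror of one "loot" row: which item a monster
    // can drop, with what probability, and how many.
    public class LootEntry
    {
        public int ItemId;
        public double Chance;   // 0.0 .. 1.0
        public int Amount;
    }

    public static class LootRoller
    {
        static readonly Random Rng = new Random();

        // Rolls every loot row independently; returns the drops to add to the
        // player's inventory when a monster_instance dies.
        public static List<LootEntry> Roll(IEnumerable<LootEntry> lootTable)
        {
            var drops = new List<LootEntry>();
            foreach (var entry in lootTable)
            {
                if (Rng.NextDouble() < entry.Chance)
                    drops.Add(entry);
            }
            return drops;
        }
    }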

    Read the article

  • EPM Architecture: Reporting and Analysis

    - by Marc Schumacher
    Reporting and Analysis is the basis for all Oracle EPM reporting components. Through the Java-based Reporting and Analysis web application deployed on WebLogic, it enables users to browse through reports for all kinds of Oracle EPM reporting components. Typical users access the web application by browser through Oracle HTTP Server (OHS). The Reporting and Analysis web application talks to the Reporting and Analysis Agent using the CORBA protocol on various ports. All communication to the repository databases (EPM System Registry and Reporting and Analysis database) from the web and application layer is done using JDBC. As an additional data store, the Reporting and Analysis Agent uses the file system to lay down individual reports. While the reporting artifacts are stored on the file system, the folder structure and report-based security information is stored in the relational database. The file system can be either local or remote (e.g. network share, network file system). If an external user directory is used, the Reporting and Analysis services also communicate with this directory. The next post will cover WebAnalysis.

    Read the article

  • OData – The easiest service I can create: now with updates

    - by Jon Dalberg
    The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I’ll modify it to use a file instead of a list and I’ll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the “WriteAppend” access rule:

    public class NastyWords : DataService<NastyWordsDataSource>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }

    Next I placed a file, NastyWords.txt, in the “App_Data” folder and added a few *choice* words to start. This required one simple change to our NastyWordsDataSource.cs file:

    public NastyWordsDataSource()
    {
        UpdateFromSource();
    }

    private void UpdateFromSource()
    {
        var words = File.ReadAllLines(pathToFile);
        NastyWords = (from w in words
                      select new NastyWord { Word = w }).AsQueryable();
    }

    Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable, which comes with a boat-load of methods. We don’t need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement; all the others throw a NotImplementedException:

    public object CreateResource(string containerName, string fullTypeName)
    {
        var nastyWord = new NastyWord();
        pendingUpdates.Add(nastyWord);
        return nastyWord;
    }

    public object ResolveResource(object resource)
    {
        return resource;
    }

    public void SaveChanges()
    {
        var intersect = (from w in pendingUpdates
                         select w.Word).Intersect(from n in NastyWords
                                                  select n.Word);

        if (intersect.Count() > 0)
            throw new DataServiceException(500, "duplicate entry");

        var lines = from w in pendingUpdates
                    select w.Word;

        File.AppendAllLines(pathToFile, lines, Encoding.UTF8);

        pendingUpdates.Clear();

        UpdateFromSource();
    }

    public void SetValue(object targetResource, string propertyName, object propertyValue)
    {
        targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null);
    }

    I use a simple list to contain the pending updates and only commit them when the “SaveChanges” method is called. Here’s the order these methods are called in our service during an insert:

    CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list.
    SetValue – this is where the “Word” property of the NastyWord instance is set.
    SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list.
    ResolveResource – the newly created resource will be returned directly here, since we aren’t dealing with “handles” to objects but the actual objects themselves.

    Not too bad, eh? I didn’t find this documented anywhere, but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out. Here is some client code which would add a new nasty word:

    static void Main(string[] args)
    {
        var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc"));
        svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" });

        svc.SaveChanges();
    }

    Here’s all of the code so far to implement the service:

    using System;
    using System.Collections.Generic;
    using System.Data.Services;
    using System.Data.Services.Common;
    using System.Linq;
    using System.ServiceModel.Web;
    using System.Web;
    using System.IO;
    using System.Text;

    namespace ONasty
    {
        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

        public class NastyWordsDataSource : IUpdatable
        {
            private List<NastyWord> pendingUpdates = new List<NastyWord>();
            private string pathToFile = @"path to your\App_Data\NastyWords.txt";

            public NastyWordsDataSource()
            {
                UpdateFromSource();
            }

            private void UpdateFromSource()
            {
                var words = File.ReadAllLines(pathToFile);
                NastyWords = (from w in words
                              select new NastyWord { Word = w }).AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }

            public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded)
            {
                throw new NotImplementedException();
            }

            public void ClearChanges()
            {
                pendingUpdates.Clear();
            }

            public object CreateResource(string containerName, string fullTypeName)
            {
                var nastyWord = new NastyWord();
                pendingUpdates.Add(nastyWord);
                return nastyWord;
            }

            public void DeleteResource(object targetResource)
            {
                throw new NotImplementedException();
            }

            public object GetResource(IQueryable query, string fullTypeName)
            {
                throw new NotImplementedException();
            }

            public object GetValue(object targetResource, string propertyName)
            {
                throw new NotImplementedException();
            }

            public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved)
            {
                throw new NotImplementedException();
            }

            public object ResetResource(object resource)
            {
                throw new NotImplementedException();
            }

            public object ResolveResource(object resource)
            {
                return resource;
            }

            public void SaveChanges()
            {
                var intersect = (from w in pendingUpdates
                                 select w.Word).Intersect(from n in NastyWords
                                                          select n.Word);

                if (intersect.Count() > 0)
                    throw new DataServiceException(500, "duplicate entry");

                var lines = from w in pendingUpdates
                            select w.Word;

                File.AppendAllLines(pathToFile, lines, Encoding.UTF8);

                pendingUpdates.Clear();

                UpdateFromSource();
            }

            public void SetReference(object targetResource, string propertyName, object propertyValue)
            {
                throw new NotImplementedException();
            }

            public void SetValue(object targetResource, string propertyName, object propertyValue)
            {
                targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null);
            }
        }

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }
    }

    Next time we’ll allow removing nasty words. Enjoy!

    Read the article

  • Grub failure after cat removed USB drive

    - by user76270
    I had a perfectly working Ubuntu Server system with 12.04 Server installed and was setting up the mail system. While I was absent from my computer room, my cat (furry type) apparently got to play with a USB dongle on an extension cable and removed it. The system went berserk and now will not boot. I ran Ubuntu 12.04 from the install disk, and the directory structure and everything appear to still be on the drives. How do I recover the GRUB boot section without the system repartitioning and wiping everything else? Please help; I am a total newb when it comes to Linux errors and fixing them.

    Read the article

  • How to do end task similar to that in Windows?

    - by Rohit Bansal
    Sometimes Ubuntu freezes and I have no option other than to power off the system directly. Is there some remedy or shortcut like 'Ctrl + Alt + Del' in Windows? That would be very helpful. I need a way out, other than powering off the system directly, in times of gross failure. Shutting down this way always makes me fear crashing my system or losing some data, which is unacceptable.

    Read the article

  • Updating a Progress Bar within the Status Bar [migrated]

    - by Muhnamana
    I'm trying to figure out how to incorporate a progress bar within the status bar to show how much processing has completed. Below is my example of updating the progress bar (not sure if this is the correct way or not):

    Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
        ToolStripProgressBar1.Value = ToolStripProgressBar1.Value + 2
        If ToolStripProgressBar1.Value = 100 Then
            ToolStripProgressBar1.Value = 0
            ToolStripProgressBar1.Value = ToolStripProgressBar1.Value + 2
            Timer1.Enabled = True
        End If
    End Sub

    Here is the code within the button:

    Private Sub Button1_Click(sender As System.Object, e As System.EventArgs) Handles Button1Run.Click
        ToolStripStatusLabel1.Text = "Processing..."
        Timer1.Enabled = True
        'more code to be inserted here
    End Sub

    What I'm not sure about is how to update the progress bar based on the amount of code you have, and, once the processing is complete, how to update ToolStripStatusLabel1 to show "Processing...Complete!".
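    For what it's worth, the usual pattern is to let the work itself report progress instead of guessing with a timer. A minimal sketch in C# (the VB.NET translation is mechanical), assuming a BackgroundWorker named backgroundWorker1 has been dropped on the form with its events wired up; DoOneUnitOfWork is a placeholder for your real work:

    private void Button1Run_Click(object sender, EventArgs e)
    {
        ToolStripStatusLabel1.Text = "Processing...";
        backgroundWorker1.WorkerReportsProgress = true;
        backgroundWorker1.RunWorkerAsync();
    }

    private void backgroundWorker1_DoWork(object sender, System.ComponentModel.DoWorkEventArgs e)
    {
        const int steps = 50;                          // however many units of work you have
        for (int i = 0; i < steps; i++)
        {
            DoOneUnitOfWork();                         // hypothetical: your actual processing
            backgroundWorker1.ReportProgress((i + 1) * 100 / steps);
        }
    }

    // Raised on the UI thread, so touching toolstrip items here is safe.
    private void backgroundWorker1_ProgressChanged(object sender, System.ComponentModel.ProgressChangedEventArgs e)
    {
        ToolStripProgressBar1.Value = e.ProgressPercentage;
    }

    private void backgroundWorker1_RunWorkerCompleted(object sender, System.ComponentModel.RunWorkerCompletedEventArgs e)
    {
        ToolStripStatusLabel1.Text = "Processing...Complete!";
    }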

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I’ll show you how you can play back IIS logs in Visual Studio to automatically generate web performance tests. You can also download the sample solution I am demoing in the blog post.

    Introduction
    Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production, you could mine the information available in the IIS logs to analyse the dense zones (most used pages) and performance test those pages, rather than wasting time testing and tuning the least used pages in your application.

    What are IIS Logs
    To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits, as well as dates, times and queries. If you are using IIS 7 or above, you will find the log files in the following directory: C:\inetpub\Logs\

    Walkthrough
    1. Download and install Log Parser from the Microsoft Download Centre. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven’t used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details…
    2. Create a new test project in Visual Studio. Let’s call it IISLogsToWebPerfTestDemo.
    3. Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library; name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project.
    4. Under the IISLogsToWebPerfTestEngine project, add a reference to Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll – and to Log Parser, also called MSUtil – c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll
    5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using Log Parser.
    using System;
    using System.Collections.Generic;
    using System.Text;
    using MSUtil;
    using LogQuery = MSUtil.LogQueryClassClass;
    using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
    using LogRecordSet = MSUtil.ILogRecordset;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using System.Diagnostics;

    namespace IisLogsToWebPerfTestEngine
    {
        // By making use of Log Parser it is possible to query the iis log using select queries
        public class IISLogReader
        {
            private string _iisLogPath;

            public IISLogReader(string iisLogPath)
            {
                _iisLogPath = iisLogPath;
            }

            public IEnumerable<WebTestRequest> GetRequests()
            {
                LogQuery logQuery = new LogQuery();
                IISLogInputFormat iisInputFormat = new IISLogInputFormat();

                // currently these columns give us sufficient information to construct the web test requests
                string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath;
                LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);

                // Apply a bit of transformation
                while (!recordSet.atEnd())
                {
                    ILogRecord record = recordSet.getRecord();
                    if (record.getValueEx("cs-method").ToString() == "GET")
                    {
                        string server = record.getValueEx("s-ip").ToString();
                        string path = record.getValueEx("cs-uri-stem").ToString();
                        string querystring = record.getValueEx("cs-uri-query").ToString();
                        StringBuilder urlBuilder = new StringBuilder();
                        urlBuilder.Append("http://");
                        urlBuilder.Append(server);
                        urlBuilder.Append(path);

                        if (!String.IsNullOrEmpty(querystring))
                        {
                            urlBuilder.Append("?");
                            urlBuilder.Append(querystring);
                        }

                        // You could make substitutions by introducing parameterized web tests.
                        WebTestRequest request = new WebTestRequest(urlBuilder.ToString());
                        Debug.WriteLine(request.UrlWithQueryString);
                        yield return request;
                    }
                    recordSet.moveNext();
                }
                Console.WriteLine(" That's it! Closing the reader");
                recordSet.close();
            }
        }
    }

    6. Connect the dots by adding the project reference ‘IisLogsToWebPerfTestEngine’ to ‘IisLogsToWebPerfTest’. Right click the ‘IisLogsToWebPerfTest’ project and add a new class, ‘WebTest1Coded.cs’. WebTest1Coded inherits from the WebTest class. By overriding the GetRequestEnumerator method, we can hand the log files to the IISLogReader class, which uses Log Parser to query the log file, extract the web requests, and yield back the web test requests for playback when the test is run.

    namespace IisLogsToWebPerfTest
    {
        using System;
        using System.Collections.Generic;
        using System.Text;
        using Microsoft.VisualStudio.TestTools.WebTesting;
        using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
        using IisLogsToWebPerfTestEngine;

        // This class is a coded web performance test implementation that simply passes
        // the path of the iis logs to the IISLogReader class, which does the heavy
        // lifting of reading the contents of the log file and converting them to tests.
        // You could have multiple such classes that inherit from WebTest, implement
        // the GetRequestEnumerator method, and pass different log files for different tests.
        public class WebTest1Coded : WebTest
        {
            public WebTest1Coded()
            {
                this.PreAuthenticate = true;
            }

            public override IEnumerator<WebTestRequest> GetRequestEnumerator()
            {
                // substitute the highlighted path with the path of the iis log file
                IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log");
                foreach (WebTestRequest request in reader.GetRequests())
                {
                    yield return request;
                }
            }
        }
    }

    7. It’s time to fire off the test and see the IIS log play back as a web performance test.
    From the Test menu, choose Test View Window; you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution).
    8. Optionally, you can create a Load Test keeping ‘WebTest1Coded’ as the base test.

    Conclusion
    You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or even how to record the test. If you haven’t already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files to a database as part of an end-of-day batch, seeing the usage trends by user over the longer term, and having your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don’t forget to share… Keep RocKiNg!

    Read the article

  • Will these type of 403 errors affect my ranking?

    - by Gkhan14
    Let's say I have a directory that returns a 403 Forbidden error for all of the content in it; however, a few of the images in the subdirectories of the main directory do NOT return a 403 Forbidden error. Will this fact affect my ranking? For example:

    test.com/system/ (HAS 403 ERROR FOR ALL FILES)
    test.com/system/pie/ (HAS 403 ERROR FOR ALL FILES)
    test.com/system/pie/image.png (DOES NOT HAVE A 403 ERROR, AND THIS IMAGE IS EMBEDDED ON A PAGE ON test.com, e.g. test.com/pie/)

    This sort of pattern repeats for about 10 different images. This directory is like a secret "system"; however, all of the content on the main site (test.com) is still accessible to everyone from the public.

    Read the article

  • C#/.NET Little Wonders: Interlocked Read() and Exchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Last time we discussed the Interlocked class and its Add(), Increment(), and Decrement() methods, which are all useful for updating a value atomically by adding (or subtracting). However, this begs the question of how we set and read those values atomically as well.

    Read() – Read a value atomically

    Let’s begin by examining the following code:

    public class Incrementor
    {
        private long _value = 0;

        public long Value { get { return _value; } }

        public void Increment()
        {
            Interlocked.Increment(ref _value);
        }
    }

    It uses an interlocked increment, as we discussed in my previous post (here), so we know that the increment will be thread-safe. But, to realize what’s potentially wrong, we have to know a bit about how atomic reads are in 32 bit and 64 bit .NET environments. When you are dealing with an item smaller than or equal to the system word size (such as an int on a 32 bit system or a long on a 64 bit system), the read is generally atomic, because it can grab all of the bits needed at once. However, when dealing with something larger than the system word size (reading a long on a 32 bit system, for example), it cannot grab the whole value at once, which can lead to some problems since this read isn’t atomic.

    For example, this means that on a 32 bit system we may read one half of the long before another thread increments the value, and the other half of it after the increment. To protect us from reading an invalid value in this manner, we can do an Interlocked.Read() to force the read to be atomic (of course, you’d want to make sure any writes or increments are atomic also):

    public class Incrementor
    {
        private long _value = 0;

        public long Value
        {
            get { return Interlocked.Read(ref _value); }
        }

        public void Increment()
        {
            Interlocked.Increment(ref _value);
        }
    }

    Now we are guaranteed that we will read the 64 bit value atomically on a 32 bit system, thus ensuring our thread safety (assuming all other reads, writes, increments, etc. are likewise protected). Note that as stated before, and according to the MSDN (here), it isn’t strictly necessary to use Interlocked.Read() for reading 64 bit values on 64 bit systems, but for those still working in 32 bit environments, it comes in handy when dealing with long atomically.

    Exchange() – Exchanges two values atomically

    Exchange() lets us store a new value in the given location (the ref parameter) and return the old value as a result. So just as Read() allows us to read atomically, one use of Exchange() is to write values atomically. For example, if we wanted to add a Reset() method to our Incrementor, we could do something like this:

    public void Reset()
    {
        _value = 0;
    }

    But the assignment wouldn’t be atomic on 32 bit systems, since the word size is 32 bits and the variable is a long (64 bits). Thus our assignment could have only set half the value when a threaded read or increment happens, which would put us in a bad state. So instead, we could write Reset() like this:

    public void Reset()
    {
        Interlocked.Exchange(ref _value, 0);
    }

    And we’d be safe again on a 32 bit system. But this isn’t the only reason Exchange() is valuable. The key comes in realizing that Exchange() doesn’t just set a new value, it returns the old as well in an atomic step. Hence the name “exchange”: you are swapping the value to set with the stored value.

    So why would we want to do this? Well, anytime you want to set a value and take action based on the previous value. An example of this might be a scheme where you have several tasks, and every so often, each of the tasks may nominate themselves to do some administrative chore. Perhaps you don’t want to make this thread dedicated for whatever reason, but want to be robust enough to let any of the threads that isn’t currently occupied nominate itself for the job. An easy and lightweight way to do this would be to have a long representing whether someone has acquired the “election” or not. So a 0 would indicate no one has been elected and 1 would indicate someone has been elected.

    We could then base our nomination strategy as follows: every so often, a thread will attempt an Interlocked.Exchange() on the long with a value of 1. The first thread to do so will set it to a 1 and get back the old value of 0. We can use this to show that they were the first to nominate and be chosen, and are thus “in charge”. Anyone who nominates after that will attempt the same Exchange() but will get back a value of 1, which indicates that someone already set it to a 1 before them; thus they are not elected.

    Then, the only other step we need take is to remember to release the election flag once the elected thread accomplishes its task, which we’d do by setting the value back to 0. In this way, the next thread to nominate with Exchange() will get back the 0, letting them know they are the new elected nominee. Such code might look like this:

    public class Nominator
    {
        private long _nomination = 0;

        public bool Elect()
        {
            return Interlocked.Exchange(ref _nomination, 1) == 0;
        }

        public bool Release()
        {
            return Interlocked.Exchange(ref _nomination, 0) == 1;
        }
    }

    There’s many ways to do this, of course, but you get the idea. Running 5 threads doing some “sleep” work might look like this:

    var nominator = new Nominator();
    var random = new Random();
    Parallel.For(0, 5, i =>
    {
        for (int j = 0; j < _iterations; ++j)
        {
            if (nominator.Elect())
            {
                // elected
                Console.WriteLine("Elected nominee " + i);
                Thread.Sleep(random.Next(100, 5000));
                nominator.Release();
            }
            else
            {
                // not elected
                Console.WriteLine("Did not elect nominee " + i);
            }
            // sleep before check again
            Thread.Sleep(1000);
        }
    });

    And would spit out results like:

    Elected nominee 0
    Did not elect nominee 2
    Did not elect nominee 1
    Did not elect nominee 4
    Did not elect nominee 3
    Did not elect nominee 3
    Did not elect nominee 1
    Did not elect nominee 2
    Did not elect nominee 4
    Elected nominee 3
    Did not elect nominee 2
    Did not elect nominee 1
    Did not elect nominee 4
    Elected nominee 0
    Did not elect nominee 2
    Did not elect nominee 4
    ...

    Another nice thing about Interlocked.Exchange() is that it can be used to thread-safely set pretty much anything 64 bits or less in size, including references, pointers (in unsafe mode), floats, doubles, etc.

    Summary

    So, now we’ve seen two more things we can do with Interlocked: reading and exchanging a value atomically. Read() and Exchange() are especially valuable for reading/writing 64 bit values atomically in a 32 bit system. Exchange() has value even beyond simple atomic writes, since it reads and sets the value atomically, which allows you to do lightweight nomination systems. There’s still a few more goodies in the Interlocked class which we’ll explore next time! Technorati Tags: C#,CSharp,.NET,Little Wonders,Interlocked

    Read the article

  • Frequent GUI pauses in Ubuntu 13.04 / Unity / Intel HD4000

    - by Simon
    I'm experiencing very frequent (and regular) GUI pauses on my system. Every 30 seconds (pretty much exactly) the GUI will freeze for maybe .25 to .5 seconds. The mouse stops moving, keys stop echoing and a stopwatch timer briefly pauses. I'm using the Intel graphics driver available from: https://download.01.org/gfx/ubuntu/13.04/main

    I've looked in a few places and tried a few things for a solution:

    I've checked cron and anacron for scheduled processes.
    I've disabled background processes (e.g. mysql, postgres, apache) - not that these were doing anything anyway.
    I've checked the following posts and tried the suggestions there:
    Unity GUI pauses/freezes for less than a few seconds
    How to go about troubleshooting frequent system pauses
    I've watched the system using top and System Monitor, and there are no spikes (or even blips) of CPU usage when the pauses occur.
    There are no obvious error messages in dmesg or syslog.
    There is loads of free RAM (8GB+) and no swap usage.

    If it helps, it's a ZooStorm i5 laptop with an HD4000 GPU, 16GB RAM and an SSD. Any help / suggestions would be very gratefully received.

    Read the article

  • Take care to unhook Anonymous Delegates

    - by David Vallens
    Anonymous delegates are great; they eliminate the need for lots of small classes that just pass values around. However, care needs to be taken when using them, as they are not automatically unhooked when the function you created them in returns. In fact, after it returns, there is no way to unhook them. Consider the following code.

    using System;
    using System.Diagnostics;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                SimpleEventSource t = new SimpleEventSource();
                t.FireEvent();
                FunctionWithAnonymousDelegate(t);
                t.FireEvent();
            }

            private static void FunctionWithAnonymousDelegate(SimpleEventSource t)
            {
                t.MyEvent += delegate(object sender, EventArgs args)
                {
                    Debug.WriteLine("Anonymous delegate called");
                };
                t.FireEvent();
            }
        }

        public class SimpleEventSource
        {
            public event EventHandler MyEvent;

            public void FireEvent()
            {
                if (MyEvent == null)
                {
                    Debug.WriteLine("Attempting to fire event - but no ones listening");
                }
                else
                {
                    Debug.WriteLine("Firing event");
                    MyEvent(this, EventArgs.Empty);
                }
            }
        }
    }

    If you expected the anonymous delegate to die with the function that created it, then you would expect the output:

    Attempting to fire event - but no ones listening
    Firing event
    Anonymous delegate called
    Attempting to fire event - but no ones listening

    However, what you actually get is:

    Attempting to fire event - but no ones listening
    Firing event
    Anonymous delegate called
    Firing event
    Anonymous delegate called

    In my example the issue is just slowing things down, but if your delegate modifies objects, then you could end up with difficult-to-diagnose bugs. A solution to this problem is to unhook the delegate within the function:

    EventHandler myDelegate = delegate { Console.WriteLine("I did it!"); };
    MyEvent += myDelegate;
    // .... later
    MyEvent -= myDelegate;

    Read the article

  • Project Management Tool for developers and sysadmins: shared or separate?

    - by David
    Should a team of system administrators who are on a software development project share a project management tool with the developers, or use their own separate one? We use Trac, and I see the benefit in sharing, since inter-team tasks can be maintained in a single system where there may be cross-over or misfiled bugs (e.g. an apparent bug which turns out to be a server configuration issue, or a development cycle which needs a server to be configured before it can start). However, sharing could be difficult, since many system administration tasks don't coincide with a single development milestone, if at all. So should a system administration team use a separate PM tool or share the same one with the developers? If they should share, then how?

    Read the article

  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint, for the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with a code-first approach, and an ASP.NET Web API client in C# that is about to be refactored. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:

    Shared data access dll: The first idea was to separate the data access layer into a separate project and reference it from each of the systems. The context would be the same as long as the dll is up to date in each system, and this way both solutions would be able to run migrations. The main problem is that it is much more complicated to update the Web API system than the client (with its ClickOnce update solution), and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries.

    Database first on Web API: The second idea was just to use the database-first approach on the Web API side. But it seems that all annotations will be lost with each model update.

    Other solutions involving stored procedures have been discarded because of missing OData support and maintainability. Does anyone run into the same conflicts or have any advice on how such a problem can be solved?
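    For illustration, a minimal C# sketch of the first option (shared data access dll), with hypothetical names (Customer, AppDbContext, connection string "AppDb"): both the desktop client and the Web API reference this one assembly, and Code First migrations live only in this project, so there is a single source of truth for the model.

    using System.Data.Entity;   // Entity Framework (code first)

    namespace Shared.DataAccess
    {
        // Hypothetical entity; the real model would carry the shared domain types.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // The one context both applications use; enable and run migrations
        // from this project only, so neither app can drift from the other.
        public class AppDbContext : DbContext
        {
            public AppDbContext() : base("name=AppDb") { }

            public DbSet<Customer> Customers { get; set; }
        }
    }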

    Read the article
