Search Results

Search found 21028 results on 842 pages for 'single player'.

  • Using extension methods to decrease the surface area of a C# interface

    - by brian_ritchie
    An interface defines a contract to be implemented by one or more classes. One of the keys to a well-designed interface is a very specific range of functionality: the profile of the interface should be limited to a single purpose and should have the minimum methods required to implement that functionality. Keeping the interface tight discourages implementers from getting lazy and implementing it improperly. I've seen too many overly broad interfaces that aren't fully implemented by developers; instead, they just throw a NotImplementedException for the methods they didn't implement. One way to help with this issue is to use extension methods to move overloaded method definitions outside of the interface. Consider the following example:

        public interface IFileTransfer
        {
            void SendFile(Stream stream, Uri destination);
        }

        public static class IFileTransferExtension
        {
            public static void SendFile(this IFileTransfer transfer,
                string fileName, Uri destination)
            {
                using (var fs = File.OpenRead(fileName))
                {
                    transfer.SendFile(fs, destination);
                }
            }
        }

        public static class TestIFileTransfer
        {
            static void Main()
            {
                var filename = "upload.dat";
                IFileTransfer transfer = new FTPFileTransfer("user", "pass");
                transfer.SendFile(filename, new Uri("ftp://ftp.test.com"));
            }
        }

    In this example, you may have a number of overloads that use different mechanisms for specifying the source file. The great part is that you don't need to implement these methods on each of your derived classes. This gives you a better interface and better code reuse.
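    As a sketch of what those additional overloads could look like (the byte-array version below is hypothetical and not part of the original post), each one simply wraps its input in a Stream and forwards to the single method the interface actually requires:

        public static class IFileTransferByteArrayExtension
        {
            // Hypothetical overload: send an in-memory buffer by wrapping it in a
            // MemoryStream and forwarding to the one Stream-based interface method.
            public static void SendFile(this IFileTransfer transfer,
                byte[] contents, Uri destination)
            {
                using (var ms = new MemoryStream(contents))
                {
                    transfer.SendFile(ms, destination);
                }
            }
        }

    Because the overload lives in an extension class, every implementation of IFileTransfer picks it up for free.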

    Read the article

  • ODEE Green Field (Windows) Part 2 - WebLogic

    - by AndyL-Oracle
    Welcome back to the next installment on how to install Oracle Documaker Enterprise Edition onto a green field environment! In my previous post, I went over some basic introductory information and we installed the Oracle database. Hopefully you've completed that step successfully, and we're ready to move on - so let's dive in! For this installment, we'll be installing WebLogic 10.3.6, which is a prerequisite for ODEE 12.3 and 12.2. Prior to installing the WebLogic application server, verify that you have met the software prerequisites. Review the documentation - specifically, you need to make sure that you have the appropriate JDK installed. There are advisories if you are using JDK 1.7; these instructions assume you are using JDK 1.6, which is available here. The first order of business is to unzip the installation package into a directory location. This should produce a single file, wls1036_generic.jar. Navigate to this file and execute it by double-clicking it, which should launch the installer. Depending on your User Account Control rights, you may need to approve running the setup application. Once the installer application opens, click Next. Select your Middleware Home - this should be within your ORACLE_HOME, and the default is probably fine - then click Next. Uncheck the Email option. Click Yes, then Next, then Yes, then Yes, and Yes again (yes, it's quite circular). Check that you wish to remain uninformed and click Continue. Click Custom and Next. Uncheck Evaluation Database and Coherence, then click Next. Select the appropriate JDK - this should be a 64-bit JDK if you're running a 64-bit OS, and you may need to browse to locate the appropriate JAVA_HOME location. Check the JDK and click Next. Verify the installation directory and click Next. Click Next and allow the installation to progress. Uncheck Run Quickstart and click Done. And that's it! It's all quite painless - so let's proceed on to set up SOA Suite, shall we?
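    If you would rather script this step than click through the wizard, the 10.3.x generic installer also supports a silent mode driven by an XML response file. The sketch below is an untested outline; the flag and data-value names come from the WebLogic silent-installation documentation as best recalled, and the paths are placeholders you should verify against your own environment.

        java -jar wls1036_generic.jar -mode=silent -silent_xml=D:\install\silent.xml -log=D:\install\wls_install.log

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- silent.xml: placeholder values, adjust for your environment -->
        <bea-installer>
          <input-fields>
            <data-value name="BEAHOME" value="D:\Oracle\Middleware" />
            <data-value name="WLS_INSTALL_DIR" value="D:\Oracle\Middleware\wlserver_10.3" />
            <data-value name="LOCAL_JVMS" value="D:\Java\jdk1.6.0_45" />
          </input-fields>
        </bea-installer>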

    Read the article

  • IPv6, isn't it just a few extra bits?

    - by rclewis
    It's always an interesting task to try and explain what you do to family and friends. I have described IPv6 as the "Next Generation Internet" or "Second Internet", but the hollow expressions on my kids' faces scream for the instant relief of the latest video game. Never one to give up easily, I have formulated a new example - the Post Office... Similar to the Post Office, the Internet delivers mail and packages based on addresses. As the number of residences, businesses, and delivery locations increased, the 5-digit ZIP Code (Washington, DC 20005) was expanded to ZIP+4, allowing for more precise delivery points (Postmaster General, Washington, DC 20260-3100). Ah, if only computers were as simple. IPv6 isn't an add-on or expansion of the existing IPv4 addressing; it is a new addressing model which will allow the Internet to grow from a single computer in the basement of a university or your parents' kitchen table to support the multitude of smartphones, smart TVs, tablets, DVRs, and disc players, all clamoring to connect for information. Unfortunately, there are only a finite number of IPv4 public addresses left, and those are being consumed at an ever-increasing rate. Few people could have predicted the explosive growth of the Internet or the shortage of IPv4 addresses we now face - but there is a "Plan B", and that is the vastly larger address space of IPv6. Many in the industry have labeled this a "business continuity" problem, when in fact most companies will be able to continue conducting business once they run out of existing IPv4 addresses. The problem is really a customer continuity problem: how will businesses communicate with existing customers and reach new customers online whose only option is to adopt IPv6 once IPv4 is depleted? Perhaps a first step is publishing a blog that is also accessible via IPv6 - it's just a few extra bits. Join us for the Oracle OpenWorld 2012 session: Navigating IPv6 @ Oracle, Thursday, Oct. 4th, 2:15PM - 3:15PM, Palace Hotel - Concert. Learn more about IPv6 Technologies at Oracle

    Read the article

  • openGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here, I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised: 1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank. 2) Next, I write 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect this to produce 8-px-wide bars with 24-px-wide spaces between them. Instead, it produces a white 1-px-wide bar. 3) 0x55555555 is alternating ones and zeros (01010101...), so writing this value ought to create 1-px-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color. 4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation. I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, supports this conclusion. Questions: 1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests? 2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am, however, forcing a 3.0 context.) 3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
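    On question 1, the pixel-store unpack state is worth ruling out first. A minimal, untested sketch of the settings that change how GL_BITMAP data is interpreted (assuming a compatibility-profile context, since GL_BITMAP and GL_COLOR_INDEX were removed from later core profiles) might look like this:

        /* Sketch only: unpack state that affects how GL_BITMAP data is read. */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        /* rows may start on any byte, not 4-byte boundaries */
        glPixelStorei(GL_UNPACK_LSB_FIRST, GL_FALSE); /* which end of each byte supplies the first pixel */
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);       /* take the row length from the width argument */

    None of these will help if the driver has simply dropped GL_BITMAP support, but they are cheap to check before falling back to unpacking the bitmap in a shader.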

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow in size by about 5000 rows an hour. For a key that I would be querying by, the result set will grow by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try and implement memcache to keep database activity low for reads. If I run a query and create a cache result for each page of 50 results, that would work until a new entry is added. At that time, the page of latest results gets the new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cache result. It seems like a poor design. I could build the cache pages backwards; then for each page requested I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely accepted way? What's the best method of doing this? EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity for invalidation. Given the fact that I have about 5000 updates before a query on a key should need to be invalidated, it seems that the database query cache would not be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not against a single table with TOP N. One version has joins to several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? Is the best advice really to just allow a website query to go through all the layers and hit the database on every request?
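    One pattern that sidesteps the cascade (a sketch only, with a plain dict standing in for memcached and hypothetical function names) is to version the cache keys per database key: an insert just bumps a small version counter, so stale pages are never rewritten, they simply stop being referenced and age out.

        # Sketch: versioned cache keys; `cache` stands in for a memcached client.
        cache = {}

        def page_key(db_key, page, page_size=50):
            version = cache.get(("version", db_key), 0)
            return ("rows", db_key, version, page, page_size)

        def get_latest_page(db_key, page, fetch_from_db):
            key = page_key(db_key, page)
            rows = cache.get(key)
            if rows is None:
                rows = fetch_from_db(db_key, page)  # only a cache miss touches the database
                cache[key] = rows
            return rows

        def on_insert(db_key):
            # Bump the version for this key; previously cached pages are left to expire.
            cache[("version", db_key)] = cache.get(("version", db_key), 0) + 1

    The same idea works one level up for the rendered HTML: key the fragment on (db_key, version, page) and the insert path only has to touch the counter.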

    Read the article

  • Translating multiple objects in GUI based on average position?

    - by user1423893
    I use this method to move a single object in 3D space; it accounts for a local offset based on where the cursor ray hits the widget and the center of the widget.

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All:
                goalPosition = position;
                break;
            case AxisSelected.None:
                break;
            case AxisSelected.X:
                goalPosition.X = position.X;
                break;
            case AxisSelected.Y:
                goalPosition.Y = position.Y;
                break;
            case AxisSelected.Z:
                goalPosition.Z = position.Z;
                break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;
        objectSelected.Position = p;

    I would like to move multiple objects based on the same principle, using a widget which is located at the average position of all the objects currently selected. I thought that I would have to translate each object based on its offset from the average point and then include the local offset.

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All:
                goalPosition = position;
                break;
            case AxisSelected.None:
                break;
            case AxisSelected.X:
                goalPosition.X = position.X;
                break;
            case AxisSelected.Y:
                goalPosition.Y = position.Y;
                break;
            case AxisSelected.Z:
                goalPosition.Z = position.Z;
                break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;

        int numSelectedObjects = objectSelectedList.Count;
        for (int i = 0; i < numSelectedObjects; ++i)
        {
            objectSelectedList[i].Position = (objectSelectedList[i].Position - translationWidget.Position) + p;
        }

    This doesn't work - the objects start shaking - which I think is because I haven't accounted for the new offset correctly. Where have I gone wrong?
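    One sketch of a fix (untested, and grabOffsets is a hypothetical List<Vector3> added for illustration): capture each selected object's offset from the widget once, when the drag starts, and then reposition everything from the widget's new position each frame. Deriving positions from the objects' current, already-moved positions feeds each frame's error back into the next one, which is a common cause of the shaking.

        // On drag start: record each object's offset from the widget once.
        grabOffsets.Clear();
        for (int i = 0; i < objectSelectedList.Count; ++i)
        {
            grabOffsets.Add(objectSelectedList[i].Position - translationWidget.Position);
        }

        // Each frame while dragging: place every object from the widget's new
        // position plus its fixed grab-time offset.
        Vector3 widgetPosition = goalPosition - translationWidget.LocalOffset;
        for (int i = 0; i < objectSelectedList.Count; ++i)
        {
            objectSelectedList[i].Position = widgetPosition + grabOffsets[i];
        }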

    Read the article

  • Efficient way to find unique elements in a vector compared against multiple vectors

    - by SyncMaster
    I am trying to find the number of unique elements in a vector compared against multiple vectors, using C++. Suppose I have:

        v1: 5, 8, 13, 16, 20
        v2: 2, 4, 6, 8
        v3: 20
        v4: 1, 2, 3, 4, 5, 6, 7
        v5: 1, 3, 5, 7, 11, 13, 15

    The number of unique elements in v1 is 1 (i.e. the number 16). I tried two approaches. 1) Added vectors v2, v3, v4 and v5 into a vector of vectors; for each element in v1, checked if the element is present in any of the other vectors. 2) Combined all the vectors v2, v3, v4 and v5 using merge sort into a single sorted vector and compared it against v1 to find the unique elements. Note: sample_vector = v1 and all_vectors_merged contains v2, v3, v4, v5.

        // Method 1
        unsigned int compute_unique_elements_1(vector<unsigned int> sample_vector,
                                               vector<vector<unsigned int> > all_vectors_merged)
        {
            unsigned int duplicate = 0;
            for (unsigned int i = 0; i < sample_vector.size(); i++)
            {
                for (unsigned int j = 0; j < all_vectors_merged.size(); j++)
                {
                    if (std::find(all_vectors_merged.at(j).begin(), all_vectors_merged.at(j).end(),
                                  sample_vector.at(i)) != all_vectors_merged.at(j).end())
                    {
                        duplicate++;
                    }
                }
            }
            return sample_vector.size() - duplicate;
        }

        // Method 2
        unsigned int compute_unique_elements_2(vector<unsigned int> sample_vector,
                                               vector<unsigned int> all_vectors_merged)
        {
            unsigned int unique = 0;
            unsigned int i = 0, j = 0;
            while (i < sample_vector.size() && j < all_vectors_merged.size())
            {
                if (sample_vector.at(i) > all_vectors_merged.at(j))
                {
                    j++;
                }
                else if (sample_vector.at(i) < all_vectors_merged.at(j))
                {
                    i++;
                    unique++;
                }
                else
                {
                    i++;
                    j++;
                }
            }
            if (i < sample_vector.size())
            {
                unique += sample_vector.size() - i;
            }
            return unique;
        }

    Of these two techniques, I see that Method 2 gives faster results. 1) Method 1: is there a more efficient way to find the elements than running std::find over all the vectors for every element in v1? 2) Method 2: there is extra overhead in merging vectors v2, v3, v4, v5 and sorting them. How can I do this in a better way?
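    For what it's worth, one more approach (an untested C++11 sketch, not from the original question) is to pour the other vectors into a hash set once and then count the misses from v1; this is roughly linear in the total number of elements and needs no sorting:

        #include <unordered_set>
        #include <vector>

        // Sketch: count elements of sample_vector that appear in none of the other vectors.
        unsigned int compute_unique_elements_3(const std::vector<unsigned int>& sample_vector,
                                               const std::vector<std::vector<unsigned int> >& others)
        {
            std::unordered_set<unsigned int> seen;
            for (const std::vector<unsigned int>& v : others)
                seen.insert(v.begin(), v.end());      // build the lookup table once

            unsigned int unique = 0;
            for (unsigned int x : sample_vector)
                if (seen.count(x) == 0)               // average O(1) membership test
                    ++unique;
            return unique;
        }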

    Read the article

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification? Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose? My preference is for the 8k (8192 bytes) block size. It is a good compromise that is not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching. I prefer to have more, individual "working units" in my database. For an instance with 4gb of buffer cache, an 8k block will provide 524,288 blocks of cache. The following SQL*Plus script returns the average, median, min, and max rows per block.

        column "AVG(CNT)" format 999.99
        set verify off
        select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
        from (
          select dbms_rowid.ROWID_RELATIVE_FNO(rowid)
               , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
               , count(*) cnt
          from &tab
          group by dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                 , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
        )

    Running this for the TASK table, I get this result on a database with an 8k block size. Each activity, on average, has about 19 rows per block.

        Enter value for tab: task

        AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
        -------- ----------- ---------- ---------- ----------
           18.72          19          3         28     415917

    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.

    Read the article

  • Series On Embedded Development (Part 2) - Build-Time Optionality

    - by user12612705
    In this entry on embedded development, I'm going to discuss build-time optionality (BTO). BTO is the ability to subset your software at build time so you only use what is needed. BTO typically pertains more to software providers than to developers of final products. For example, software providers ship source products, frameworks or platforms which are used by developers to build other products. If you provide a source product, you probably don't have to do anything to support BTO, as the developers using your source will only use the source they need to build their product. If you provide a framework, then there are some things you can do to support BTO. Say you provide a Java framework which supports audio and video. If you provide this framework in a single JAR, then developers who only want audio are forced to ship their product with the video portion of your framework even though they aren't using it. In this case, support providing the framework in separate JARs: break the framework into an audio JAR and a video JAR and let the users of your framework decide which JARs to include in their product. Sometimes this is as simple as packaging, but if, for example, the video functionality is dependent on the audio functionality, it may require coding work to cleanly separate the two. BTO can also work at install time, and this is sometimes overlooked. Let's say you're building a phone application which can use Near Field Communications (NFC) if it's available on the phone, but doesn't require NFC to work. Typically you'd write one app for all phones (saving you time), both those that have NFC and those that don't, and just use NFC if it's there. However, for better efficiency, you can detect at install time whether the phone supports NFC and not install the NFC portion of your app if the phone doesn't support it. This requires that you write the app so it can run without the optional NFC code, and that you write your install app so it can detect NFC and do the right thing at install time. Supporting install-time optionality will save persistent footprint on the phone, something your customers will appreciate, your app "neighbors" will appreciate, and you'll appreciate when it saves static footprint for you. In the next article, I'll talk about runtime optionality.
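    As a small illustration of that detection step (a sketch only; the class probed here, android.nfc.NfcAdapter, is just one example of an optional capability and is not taken from the original article), the optional module can be gated on whether its key class is even present:

        // Sketch: probe for an optional capability and only wire up the optional
        // module when the capability is actually present on the device.
        public final class NfcSupport {
            private NfcSupport() {}

            public static boolean isAvailable() {
                try {
                    Class.forName("android.nfc.NfcAdapter");
                    return true;
                } catch (ClassNotFoundException e) {
                    return false; // the NFC portion of the app can be skipped entirely
                }
            }
        }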

    Read the article

  • Should we persist with an employee still writing bad code after many years?

    - by user94986
    I've been assigned the task of managing developers for a well-established company. They have a single developer who specialises in all their C++ coding (since forever), but the quality of the work is abysmal. Code reviews and testing have revealed many problems, one of the worst being memory leaks. The developer has never tested his code for leaks, and I discovered that the applications could leak many MBs with only a minute of use. Users were reporting huge slowdowns, and his take was, "it's nothing to do with me - if they quit and restart, it's all good again." I've given him tools to detect and trace the leaks, and sat down with him for many hours to demonstrate how the tools are used, where the problems occur, and what to do to fix them. We're 6 months down the track, and I assigned him to write a new module. I reviewed it before it was integrated into our larger code base, and was dismayed to discover the same bad coding as before. The part that I find incomprehensible is that some of the coding is worse than amateurish. For example, he wanted a class (Foo) that could populate an object of another class (Bar). He decided that Foo would hold a reference to Bar, e.g.:

        class Foo
        {
        public:
            Foo(Bar& bar) : m_bar(bar) {}

        private:
            Bar& m_bar;
        };

    But (for other reasons) he also needed a default constructor for Foo and, rather than question his initial design, he wrote this gem:

        Foo::Foo() : m_bar(*(new Bar)) {}

    So every time the default constructor is called, a Bar is leaked. To make matters worse, Foo allocates memory from the heap for 2 other objects, but he didn't write a destructor or copy constructor. So every allocation of Foo actually leaks 3 different objects, and you can imagine what happened when a Foo was copied. And - it only gets better - he repeated the same pattern on three other classes, so it isn't a one-off slip. The whole concept is wrong on so many levels. I would feel more understanding if this came from a total novice, but this guy has been doing this for many years and has had very focussed training and advice over the past few months. I realise he has been working without mentoring or peer reviews most of that time, but I'm beginning to feel he can't change. So my question is, would you persist with someone who is writing such obviously bad code?
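    For comparison, a minimal sketch of how the default-constructible case is usually handled without leaking (one option among several, assuming C++11 and that Foo may legitimately own its own Bar when none is supplied):

        #include <memory>

        class Bar {};

        class Foo
        {
        public:
            explicit Foo(Bar& bar) : m_bar(&bar) {}             // borrow a caller-owned Bar
            Foo() : m_owned(new Bar), m_bar(m_owned.get()) {}   // own a Bar; freed automatically

        private:
            std::unique_ptr<Bar> m_owned;  // empty when borrowing
            Bar* m_bar;                    // always points at the Bar in use
        };

    The unique_ptr member also makes Foo non-copyable by default, which surfaces the copy-construction problem at compile time instead of as a leak.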

    Read the article

  • How should I refactor switch statements like this (Switching on type) to be more OO?

    - by Taytay
    I'm seeing some code like this in our code base, and want to refactor it (TypeScript pseudocode follows):

        class EntityManager {
            private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
                var existingEntity: IEntity = null;
                switch (entityType) {
                    case Types.UserSetting:
                        existingEntity = this.getUserSettingByUserIdAndSettingName(
                            serverObject.user_id, serverObject.setting_name);
                        break;
                    case Types.Bar:
                        existingEntity = this.getBarByUserIdAndId(serverObject.user_id, serverObject.id);
                        break;
                    // Lots more case statements here...
                }
                return existingEntity;
            }
        }

    The downsides of switching on type are self-explanatory. Normally, when switching behavior based on type, I try to push the behavior into subclasses so that I can reduce this to a single method call and let polymorphism take care of the rest. However, the following two things are giving me pause: 1) I don't want to couple the serverObject with the class that is storing all of these objects. It doesn't know where to look for entities of a certain type. And unfortunately, the identity of a type of ServerObject varies with the type of ServerObject (so sometimes it's just an ID, other times it's a combination of an id and a uniquely identifying string, etc.), and this behavior doesn't belong down there on those subclasses. It is the responsibility of the EntityManager and its delegates. 2) In this case, I can't modify the ServerObject classes since they're plain old data objects. It should be mentioned that I've got other instances of the above method that take a parameter like IEntity and proceed to do almost the same thing (but slightly modify the names of the methods they're calling to get the identity of the entity). So we might have:

        case Types.Bar:
            existingEntity = this.getBarByUserIdAndId(entity.getUserId(), entity.getId());
            break;

    So in that case, I can change the entity interface and subclasses, but this isn't behavior that belongs in that class. So, I think that points me to some sort of map. So eventually I will call:

        private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
            return aMapOfSomeSort[entityType].findByServerObject(serverObject);
        }

        private findEntityForEntity(someEntity: IEntity): IEntity {
            return aMapOfSomeSort[someEntity.entityType].findByEntity(someEntity);
        }

    Which means I need to register some sort of strategy classes/functions at runtime with this map. And again, I darn well better remember to register one for each of my types, or I'll get a runtime exception. Is there a better way to refactor this? I feel like I'm missing something really obvious here.
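    One sketch of that registry (untested, with hypothetical names; IEntity and the getter methods are assumed from the post): make each entry a small finder strategy keyed by entity type, and fail loudly with a clear message when a type was never registered. If the set of type names can be expressed as a string-literal union, typing the map as Record<EntityTypeName, IEntityFinder> turns a forgotten registration into a compile-time error rather than a runtime one.

        interface IEntityFinder {
            findByServerObject(serverObject: any): IEntity;
            findByEntity(entity: IEntity): IEntity;
        }

        class EntityManager {
            // Registry of finder strategies, keyed by entity type name.
            private finders: { [entityType: string]: IEntityFinder } = {};

            registerFinder(entityType: string, finder: IEntityFinder): void {
                this.finders[entityType] = finder;
            }

            private findEntityForServerObject(entityType: string, serverObject: any): IEntity {
                const finder = this.finders[entityType];
                if (!finder) {
                    throw new Error("No finder registered for entity type: " + entityType);
                }
                return finder.findByServerObject(serverObject);
            }

            private findEntityForEntity(someEntity: IEntity): IEntity {
                return this.finders[someEntity.entityType].findByEntity(someEntity);
            }
        }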

    Read the article

  • Reasons NOT to use JSF [closed]

    - by Vain Fellowman
    I am new to StackExchange, but I figured you would be able to help me. We're creating a new Java Enterprise application, replacing a legacy JSP solution. Due to many, many changes, the UI and parts of the business logic will be completely rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and I have some really serious concerns about using it. First of all, it creates the worst, most cluttered, invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web development. Furthermore, it throws together what should never be so tightly coupled: layout, design, logic and communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hot-keys, drag-and-drop widgets) or whatever. Secondly, it is way too complicated. Its complexity is outstanding. If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I get? None, if you think about it. Hundreds of components? I see tens of thousands of HTML/CSS snippets, tens of thousands of JavaScript snippets and thousands of jQuery plug-ins in addition. It solves many problems that we wouldn't have if we didn't use JSF, or the front-controller pattern at all. And lastly, I think we will have to start over in, say, 2 years. I don't see how I can implement all of our first GUI mock-up (besides, we have no JSF expert on our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck. Due to everything above, the service tier ends up in the grip of JSF, and we will have to start over. My suggestion would be to implement a REST API using JAX-RS, then create an HTML5/JavaScript client with client-side MVC (or some flavor of MVC). By the way, we will need the REST API anyway, as we are developing a partial Android front-end too. I doubt that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are the pros/cons? How can I emphasize my point not to use JSF? What are the strong points for using JSF over my suggestion?

    Read the article

  • Mount CIFS share gives "mount error 127 = Key has expired"

    - by djk
    Hi, I'm currently replicating the setup of a CentOS box and am running into a strange error while trying to mount a Samba share that resides on a NAS. The error I'm getting is:

        mount error 127 = Key has expired
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    The settings are identical to the old machine, and the password is definitely correct as well. I have googled the issue of course, and looked at every single page that references this issue (not that many), and have still not found an answer. The older CentOS box is using version 3.0.28-0.el4.9 of Samba and the one I'm trying to set up now is 3.0.33-3.7.el5_3.1. I don't know if this has anything to do with it, but it's certainly one of the only differences between the two setups. When I try the mount command this appears in the syslog:

        Sep 8 10:51:54 helvetica2 kernel: Status code returned 0xc0000072 NT_STATUS_ACCOUNT_DISABLED
        Sep 8 10:51:54 helvetica2 kernel: CIFS VFS: Send error in SessSetup = -127
        Sep 8 10:51:54 helvetica2 kernel: CIFS VFS: cifs_mount failed w/return code = -127

    The account is very much not disabled, as it works on the old box using the same credentials. Has anyone else seen this problem? Thanks!
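    For reference when comparing the two boxes (a sketch only - the share, mount point and account names below are placeholders), the cifs client lets you state the domain and security flavour explicitly instead of relying on defaults that can differ between releases:

        # Placeholder names; sec= pins the authentication flavour explicitly
        mount -t cifs //nas.example.local/share /mnt/nas \
            -o username=shareuser,domain=WORKGROUP,sec=ntlm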

    Read the article

  • Rebuilding array on 3ware 9690SA-8I

    - by Tim Jones
    I have a RAID10 (8x1TB) array on a 3ware 9690 card running in an Ubuntu 11.10 server. There was a kernel update, so I scheduled a reboot, after which the array was inaccessible. I checked the status: a drive had died in the array, but the controller had thrown the entire array into an 'inoperable' state instead of simply degraded (what's the point of the RAID now ;-). After taking out the 'dead' drive I ran a quick test and found it completely functional, without a bad sector to be found. I tried to put the drive back in, but the array still marks the disk as degraded (remembering the serial number or something??) and the entire array as inoperable... So I swapped it out for a known working drive (not the same capacity but higher - should still work) and initiated a rebuild with the new drive as a replacement. This fails instantly with the error "(0x0B:0x0033): Unit busy : Failed to start Rebuild on Unit 0". The unit shouldn't be busy, as it is not mounted (the card itself is listed with lshw but the array it provides is not). I'm pretty much at an impasse now. I don't understand how I can have a single drive failure on a RAID10 that makes the entire array inaccessible - degraded I could understand, but inaccessible?? I don't think it's the controller, as prior to the reboot it was completely functional.

    Read the article

  • Watchguard Firewall WebBlocker Regular Expression for Multiple Domains?

    - by Eric
    I'm pretty sure this is really a regex question, so you can skip to REGEX QUESTION if you want to skip the background. Our primary firewall is a Watchguard X750e running Fireware XTM v11.2. We're using WebBlocker to block most of the categories, and I'm allowing needed sites as they arise. Some sites are simple to add as exceptions, like Pandora radio. That one is just a pattern-matched exception with "pandora.com/" as the pattern: all traffic from anywhere on pandora.com is allowed. I'm running into trouble on more sophisticated domains that reference content off of their base domains. We'll take Grooveshark as an example. If you go to http://grooveshark.com/ and view the page source, you'll see hrefs referring to gs-cdn.net as well as grooveshark.com. So adding a WebBlocker exception for grooveshark.com/ is not effective, and I have to add a second rule allowing gs-cdn.net/ as well. I see that WebBlocker allows regex rules, so what I'd like to do in situations like this is create a single regex rule that allows traffic to all the needed domains. REGEX QUESTION: I'd like to try a regex that matches grooveshark.com/ and gs-cdn.net/. If anybody can help me write that regex, I'd appreciate it. Here is what is in the WatchGuard documentation for that section: "Regular expression: Regular expression matches use a Perl-compatible regular expression to make a match. For example, .[onc][eor][gtm] matches .org, .net, .com, or any other three-letter combination of one letter from each bracket, in order. Be sure to drop the leading "http://". Supports wild cards used in shell script. For example, the expression "(www)?.watchguard.[com|org|net]" will match URL paths including www.watchguard.com, www.watchguard.net, and www.watchguard.org." Thanks all!
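    A sketch of the kind of pattern being asked for (not tested against WebBlocker itself, so treat it as a starting point): alternate the two domains inside one group, escape the dots so they only match literal dots, and keep the trailing slash from the pattern-matched rules:

        (grooveshark\.com|gs-cdn\.net)/

    If subdomains also need to match, a leading ([a-z0-9-]+\.)* can be prepended to the group.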

    Read the article

  • How to install OS X *.TTC font on Windows? Error: "*.TTC does not appear to be a valid font"

    - by Chris W. Rea
    I own both a Mac running OS X 10.6 Snow Leopard, and a PC running Windows 7. On my Mac is a font called "AmericanTypewriter.ttc". I'd like to use that font on my PC, for a specific creative project. I was able to copy the actual font file over to the PC, but when I try to install it into the Windows Fonts folder I get the following error message: "Cannot install (FONTNAME).ttc - The file '(FONTNAME).ttc' does not appear to be a valid font." Can *.TTC format font files be installed on Windows? If so, how? Thanks! UPDATE: I downloaded the source code for a simple ttc2ttf utility (ttc2ttf_AA.tar.gz) found at this Japanese page and compiled it under cygwin via g++. The resulting executable extracted a single file, "AmericanTypewriter.ttf", from the TTC / True Type Collection. (Why have a collection with only one file!?) However, I still get an error message similar to the one above when I try to install the resulting AmericanTypewriter.ttf onto Windows. I'm stumped again. p.s. I no longer need this font on Windows, but now I'm determined to figure out why & how-to :-)

    Read the article

  • LiteSpeed enable Access-Control-Allow-Origin (no response header on CORS request)

    - by Joe Coder Guy
    Seriously, I can't find a single page discussing this for LiteSpeed. Using this format in the htaccess, "Header set Access-Control-Allow-Origin http://aSite.com" (and https), sends the setting in the HTTP response header, but I still get the "XMLHttpRequest cannot load https://aSite.com/aFile.php. Origin aSite.com is not allowed by Access-Control-Allow-Origin" error when trying to access https from an http origin. Also, I receive no response header for https; only that message shows up in Chrome. Is the server still blocking it even though I've sent the proper headers? I read elsewhere that it helps to add these headers:

        Access-Control-Allow-Headers X-Requested-With
        Access-Control-Allow-Methods OPTIONS, GET, POST
        Access-Control-Allow-Headers Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control

    but I don't see these in my headers. Using these, my PHP files aren't even reached (because they register no errors or anything), so it looks like it comes from the server only, but what do I know. Thanks in advance! Update: Since there is no response header, Prashant seems to suggest it's a server issue, since the same code worked on another server: http://stackoverflow.com/questions/11953132/no-response-obtained-while-implementing-cors Anyone know how to flip this switch? Headers work now: the earlier LiteSpeed format was bad and should look like the following. I'm still being denied, though.

        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Methods OPTIONS
        Header set Access-Control-Allow-Methods GET
        Header set Access-Control-Allow-Methods POST
        Header set Access-Control-Allow-Headers Content-Type
        Header set Access-Control-Allow-Headers Depth
        Header set Access-Control-Allow-Headers User-Agent
        Header set Access-Control-Allow-Headers X-File-Size
        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Headers If-Modified-Since
        Header set Access-Control-Allow-Headers X-File-Name
        Header set Access-Control-Allow-Headers Cache-Control
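    One detail worth checking (stated from Apache header-directive semantics rather than anything LiteSpeed-specific): repeated "Header set" lines for the same header overwrite each other, so of the block above only the last Allow-Methods and Allow-Headers values would survive. Combining each header into a single comma-separated "set" keeps all of the values:

        Header set Access-Control-Allow-Origin "http://aSite.com"
        Header set Access-Control-Allow-Methods "GET, POST, OPTIONS"
        Header set Access-Control-Allow-Headers "Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control"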

    Read the article

  • Windows web server and SQL Server on same dedicated server

    - by asinc
    I'm currently trying to decide on the best approach to hosting a few moderate-traffic websites for production e-commerce and online applications. We'd like to move to a dedicated server and are looking at this as the most likely machine: quad-core Intel Core 2 Quad Q9550 processor, 2.83 GHz x 4, 4 GB Kingston RAM. This would run Windows Web Server 2008 R2 x64 and potentially also SQL Server 2008 Web Edition and SmarterMail server. Given that we already pay for a high-end VPS for development, testing and shared version control, we'd like to avoid going with two servers for production. We'd like to avoid using shared SQL Server hosting, and have thought of using the development server as the database server as an option too - but that is potentially a security risk due to its use for development by internal and contract users. The questions are:
    - Do you feel there would be performance degradation by running this on the same machine?
    - Are there significant issues to be concerned about if we do this? We understand that best practice would be to run separate DB and app servers, but the volume of traffic is currently not that high, and adding another server just for the database is currently too costly.
    - What are others doing out there?
    Alternatively, would you recommend instead going with two separate VPS servers with 2 GB RAM each on Hyper-V, which would be about the same cost as the single dedicated server above? Thanks!

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation, using this installation guide. My base cluster is installed and working properly, as is clustering for the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts at installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly, on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was:
    - Run setup.exe from patched media, selecting appropriate options
    - After the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single, failed node
    - Attempt to diagnose the problem, inspect event logs, etc.
    - Delete the computer account that was created for the SQL service from Active Directory
    - Delete the MSSQL10.MSSQLSERVER folder from the shared data drive
    The error message I receive from the SQL Server installer is:

        The following error has occurred: The cluster resource 'SQL Server' could not be brought online.
        Error: The group or resource is not in the correct state to perform the requested operation.
        (Exception from HRESULT: 0x8007139F)

    Along with hundreds of the following errors in the Application event log:

        [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818;
        message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    System configuration notes: Windows Server 2008 Enterprise Edition x64; SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media; Dell PowerEdge servers; Fibre attached storage.
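    For what it's worth, NT AUTHORITY\ANONYMOUS LOGON failures are often associated with Kerberos falling back because no SPN matches the clustered network name. As a sketch of what registering the SPNs for the virtual name usually looks like (the domain suffix and account below are placeholders for this environment):

        setspn -A MSSQLSvc/PROD-C1-DB.yourdomain.local DOMAIN\sqlserviceaccount
        setspn -A MSSQLSvc/PROD-C1-DB.yourdomain.local:1433 DOMAIN\sqlserviceaccount

    Stale SPNs left on the old computer account are also worth checking with setspn -L before reusing a "tainted" name.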

    Read the article

  • Domain Trust Issues When Setting Up TFS 2010 on Windows Server 2008 R2

    - by Chris Reynolds
    I am trying to set up Team Foundation Server 2010 on Windows Server 2008 R2 using a single-server configuration. During the "Readiness Checks" phase of the configuration wizard, I am facing an issue that is preventing me from communicating with the domain controller (which is Windows Server 2000).

        [ System Checks ]
        TF255435: This computer is a member of an Active Directory domain, but the domain controllers are not accessible. Network problems might be preventing access to the domain. Verify that the network is operational, and then retry the readiness checks. Other options include configuring Team Foundation Server specifying a local account in the custom wizard or joining the computer to a workgroup.
        http://go.microsoft.com/fwlink/?LinkID=164053&clcid=0x409

    After reading the log file, the main issue I am encountering appears to be: "The trust relationship between this workstation and the primary domain failed. (type SystemException)". I have read in several other locations that the solution to this issue is to: leave the domain, restart, join a workgroup, restart, then rejoin the domain and restart again. Unfortunately, I have tried this several times now and the issue persists. Is there anything I can try on either the client machine or the domain controller that may help solve my issue?
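    As a sketch of ways to test and repair the secure channel directly, rather than cycling through the leave/rejoin steps again (commands run on the TFS machine; the domain, DC and account names are placeholders):

        nltest /sc_query:YOURDOMAIN
        nltest /sc_verify:YOURDOMAIN
        netdom resetpwd /s:dc01.yourdomain.local /ud:YOURDOMAIN\adminuser /pd:*

    The first two report the state of the machine's secure channel to the domain; the netdom command resets the computer account password that backs it.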

    Read the article

  • IIS 7.5 How to disable "Verify File Exists" for siteminder handler

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5, with SiteMinder agent version 6QMR6. Problem: SiteMinder protects physical files that exist, but it is not protecting the folder when we try to access a file that does not exist. It should redirect to the login page even if the file doesn't exist, as long as the user is accessing a protected folder. How can I configure IIS 7.5 so that it does not verify that a file exists before authentication by SiteMinder? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll. How do I protect an ASP.NET MVC request with SiteMinder? (I added this because my previous question did not solve the problem: MVC requests show up in the IIS log but not in the SiteMinder log.) Update: Microsoft Support says that IIS 7.5, as with earlier versions, doesn't support wildcard mappings on two ISAPI handlers that both use the * wildcard. Currently, in my case, SiteMinder has a * wildcard and ASP.NET MVC (whose handler is aspnet_isapi) has a * wildcard to handle the requests, and ordered priority doesn't work in the wildcard-mapping case with just *. I am not convinced by the answer but will wait till tomorrow for them to get back.

    Read the article

  • System State Backups using NTbackup fail with error 0x800423f4 (relating to volume shadow copy)

    - by Paul Zimmerman
    We have a Windows Server 2003 R2 machine running Service Pack 2. It is a domain controller (Global Catalog) and our main internal DNS server. We run a System State backup of the machine to back up Active Directory information and save the backup to a different server. This server has a single drive (C:), and we do have Shadow Copies enabled for the volume (which are completing successfully). The System State backup is now failing with the following listed in the backup logs:

        Volume shadow copy creation: Attempt 1.
        "Event Log Writer" has reported an error 0x800423f4. This is part of System State. The backup cannot continue.
        Error returned while creating the volume shadow copy: 800423f4
        Aborting Backup.
        The operation did not successfully complete.

    When doing a vssadmin list writers, we sometimes get the following reported for the Event Log Writer (other times it says that it is in the state of "[1] Stable" with "No error"):

        Writer name: 'Event Log Writer'
           Writer Id: {eee8c692-67ed-4250-8d86-390603070d00}
           Writer Instance Id: {c7194e96-868a-49e5-ba99-89b61977753c}
           State: [8] Failed
           Last error: Retryable error

    We have tried disabling the Event Log service via the registry, rebooting, deleting the event log files from the drive, then re-enabling the service via the registry and rebooting, but this didn't seem to solve the issue. We also get an error message in the Event Viewer when trying to open the log for the File Replication Service: "Unable to complete the operation on 'File Replication Service'. The security descriptor structure is invalid." I have searched the error via Google and tried a number of different things, but nothing has seemed to help. Any suggestions on what we might try to get the Event Log Writer to behave would be greatly appreciated!

    Read the article

  • Invalid format for New Relic licence when installing on Elastic Beanstalk

    - by BenFreke
    We've created an app that is running on an Elastic Beanstalk instance, 64-bit PHP version 5.4 (so not legacy). I've used the New Relic installation instructions to install New Relic, and viewing phpinfo shows that New Relic is installed. However, I'm not getting any data in New Relic, and that is because it is reporting the licence as ***invalid format*** under newrelic.licence. I'm getting the licence from my New Relic account, and it is a 40-character hexadecimal string. Here is the current newrelic.config file in the .ebextensions folder I'm using, with most of the licence key trimmed out:

        packages:
          yum:
            newrelic-php5: []
          rpm:
            newrelic: http://yum.newrelic.com/pub/newrelic/el5/x86_64/newrelic-repo-5-3.noarch.rpm
        commands:
          configure_new_relic:
            command: newrelic-install install
            env:
              NR_INSTALL_SILENT: true
              NR_INSTALL_KEY: ec9a4...

    Skitch of relevant phpinfo. Can anyone shed some light on what's going on here? I've tried two different New Relic licence keys with the same error. I've also surrounded the key with single quote marks and tried uppercase only, and at this point I'm out of ideas on what to try. We're not AWS gurus, so it could very easily be something simple, like not opening a port to allow the licence to be validated?
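    One alternative worth sketching (assumptions flagged: the exact ini path varies by environment, and the option names should be double-checked against the PHP agent documentation): skip the env-variable hand-off to the installer and write the key straight into an ini snippet via a files block, using a placeholder for the real 40-character key:

        files:
          "/etc/php.d/zzz-newrelic-license.ini":
            mode: "000644"
            owner: root
            group: root
            content: |
              newrelic.license = "YOUR_40_CHARACTER_LICENSE_KEY"
              newrelic.appname = "My EB Application"

    If the agent then accepts the key, the problem is in how the env values reach newrelic-install rather than in the key itself.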

    Read the article

  • PDF Converter wanted: Convert 8.5*11 PDF images into 600*800px PDF images for the Nook

    - by Astro
    I have PDF files that are maritime charts, for example this one from the Delaware Bay: http://ocsdata.ncd.noaa.gov/BookletChart/12304_BookletChart_HomeEd.pdf There is a lot of detailed information in the image. When I show them on a monitor the details are visible, but when I put them on the Nook they are sized to fit the Nook screen (about 3x5 inches) and all the details are lost. Since the Nook PDF reader does not have a zoom feature, the charts are unreadable. I'd like to convert them to fit on the Nook screen (600x800 px). This means the images would need to be sliced into multiple images (i.e. a single image/page would get split into 5 rows and 4 columns). In a perfect world they would get converted with a slight overlap, left to right, top to bottom. I could then page through the smaller slices to find the section I need. I had used PaperCrop in the past to convert two-column PDFs (highly recommended), but it can't do anything with the images. ImageMagick does a lot with combining images, but I didn't see any split options to split the image into tiles. Windows program preferred. Thanks!
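    For what it's worth, ImageMagick can split as well as combine. A sketch (untested on these particular charts; the output name is a placeholder) that rasterizes the first page at a readable density and cuts it into a 4-column by 5-row grid of separate images:

        convert -density 200 12304_BookletChart_HomeEd.pdf[0] -crop 4x5@ +repage +adjoin chart_tile_%02d.png

    The [0] selects the first page (drop it to process every page), -crop 4x5@ divides the page into a 4x5 grid, and +repage/+adjoin write each tile as its own file. This does not add the slight overlap mentioned above; that would need explicit -crop geometries with offsets.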

    Read the article
