Search Results

Search found 772 results on 31 pages for 'opposite of you'.

Page 29/31 | < Previous Page | 25 26 27 28 29 30 31  | Next Page >

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer smoke tests it, to make sure the deployment went smoothly. To me this feels wrong; it essentially means it takes two people to deploy an application, and in our case those two people are on opposite sides of the planet, so timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow. The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful; plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app. Now, developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of NUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the WatiN or Selenium APIs to run through the obvious browser steps and call the web service, and then to explain to the Operations guys how to run those tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the Operations guys and the computer are going to have to know which set of tests relates to which version of the app, and tell the NUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself could be given a base URL, then download, say, a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than NUnit, maybe FitNesse? For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
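
    As a rough illustration of the fixture half of this, here is a minimal sketch: an NUnit test that reads its base URL from configuration, so the same binaries can be pointed at UAT or Production. The "BaseUrl" appSettings key and the test itself are hypothetical, and it assumes NUnit 2.x.

        // Minimal sketch of a base-URL-driven smoke test (assumes NUnit 2.x;
        // the "BaseUrl" appSettings key and the fixture are hypothetical).
        using System.Configuration;
        using System.Net;
        using NUnit.Framework;

        [TestFixture]
        public class SmokeTests
        {
            private string baseUrl;

            [TestFixtureSetUp]
            public void ReadConfiguration()
            {
                // e.g. <add key="BaseUrl" value="http://test.example.com" />
                baseUrl = ConfigurationManager.AppSettings["BaseUrl"];
            }

            [Test]
            public void HomePageRespondsWithOk()
            {
                var request = (HttpWebRequest)WebRequest.Create(baseUrl + "/");
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
                }
            }
        }

    Swapping the config file per environment (or per downloaded zip) is then the only step the runner needs automated.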

    Read the article

  • How can I create a Base64-Encoded string from a GDI+ Image in C++?

    - by Schnapple
    I asked a question recently, How can I create an Image in GDI+ from a Base64-Encoded string in C++?, which got a response that led me to the answer. Now I need to do the opposite - I have an Image in GDI+ whose image data I need to turn into a Base64-Encoded string. Due to its nature, it's not straightforward. The crux of the issue is that an Image in GDI+ can save out its data to either a file or an IStream*. I don't want to save to a file, so I need to use the resulting stream. Problem is, this is where my knowledge breaks down. This first part is what I figured out in the other question // Initialize GDI+. GdiplusStartupInput gdiplusStartupInput; ULONG_PTR gdiplusToken; GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL); // I have this decode function from elsewhere std::string decodedImage = base64_decode(Base64EncodedImage); // Allocate the space for the stream DWORD imageSize = decodedImage.length(); HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, imageSize); LPVOID pImage = ::GlobalLock(hMem); memcpy(pImage, decodedImage.c_str(), imageSize); // Create the stream IStream* pStream = NULL; ::CreateStreamOnHGlobal(hMem, FALSE, &pStream); // Create the image from the stream Image image(pStream); // Cleanup pStream->Release(); GlobalUnlock(hMem); GlobalFree(hMem); (Base64 code) And now I'm going to perform an operation on the resulting image, in this case rotating it, and now I want the Base64-equivalent string when I'm done. // Perform operation (rotate) image.RotateFlip(Gdiplus::Rotate180FlipNone); IStream* oStream = NULL; CLSID tiffClsid; GetEncoderClsid(L"image/tiff", &tiffClsid); // Function defined elsewhere image.Save(oStream, &tiffClsid); // And here's where I'm stumped. (GetEncoderClsid) So what I wind up with at the end is an IStream* object. But here's where both my knowledge and Google break down for me. IStream shouldn't be an object itself, it's an interface for other types of streams. I'd go down the road from getting string-Image in reverse, but I don't know how to determine the size of the stream, which appears to be key to that route. How can I go from an IStream* to a string (which I will then Base64-Encode)? Or is there a much better way to go from a GDI+ Image to a string?
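
    For comparison, the managed GDI+ wrapper (System.Drawing) shows the shape of the round trip being asked about; this is a C# sketch of the equivalent, not the C++ answer itself.

        // Sketch: Image -> Base64 via the managed GDI+ wrapper, mirroring
        // the C++/IStream version above.
        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        static string ImageToBase64(Image image)
        {
            using (var stream = new MemoryStream())
            {
                // The same TIFF encoder the C++ code looks up via GetEncoderClsid.
                image.Save(stream, ImageFormat.Tiff);
                return Convert.ToBase64String(stream.ToArray());
            }
        }

    In native code the missing piece is the same idea: getting the bytes back out of the stream before encoding them.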

    Read the article

  • Working with friends. Poor career choice?

    - by a_person
    Hi all, I hope you can help me solve somewhat of a moral dilemma. Some time ago, after just a few years of living in the U.S. and having to take any job I could get my hands on, a friend of mine recommended me for an open position at the company he was working for. I could not have been happier. I do not have a degree of any sort; however, by being passionate about CS and with a constant drive for self-education, I've become somewhat of a strong generalist. Every place I worked for recognized me for that quality and used me on various projects where the technology at hand had no overlap with the team members' knowledge. I rapidly advanced to a Sr. Programmer position, and the trend of me following my friend from one place to another started and continued for a few years. My friend's goal has always been to become an IT Director; mine is to become the best programmer I can be. To my knowledge I've accommodated his goals as much as I could by taking a back seat and letting him take the lead. Fast forward to today. He's a manager, and I am on his team. I am unhappy and in a considerable amount of suffering. I am not being utilized to my potential; it's almost the exact opposite: I am being micromanaged to an unhealthy extent, and my decisions and suggestions are constantly met with a negative connotation. Last week I had to hear about how my friend is a better programmer than I am. My ego was ecstatic about that one /s. In addition, working in the field of BI has exhausted itself for the most part. The only pleasure of my work is derived from making everything as dynamic and parameter-driven as possible. This is the only area where my friend does not feel competent enough to actually micromanage. Because of my situation I feel a fair amount of guilt and ever-growing resentment. I need your advice; maybe you've dealt with this expression of ego before, the needs of self vs. the needs of your friend. Is working with a friend a poor choice? Thank you for reading.

    Read the article

  • ASP.NET MVC: How can I explain an invalid type violation to an end-user with Html.ValidationSummary?

    - by Terminal Frost
    Serious n00b warning here; please take mercy! So I finished the Nerd Dinner MVC tutorial and I'm now in the process of converting a VB.NET application to ASP.NET MVC, using the Nerd Dinner program as a sort of rough template. I am using the "IsValid / GetRuleViolations()" pattern to identify invalid user input or values that violate business rules. I am using LINQ to SQL and am taking advantage of the "OnValidate()" hook, which allows me to run the validation and throw an application exception upon trying to save changes to the database via the CustomerRepository class. Anyway, everything works well, except that by the time the form values reach my validation method, invalid types have already been converted to a default or existing value. (I have a "StreetNumber" property that is an integer, though I imagine this would be a problem for DateTime or any other non-strings as well.) Now, I am guessing that the UpdateModel() method throws an exception and then alters the value, because the Html.ValidationMessage is displayed next to the StreetNumber field but my validation method never sees the original input. There are two problems with this: First, while the Html.ValidationMessage does signal that something is wrong, there is no corresponding entry in the Html.ValidationSummary. If I could even get the exception message to show up there, indicating an invalid cast or something, that would be better than nothing. Second, my validation method, which resides in my Customer partial class, never sees the original user input, so I do not know if the problem is a missing entry or an invalid type. I can't figure out how I can keep my validation logic nice and neat in one place and still get access to the form values. I could of course write some logic in the View that processes the user input; however, that seems like the exact opposite of what I should be doing with MVC. Do I need a new validation pattern, or is there some way to pass the original form values to my model class for processing? CustomerController code: // POST: /Customers/Edit/[id] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection formValues) { Customer customer = customerRepository.GetCustomer(id); try { UpdateModel(customer); customerRepository.Save(); return RedirectToAction("Details", new { id = customer.AccountID }); } catch { foreach (var issue in customer.GetRuleViolations()) ModelState.AddModelError(issue.PropertyName, issue.ErrorMessage); } return View(customer); }
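
    As a hedged illustration of one way around the second problem: in ASP.NET MVC the model binder records the raw form value in ModelState, so the controller can inspect it after UpdateModel() fails to convert it. The error message wording here is invented for the example.

        // Sketch: recover the original input for StreetNumber from ModelState
        // after binding failed, and surface a friendlier summary entry.
        if (ModelState.ContainsKey("StreetNumber")
            && ModelState["StreetNumber"].Errors.Count > 0
            && ModelState["StreetNumber"].Value != null)
        {
            // AttemptedValue holds the raw string as typed, before conversion.
            string rawInput = ModelState["StreetNumber"].Value.AttemptedValue;
            ModelState.AddModelError("StreetNumber",
                String.Format("'{0}' is not a valid street number.", rawInput));
        }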

    Read the article

  • ActionResult - Service

    - by cem
    I'm bored of writing the same code for the service and the UI, so I tried to write a converter for simple actions. This converter, which turns service results into MVC results, seems like a good solution to me, but I worry it may go against the MVC pattern. So I need help: what do you think about this approach; is it good or not? Thanks. ServiceResult - Base: public abstract class ServiceResult { public static NoPermissionResult Permission() { return new NoPermissionResult(); } public static SuccessResult Success() { return new SuccessResult(); } public static SuccessResult<T> Success<T>(T result) { return new SuccessResult<T>(result); } protected ServiceResult(ServiceResultType serviceResultType) { _resultType = serviceResultType; } private readonly ServiceResultType _resultType; public ServiceResultType ResultType { get { return _resultType; } } } public class SuccessResult<T> : ServiceResult { public SuccessResult(T result) : base(ServiceResultType.Success) { _result = result; } private readonly T _result; public T Result { get { return _result; } } } public class SuccessResult : SuccessResult<object> { public SuccessResult() : this(null) { } public SuccessResult(object o) : base(o) { } } Service - e.g. ForumService: public ServiceResult Delete(IVUser user, int id) { Forum forum = Repository.GetDelete(id); if (!Permission.CanDelete(user, forum)) { return ServiceResult.Permission(); } Repository.Delete(forum); return ServiceResult.Success(); } Controller: public class BaseController { public ActionResult GetResult(ServiceResult result) { switch (result.ResultType) { case ServiceResultType.Success: var successResult = (SuccessResult)result; return View(successResult.Result); break; case ServiceResultType.NoPermission: return View("Error"); break; default: return View(); break; } } } [HandleError] public class ForumsController : BaseController { [ValidateAntiForgeryToken] [Transaction] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Delete(int id) { ServiceResult result = ForumService.Delete(WebUser.Current, id); /* Custom result */ if (result.ResultType == ServiceResultType.Success) { TempData[ControllerEnums.GlobalViewDataProperty.PageMessage.ToString()] = "The forum was successfully deleted."; return this.RedirectToAction(ec => Index()); } /* Custom result */ /* Execute Permission result etc. */ TempData[ControllerEnums.GlobalViewDataProperty.PageMessage.ToString()] = "A problem was encountered preventing the forum from being deleted. " + "Another item likely depends on this forum."; return GetResult(result); } }

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it. My main reason for using persistent data structures is, first of all, to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database possible almost for free. And that is something quite nice to have, especially for the web and for data analysis (e.g. trends). All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously, so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it. Here's my line of thought: - Since all writes are done asynchronously, and using a persistent data structure means the previous - and still valid - structure is never invalidated, the write time isn't really a bottleneck. - There is some literature on structures like this that are designed exactly for disk usage, but it seems to me that those techniques add more read overhead to achieve faster writes, and I think exactly the opposite trade-off is preferable. Also, many of those techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is crucial to justify the overhead of persistence. - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta compression system could also be considered. - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations. But there are some possible pitfalls along the way: - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications, most of the time. Also, when real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner. - They could also lead to some commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right? - The overhead of an immutable interface like this will grow exponentially, and everything is doomed to fail soon, so this is all a bad idea. Any thoughts? Thanks! Edit: there seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
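
    For anyone unfamiliar with the term, the core idea is path copying: an update returns a new root that shares every subtree the search path did not touch. A minimal sketch on a plain (unbalanced) binary search tree, with the red-black rebalancing deliberately left out:

        // Sketch of path copying: Insert never mutates a node; it rebuilds
        // only the nodes on the search path and shares the rest.
        sealed class Node
        {
            public readonly int Key;
            public readonly Node Left, Right;
            public Node(int key, Node left, Node right)
            {
                Key = key; Left = left; Right = right;
            }
        }

        static Node Insert(Node root, int key)
        {
            if (root == null) return new Node(key, null, null);
            if (key < root.Key)
                return new Node(root.Key, Insert(root.Left, key), root.Right);
            if (key > root.Key)
                return new Node(root.Key, root.Left, Insert(root.Right, key));
            return root; // already present; the old version stays valid
        }

    Every old root still describes a complete, consistent tree, which is what makes the temporal queries cheap.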

    Read the article

  • circles and triangles problem

    - by Faken
    Hello everyone, I have an interesting problem here that I've been trying to solve for the last little while: I have 3 circles on a 2D xy plane, each with the same known radius. I know the coordinates of each of the three centers (they are arbitrary and can be anywhere). What is the largest triangle that can be drawn such that each vertex of the triangle sits on a separate circle, and what are the coordinates of those vertices? I've been looking at this problem for hours and asked a bunch of people, but so far only one person has been able to suggest a plausible solution (though I have no way of proving it). The solution that we have come up with involves first creating a triangle from the three circle centers. Next we look at each circle individually and calculate the equation of a line that passes through the circle's center and is perpendicular to the opposite edge. We then calculate the two intersection points of that line with the circle. This is then done for the other two circles, for a total of 6 points. We iterate over the 8 possible 3-point triangles that these 6 points create (the restriction is that each point of the big triangle must be on a separate circle) and find the maximum size. The results look reasonable (at least when drawn out on paper), and it passes the special case of when the centers of the circles all fall on a straight line (it gives the known largest triangle). Unfortunately I have no way of proving whether this is correct or not. I'm wondering if anyone has encountered a problem similar to this, and if so, how did you solve it? Note: I understand that this is mostly a math question and not programming; however it is going to be implemented in code and it must be optimized to run very fast and efficiently. In fact, I already have the above solution in code and tested it to be working; if you would like to take a look, please let me know. I chose not to post it because it's all in vector form and pretty much impossible to figure out exactly what is going on (because it's been condensed to be more efficient). Lastly, yes this is for school work, though it is NOT a homework question/assignment/project. It's part of my graduate thesis (albeit a very, very small part, but it still technically is part of it). Thanks for your help.
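
    The per-circle step of that proposed solution is small in code: the two candidate points are the centre pushed out along the unit normal of the opposite edge, in both directions. A sketch of just that step (C#, names hypothetical):

        // Sketch: the two intersections of a circle (centre c, radius r) with
        // the line through c perpendicular to the opposite edge (a, b).
        using System;

        static void CandidatePoints(double cx, double cy, double r,
                                    double ax, double ay, double bx, double by,
                                    out double p1x, out double p1y,
                                    out double p2x, out double p2y)
        {
            // Unit vector perpendicular to the edge a -> b.
            double ex = bx - ax, ey = by - ay;
            double len = Math.Sqrt(ex * ex + ey * ey);
            double nx = -ey / len, ny = ex / len;

            p1x = cx + r * nx; p1y = cy + r * ny; // centre + r * normal
            p2x = cx - r * nx; p2y = cy - r * ny; // centre - r * normal
        }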

    Read the article

  • What does Apache need to support both mysqli and PDO?

    - by Nathan Long
    I'm considering changing some PHP code to use PDO for database access instead of mysqli (because the PDO syntax makes more sense to me and is database-agnostic). To do that, I'd need both methods to work while I'm making the changeover. My problem is this: so far, either one or the other method will crash Apache. Right now I'm using XAMPP in Windows XP, and PHP Version 5.2.8. Mysqli works fine, and so does this: $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password); echo 'Connected to database'; $sql = "SELECT * FROM `employee`"; But this line makes Apache crash: $dbc->query($sql); I don't want to redo my entire Apache or XAMPP installation, but I'd like for PDO to work. So I tried updating libmysql.dll from here, as oddvibes recommended here. That made my simple PDO query work, but then mysqli queries crashed Apache. (I also tried the suggestion after that one, to update php_pdo_mysql.dll and php_pdo.dll, to no effect.) Test Case I created this test script to compare PDO vs mysqli. With the old copy of libmysql.dll, it crashes if $use_pdo is true and doesn't if it's false. With the new copy of libmysql.dll, it's the opposite. if ($use_pdo){ $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password); echo 'Connected to database<br />'; $sql = "SELECT * FROM `employee`"; $dbc->query($sql); foreach ($dbc->query($sql) as $row){ echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n"; } } else { $dbc = @mysqli_connect($hostname, $username, $password, $dbname) OR die('Could not connect to MySQL: ' . mysqli_connect_error()); $sql = "SELECT * FROM `employee`"; $result = @mysqli_query($dbc, $sql) or die(mysqli_error($dbc)); while ($row = mysqli_fetch_array($result,MYSQLI_ASSOC)) { echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n"; } } What does Apache need in order to support both methods of database query?

    Read the article

  • Regular expression from font to span (size and colour) and back (VB.NET)

    - by chapmanio
    Hi, I am looking for a regular expression that can convert my font tags (only with size and colour attributes) into span tags with the relevant inline CSS. This will be done in VB.NET, if that helps at all. I also need a regular expression to go the other way. To elaborate, below is an example of the conversion I am looking for: <font size="10">some text</font> to then become: <span style="font-size:10px;">some text</span> So: converting the tag and putting "px" at the end of whatever the font size is (I don't need to change/convert the font size, just stick px at the end). The regular expression needs to cope with a font tag that has only a size attribute, only a color attribute, or both: <font size="10">some text</font> <font color="#000000">some text</font> <font size="10" color="#000000">some text</font> <font color="#000000" size="10">some text</font> I also need another regular expression to do the opposite conversion. So for example: <span style="font-size:10px;">some text</span> will become: <font size="10">some text</font> As before, converting the tag but this time removing the "px"; I don't need to worry about changing the font size. Again, this will also need to cope with the size styling, the colour styling, and a combination of both: <span style="font-size:10px;">some text</span> <span style="color:#000000;">some text</span> <span style="font-size:10px; color:#000000;">some text</span> <span style="color:#000000; font-size:10px;">some text</span> I appreciate this is a lot to ask; I am hopeless with regular expressions and need to find a way of making these conversions in my code. Thanks so much to anyone that can/is willing to help me!
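
    As a hedged starting point, here is a .NET sketch of the size-only case in both directions (the same Regex class is usable from VB.NET); the colour-only and combined cases would need similar patterns with the attributes made optional.

        // Sketch: size-only conversion in both directions with .NET regexes.
        using System.Text.RegularExpressions;

        string html = "<font size=\"10\">some text</font>";

        // font -> span: capture the size and the content, append "px".
        string spanned = Regex.Replace(html,
            "<font size=\"(\\d+)\">(.*?)</font>",
            "<span style=\"font-size:$1px;\">$2</span>");

        // span -> font: capture the pixel size and drop the "px" again.
        string fonted = Regex.Replace(spanned,
            "<span style=\"font-size:(\\d+)px;\">(.*?)</span>",
            "<font size=\"$1\">$2</font>");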

    Read the article

  • JPA Database structure for internationalisation

    - by IrishDubGuy
    I am trying to get a JPA implementation of a simple approach to internationalisation. I want to have a table of translated strings that I can reference in multiple fields in multiple tables. So all text occurrences in all tables will be replaced by a reference to the translated strings table. In combination with a language id, this would give a unique row in the translated strings table for that particular field. For example, consider a schema that has entities Course and Module as follows :- Course int course_id, int name, int description Module int module_id, int name The course.name, course.description and module.name are all referencing the id field of the translated strings table :- TranslatedString int id, String lang, String content That all seems simple enough. I get one table for all strings that could be internationalised, and that table is used across all the other tables. How might I do this in JPA, using EclipseLink 2.4? I've looked at an embedded ElementCollection, a la this... JPA 2.0: Mapping a Map - it isn't exactly what I'm after because it looks like it relates the translated strings table to the pk of the owning table. This means I could only have one translatable string field per entity (unless I added new join columns into the translatable strings table, which defeats the point; it's the opposite of what I am trying to do). I'm also not clear on how this would work across entities; presumably the id of each entity would have to use a database-wide sequence to ensure uniqueness in the translatable strings table. BTW, I tried the example as laid out in that link and it didn't work for me - as soon as the entity had a localizedString map added, persisting it caused the client side to bomb, with no obvious error on the server side and nothing persisted in the DB :S I've been around the houses on this for about 9 hours so far. I've also looked at this Internationalization with Hibernate, which appears to be trying to do the same thing as the link above (without the table definitions it's hard to see what he achieved). Any help would be gratefully received at this point... Edit 1 - re AMS's answer below: I'm not sure that really addresses the issue. In his example it leaves the storing of the description text to some other process. The idea of this type of approach is that the entity object takes the text and locale and this (somehow!) ends up in the translatable strings table. In the first link I gave, the guy is attempting to do this by using an embedded map, which I feel is the right approach. His way though has two issues - one, it doesn't seem to work! And two, if it did work, it would be storing the FK in the embedded table instead of the other way round (I think; I can't get it to run, so I can't see exactly how it persists). I suspect the correct approach ends up with a map reference in place of each text that needs translating (the map being locale-content), but I can't see how to do this in a way that allows for multiple maps in one entity (without having corresponding multiple columns in the translatable strings table)...

    Read the article

  • Do you use the logical negation operator (!) in an "if" statement, or check against "== false"?

    - by Taras Terebkov
    Hello everyone, I just want to conduct a little survey about the code style developers prefer. For me there are two ways to write an "if" in languages such as Java, C#, C++, etc. (1) Logical negation operator: public void foo() { if (!SessionManager.getInstance().hasActiveSession()) { . . . . . } } (2) Check against "false": public void foo() { if (SessionManager.getInstance().hasActiveSession() == false) { . . . . . } } I have always believed that the first way is much worse than the second one, because usually you don't "read" the code, but "recognize" it in one brief look, and the exclamation symbol slips from your mind, just lingering somewhere at the bottom of your unconscious. Only while reading the "if" block below it do you understand that the logic is the opposite - no sessions in the "if". On the other hand, with the second way of writing, the eye immediately catches the words "SessionManager", "hasActiveSession" and "false". For me, the situation with "true" is different. In code like: class SessionManager { private bool hasSession; public void foo() { if (hasSession == true) { . . . . . } else { . . . . . } } } I find "true" superfluous - why repeat the sentence twice? The following is shorter and quicker to catch: class SessionManager { private bool hasSession; public void foo() { if (hasSession) { . . . . . } else { . . . . . } } } What do YOU think, guys?
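
    A third option sidesteps the survey entirely: name the condition first, so neither the "!" nor the "== false" has to be spotted while skimming.

        // Naming the negated condition makes the intent readable at a glance.
        public void foo() {
            bool noActiveSession = !SessionManager.getInstance().hasActiveSession();
            if (noActiveSession) {
                // . . . . .
            }
        }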

    Read the article

  • Intersect() and Except() are too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process imports data from a remote DB into a List<DataModel> named remoteData, and also imports data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different, so that I can update the local DB to match the data pulled from the remote DB. Like this: var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList(); I am then using LINQ to create a list of records that no longer exist in remoteData, but do exist in localData, so that I can delete them from the local database. Like this: var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList(); I am then using LINQ to do the opposite of the above to add the new data to the local database. Like this: var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList(); Each collection imports about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using: internal class DataModel { public string Key1{ get; set; } public string Key2{ get; set; } public string Value1{ get; set; } public string Value2{ get; set; } public byte? Value3{ get; set; } } The comparer used to check for outdated records: class OutdatedDataComparer : IEqualityComparer<DataModel> { public bool Equals(DataModel x, DataModel y) { var e = string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2) && ( !string.Equals(x.Value1, y.Value1) || !string.Equals(x.Value2, y.Value2) || x.Value3 != y.Value3 ); return e; } public int GetHashCode(DataModel obj) { return 0; } } The comparer used to find old and new records: internal class MatchingDataComparer : IEqualityComparer<DataModel> { public bool Equals(DataModel x, DataModel y) { return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2); } public int GetHashCode(DataModel obj) { return 0; } }
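
    For what it's worth, a sketch of one likely culprit: a GetHashCode that always returns 0 puts all 70k items into a single hash bucket, so Intersect and Except degrade from hash lookups to pairwise Equals calls. Hashing the same key fields that Equals compares (shown here for MatchingDataComparer) keeps the equality contract intact while restoring the bucketing.

        // Sketch: hash the key fields that Equals compares, instead of
        // returning a constant, so LINQ's internal set can bucket items.
        using System.Collections.Generic;

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj)
            {
                unchecked
                {
                    int hash = 17;
                    hash = hash * 31 + (obj.Key1 ?? "").GetHashCode();
                    hash = hash * 31 + (obj.Key2 ?? "").GetHashCode();
                    return hash;
                }
            }
        }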

    Read the article

  • SCOM 2012 DNS Forwarder Availability Monitor

    - by Massimo
    Background: I have an environment with two different AD domains, each in its own forest, each with two Windows Server 2008 R2 domain controllers acting as DNS servers. There is no trust between the domains. Each DNS server manages the main DNS zone for its AD domain, and then some other zones, including the reverse lookup zone for its IP subnets; all zones are AD-integrated; all DNS servers which manages a zone are correctly listed as authoritative name servers for that zone. So, the situation is like this (using fake names and IP addresses): Domain A: DNS domain: a.dom IP subnet: 192.168.1.X DC/DNS Servers: serverA1.a.dom (192.168.1.1) and serverA2.a.dom (192.168.1.2) Authoritative zones: a.dom, 1.168.192.in-addr.arpa, somezone.local Domain B: DNS domain: b.dom IP subnet: 10.0.0.X DC/DNS Servers: serverB1.b.dom (10.0.0.1) and serverB2.b.dom (10.0.0.2) Authoritative zones: b.dom, 0.0.10.in-addr.arpa, someotherzone.local DNS servers in domain A have conditional forwarders defined for each zone managed by DNS servers in domain B, forwarding to both domain B's DNS servers; DNS servers in domain B have the opposite configuration. All forwarders are stored in Active Directory. All is working perfectly, and computers in each domain can resolve forward and reverse DNS queries for both domains, using their domain's DNS servers. The problem: I have SCOM 2012 deployed in domain A, with the SCOM agent installed on both DCs; the management packs for Active Directory and DNS Server are installed and up-to-date. I have a series of alerts like the following ones on both domain controllers; each alert is generated for each forwarded zone and for each forwarded server: Forwarder someotherzone.local (10.0.0.1) cannot resolve the host name 192.168.1.1,someotherzone.local for serverA1.a.dom Forwarder someotherzone.local (10.0.0.2) cannot resolve the host name 192.168.1.1,someotherzone.local for serverA1.a.dom Forwarder someotherzone.local (10.0.0.1) cannot resolve the host name 192.168.1.2,someotherzone.local for serverA2.a.dom Forwarder someotherzone.local (10.0.0.2) cannot resolve the host name 192.168.1.2,someotherzone.local for serverA2.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.1) cannot resolve the host name 192.168.1.1,0.0.10.in-addr.arpa for serverA1.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.2) cannot resolve the host name 192.168.1.1,0.0.10.in-addr.arpa for serverA1.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.1) cannot resolve the host name 192.168.1.2,0.0.10.in-addr.arpa for serverA2.a.dom Forwarder 0.0.10.in-addr.arpa (10.0.0.2) cannot resolve the host name 192.168.1.2,0.0.10.in-addr.arpa for serverA2.a.dom The only exception is the main AD DNS zone managed by domain B's DNS servers (b.dom): for that conditional forwarder, no alert is generated and the forwarder availability monitor is green. Ok, what does this mean? What are those monitors trying to tell me? What are they checking? What's actually wrong? And why there is no error for the "b.dom" zone, which is configured in the exact same way as the other ones, both as a zone in domain B's DNS servers and as a forwarder in domain A's DNS servers?

    Read the article

  • Apache mod_rewrite and ProxyPass?

    - by Anon
    I have two websites: OLDSITE and NEWSITE. OLDSITE has 120 IP addresses associated with it, and NEWSITE has 5. I want to be able to separate everything from OLDSITE and NEWSITE so they are not tied together, but run them both on the same Linux computer. My current Apache setup is this: Listen 80 NameVirtualHost * <VirtualHost *> ServerName oldsite.com ServerAdmin [email protected] DocumentRoot /var/www/ <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> RewriteEngine on RewriteCond %{HTTP_HOST} ^([^.]+)\.oldsite\.com$ RewriteCond /home/%1/ -d RewriteRule ^(.+) %{HTTP_HOST}$1 RewriteRule ^([^.]+)\.oldsite\.com/media/(.*) /home/$1/dir/media/$2 RewriteRule ^([^.]+)\.oldsite\.com/(.*) /home/$1/www/$2 </VirtualHost> <VirtualHost newsite.com> ServerName newsite.com ServerAdmin [email protected] DocumentRoot /var/newsite/ <Directory /var/newsite/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> RewriteEngine on RewriteCond %{HTTP_HOST} ^([^.]+)\.newsite\.com$ RewriteCond /home/%1/ -d RewriteRule ^(.+) %{HTTP_HOST}$1 RewriteRule ^([^.]+)\.newsite\.com/media/(.*) /home/$1/dir/media/$2 RewriteRule ^([^.]+)\.newsite\.com/(.*) /home/$1/www/$2 </VirtualHost> <VirtualHost *> ServerName panel.oldsite.com ProxyPass / http://panel.oldsite.com:10000/ ProxyPassReverse / http://panel.oldsite.com:10000/ <Proxy *> allow from all </Proxy> </VirtualHost> <VirtualHost *> ServerName panel.newsite.com ProxyPass / http://panel.newsite.com:10000/ ProxyPassReverse / http://panel.newsite.com:10000/ <Proxy *> allow from all </Proxy> </VirtualHost> I want to be able to access anything that is newsite.com and have it go to /var/newsite unless there is a home directory... and then if it's panel.newsite.com I want it to automatically do a ProxyPass to panel.newsite.com:10000... With this setup, it works perfectly for oldsite.com... both the proxy and the webpages. However, having the VirtualHost set to newsite.com renders the ProxyPass worthless. If I change the VirtualHost for newsite.com to a wildcard, the ProxyPass will work, but anything that's a subdomain of newsite.com won't: newsite.com will work, but www.newsite.com will not load correctly. I am assuming that when everything is wildcarded, the ServerName somewhat acts like a RewriteCond and just applies the rules to that URL. It uses the VirtualHost * (oldsite.com) and lets ANYTHING.oldsite.com work, but for the second VirtualHost * (newsite.com) only newsite.com will work... www.newsite.com will not. If I change the order of them, the opposite is true. So apparently it doesn't like me using 2 wildcards... I tried just making the ServerName *.newsite.com ....... but that would be too easy. I am not sure what I can do to get what I want. Perhaps I should include the ProxyPass in the VirtualHost and use something like: RewriteCond %{HTTP_HOST} ^panel\.newsite\.com$ [NC] RewriteRule ^(.*)$ http://panel.newsite.com:10000/ [P] ProxyPassReverse / http://panel.newsite.com:10000/ but that doesn't seem to want to log in to Webmin; it loads the login page but doesn't work the way the ProxyPass & ProxyPassReverse do.

    Read the article

  • Mac Terminal.app: Force '^C' to be printed when editing current prompt, then aborting it

    - by Stefan Lasiewski
    This is the opposite of Prevent “^C” from being printed when aborting editing current prompt. I'm using Bash. When I'm editing the commandline in Bash, and I hit Control-C to abort the commandline, the '^C' character does not display. I would like to see this character. I tried commands like stty -ctlecho and stty ctlecho (which I borrowed from the other question), but this didn't work for me. This behavior seems to be true with my environment on Ubuntu, CentOS and MacOSX. This only happens within Apple's Terminal.App. If I SSH to a remote Linux or FreeBSD box, then ^C is printed. So, this is clearly just a local setting. Update: Here is the output of stty -a, as requested by @quack quixote : $ stty -a speed 9600 baud; 41 rows; 88 columns; lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel iutf8 -ignbrk brkint -inpck -ignpar -parmrk oflags: opost onlcr -oxtabs -onocr -onlret cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>; eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T; stop = ^S; susp = ^Z; time = 0; werase = ^W; After typing stty sane, stty -a will output the following. The only difference is the parameter of -iutf8. $ stty sane $ stty -a speed 9600 baud; 41 rows; 157 columns; lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk oflags: opost onlcr -oxtabs -onocr -onlret cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>; eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V; min = 1; quit = ^\; reprint = ^R; start = ^Q; status = ^T; stop = ^S; susp = ^Z; time = 0; werase = ^W;

    Read the article

  • Asus P8P67 Rev. 3.1 Motherboard issues powering on and saving settings

    - by Scott
    Edit: new information. I have some updated information from the old question below: So basically my issue right now is somewhat similar, but I've been able to rule out a couple of things. I don't think this has anything to do with the lights on the motherboard. No matter which lights are on or off on the motherboard when the computer is off, they don't affect this issue. The main power LED on the mobo is always lit when the power supply is turned on, and that's what matters anyway. Even when the main power LED is on, the PC will NOT boot up the first time I hit the power switch. I have to go reset the power supply (make all lights turn off on the mobo and back on), and THEN hit the power switch. Then everything boots up. Also, the BIOS settings are reset every time this happens. Asus Tech Support told me to try jumping the power with something metal to rule out whether it's a problem with the connectors getting power, or a problem with the case power switch pins - I haven't done that yet though. Any ideas? This is a lot simpler than it was before, when I thought it had to do with certain LED indicators for RAM, EPU, etc. Original question: So I built my new desktop just about 3 weeks ago. I've been having a few issues which I think are all related to my motherboard, an Asus P8P67 Revision 3.1, but I'm not 100% sure, as this is really the first from-scratch build I've ever done. I've posted these questions on the Asus forums, to Asus Tech Support, and on the Corsair forums as well, as I thought it might have something to do with my power supply at one point. None of these avenues has completely solved my issue so far, so I thought I'd come here to see what you guys think. Here's what's happening: My computer is off, and I go to power it on. I press the power switch on the case (Antec Nine Hundred), and nothing seems to happen. Upon further inspection, I see that what this actually does is simply turn on the EPU LED on my motherboard, but it doesn't actually boot anything up. I then have to go and flip the main power switch on the power supply off and back on. What this does is turn off all lights on the motherboard after a few seconds, and then turn them all back on (including the EPU LED that was off before I hit the power switch the first time). Now, hitting the power switch works. The machine boots up fine and starts going through the boot-up process. As a side note: my motherboard is set to "Force BIOS", and every single time I change this to do the opposite, the next time my computer boots up that change reverts itself. I think this may be due to the fact that I am doing the hard reset on the power supply each time, but I'm not sure. I had thought that the motherboard would keep its BIOS settings unless you did something to the mobo itself - so this may be a related issue, or something else completely. That's basically it. Once it's on, it's on. It works fine, recognizes all of my hardware, and runs great. All fans/lights in the case work great, and I'm getting standard readings. The next time I go to shut the computer down, however, I can expect the same exact process to get it up and running, including being forced to go into BIOS and exit again before I can load Windows. Another side note: if I power on my computer using the power switch DIRECTLY after shutting it down, it powers right back on (I think this is because the EPU LED light doesn't have time to turn off).
It looks as if as long as the EPU LED is lit up on the motherboard before I hit the power switch on the case, the thing will boot up fine (although this doesn't explain the "Force BIOS" issue, at least it's something). Any ideas? Thanks guys. P.S. - System Specs Asus P8P67 Rev. 3.1 Motherboard Intel Core i7 2600K Processor 16GB (4x4GB) G-Skill 1600 RAM NVIDIA EVGA GTX 570 Video Card Crucial 128GB SSD HD Corsair 850W Power Supply Seagate 2TB HDD

    Read the article

  • Linux RAID 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
    We have an old RAID 1 Linux server (Ubuntu Lucid 10.04) with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda had ominous pre-failure SMART signs (~4000 reallocated sector count). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array Everything went smoothly until the very end. When it looked like it was finishing synchronizing the last partition, the other old drive failed. At this point I am very unsure of the state of the system. Everything seems to work and the files all seem to be accessible, just as if it synchronized everything, but I'm new to RAID and I'm worried about what is going on. The /proc/mdstat output is: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sdb4[2](S) sda4[0] 478713792 blocks [2/1] [U_] md2 : active raid1 sdb3[1] sda3[2](F) 244140992 blocks [2/1] [_U] md1 : active raid1 sdb2[1] sda2[2](F) 244140992 blocks [2/1] [_U] md0 : active raid1 sdb1[1] sda1[2](F) 9764800 blocks [2/1] [_U] unused devices: <none> The order of [_U] vs [U_]: why aren't they consistent across the array? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial information but found no explicit indication.) If I read correctly, for md0, [_U] should be /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3? I understand /dev/sdb4 is now a spare because it probably failed to synchronize 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive apparently cannot even be accessed by SMART anymore, so I wouldn't expect it to be up. What is wrong with my interpretation of the output?
I attach also the outputs of mdadm --detail for the four partitions: /dev/md0: Version : 00.90 Creation Time : Fri Jan 21 18:43:07 2011 Raid Level : raid1 Array Size : 9764800 (9.31 GiB 10.00 GB) Used Dev Size : 9764800 (9.31 GiB 10.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:27:33 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2 Events : 0.7704 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 17 1 active sync /dev/sdb1 2 8 1 - faulty spare /dev/sda1 /dev/md1: Version : 00.90 Creation Time : Fri Jan 21 18:43:15 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:39:06 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 8bcd5765:90dc93d5:cc70849c:224ced45 Events : 0.1508280 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 18 1 active sync /dev/sdb2 2 8 2 - faulty spare /dev/sda2 /dev/md2: Version : 00.90 Creation Time : Fri Jan 21 18:43:19 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:46:44 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 2885668b:881cafed:b8275ae8:16bc7171 Events : 0.2289636 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 19 1 active sync /dev/sdb3 2 8 3 - faulty spare /dev/sda3 /dev/md3: Version : 00.90 Creation Time : Fri Jan 21 18:43:22 2011 Raid Level : raid1 Array Size : 478713792 (456.54 GiB 490.20 GB) Used Dev Size : 478713792 (456.54 GiB 490.20 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:19:20 2013 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 0 0 1 removed 2 8 20 - spare /dev/sdb4 The active sync on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure what should I sync with what and what is going on. I am also quite baffled by the fact /dev/sda decided to fail exactly when the raid finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo

    Read the article

  • Best of Breed vs. Suite – Oracle’s SaaS Delivers Both

    - by yaldahhakim
    The debate of which is better: "best of breed" business applications vs. an integrated suite is certainly not a new conversation. This has been argued between IT vendors and CIOs for years. It's also important to clarify that "best of breed" does not necessarily translate into being the richest functionality; rather it's often about just having the best fit solution to solve a specific business problem or need. So what does cloud have to do with the niche vs. suite debate? Consuming business applications in a cloud or SaaS deployment model can change the best of breed vs. suite discussion - if the cloud is done right. It's having your cake and eating it too, only better: you don't have to gather all the ingredients or wait to bake your cake, and you can adjust how big a slice you take. Before you eat, it's worth pausing to recall much of what we learned about IT over the last decade. These basic IT principles still hold true even though the financial model has changed from buying to renting. In other words, what's under the technology hood still matters. Architecture and development methodologies - like building an application based on open standards so it works with other systems - are still important. Data and information silos, complex integrations, and proprietary technologies that lock you in are still bad. While some may argue that IT no longer matters with cloud, the opposite is actually true. If anything, cloud can help return IT back to its rightful place as a key strategic asset vs. a liability on the balance sheet. The "I" in CIO was never meant to stand for "integration", yet it's amazing how much time and money is poured into these types of initiatives for most organizations each year. Rather, the "I" needs to stand for "innovation". This is where Oracle SaaS can uniquely help. Oracle's application strategy has not really changed over the years. It's always been about bringing the best and richest functionality across the enterprise to our customers while leveraging a common, standards-based, and enterprise-grade platform. So not just best fit, but the best capabilities based on the input of thousands of enterprise customers across the globe. Oracle invests billions in R&D every year to add new capabilities to the broadest cloud portfolio in the industry, spanning across functional pillars like CRM, HCM, ERP, etc. And where it makes sense, Oracle combines key strategic acquisitions to complement organic functionality. The result is best of breed delivered in a suite. Again, this is not something new. The game changer now with cloud is that it impacts HOW Oracle customers adopt the richest, most modern applications across the business - and keep getting them.
    Consuming Oracle applications in the cloud means you can adopt new capabilities and updates very quickly and easily. There's no hardware to buy or software to manage. Oracle does it for you. Low upfront costs and an OpEx financial model are the easy part. Oracle Cloud Applications take it a big step further. For organizations that demand having the latest and richest functionality and accelerating the time to value from their IT investment, Oracle Cloud is the right path. It's about holistically changing the "hows" and the "whys" of the organization by leveraging transformational innovations like social, mobile, and big data in a consistent and more powerful way. Not just about sales force automation or talent management. These technologies should impact all parts of the company, and Oracle Cloud is the enterprise-grade delivery vehicle. Oracle SaaS helps break down barriers of adoption and eases the headache of upgrades, of investing in new supporting hardware, and of adding internal expertise to manage it all. With Oracle Cloud, customers can get best of breed capabilities in either a full suite model or a la carte. And because it's entirely built on open standards, it's built to co-exist with existing IT investments. Updates can be automatic or delayed based on a customer's requirements. And it's complete - a full suite of cross-pillar functionality. Even better, if you don't like it, or need more or less, just turn the dial up or down. Just like your utility bill, you pay for what you use, and can consume more or less power whenever you need it. Lower cost and lower investment risk, without compromising on functionality, security, or performance. Technology still matters in the cloud. So our cloud customers also like that when they adopt our cloud applications, they get the best underlying technology, from the middleware and database platform down to infrastructure and Oracle's engineered systems. Therefore it's not just the latest and greatest in application functionality; everything underneath that makes it work is also the latest and greatest. The best of breed technology stack powering best of breed business applications, all delivered in a subscription-based model. The best of both worlds. Yep, that's the idea.

    Read the article

  • My Thoughts On the Xbox 180

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft's recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket. First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One. Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn't cancel my pre-order. There is still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric. I do not have any insider information that you do not possess. The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch,) and then jumped back into Ryse. That's gone, if you bought the game on disc. The new, DRM free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located. That's gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing. Well, "the people" didn't want there to be a forced, persistent connection. As such, developers can't rely on a connection and, as such, that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I wish. All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flack.) There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft. I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent for Sony's E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft's plans. It is not farfetched to believe that slide came into existence during the approximately seven hours between the two media briefings. The thing that majorly annoys me over this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money grubbing scheme? 
    Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn't last, and most publishers have stopped the practice. The ability to sell "licenses" has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it's only a matter of time before other areas follow suit. If GameStop were smart, they would have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then they could keep their business model and reduce their brick-and-mortar footprint. The digital landscape is changing. We need to not block this process. As Seth MacFarlane best said, "Some issues are so important that you should drag people kicking and screaming." I believe this was said on an episode of Real Time with Bill Maher about the issue of gay marriage. Much like the original source, this is an issue where we need to drag people to the correct, progressive position. Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first and second party developers that can leverage this new framework to prove its validity. Over time, third party developers will get excited to use these tools. As an old C++ guy, I resisted C# for years. Now, I think it's one of the best languages I've ever used. I have a server room and a Co-Lo full of servers, so I originally didn't see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products. I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward-looking people that help show how technology can improve people's lives. If we can't see the value of the brief pain involved with an exciting new ecosystem, then who will?

    Read the article

  • Can Microsoft Build Appliances?

    - by andrewbrust
    Billy Hollis, my Visual Studio Live! colleague and fellow Microsoft Regional Director, said recently, and I am paraphrasing, that the computing world, especially on the consumer side, has shifted from building hardware and software that make things possible to do, to building products and technologies that make things easy to do. Billy crystallized things perfectly, as he often does.

In this new world of "easy to do," Apple has done very well and Microsoft has struggled. In the old world, customers wanted a Swiss Army knife, with the most gimmicks and gadgets possible. In the new world, people want elegant cutlery. They may want cake cutters and utility knives too, but they don't want one device that works for all three tasks. People don't want tools, they want utensils. People don't want machines. They want appliances.

Microsoft Appliances: They Do Exist

Microsoft has built a few appliance-like devices. I would say Xbox 360 is an appliance. It's versatile, mind you, but it's the kind of thing you plug in, turn on and use, as opposed to set up, tune, and open up to upgrade the internals. Windows Phone 7 is an appliance too. It's a true smartphone, unlike Windows Mobile, which was a handheld computer with a radio stack. Zune is an appliance too, and a nice one. It hasn't attained much traction in the market, but that's probably because the seminal consumer computing appliance - the iPod - got there so much more quickly.

In the embedded world, Mediaroom, Microsoft's set-top product for the cable industry (used by AT&T U-Verse and others), is an appliance. So is Microsoft's Sync technology, used in Ford automobiles. Even on the enterprise side, Microsoft has an appliance: SQL Server Parallel Data Warehouse Edition (PDW), which combines Microsoft software with select OEMs' server, networking and storage hardware. You buy the appliance units from the OEMs, plug them in, connect them and go.

I would even say that Bing is an appliance. Not in the hardware sense, mind you. But from the software perspective, it's a single-purpose product that you visit or run, use and then move on. You don't have to install it (except for the iOS and Android native apps, where it's pretty straightforward), you don't have to customize it, you don't have to program it. Basically, you just use it.

Microsoft Appliances that Should Exist

But Microsoft builds a bunch of things that are not appliances. Media Center is not an appliance, and it most certainly should be. Instead, it's an app that runs on Windows 7. It runs full-screen, and you can use this configuration to conceal the fact that Windows is under it, but eventually something will cause you to abandon that masquerade (like Patch Tuesday).

The next version of Windows Home Server won't, in my opinion, be an appliance either. Now that the Drive Extender technology is gone, and users can't just add and remove drives into and from a single storage pool, the product is much more like an IT server and less like an appliance-premised one. Much has been written about this decision by Microsoft. I'll just sum it up in one word: pity.

Microsoft doesn't have anything remotely appliance-like in the tablet category, either. Until it does, it likely won't have much market share in that space either. And of course, the bulk of Microsoft's product catalog on the business side is geared to enterprise machines and not personal appliances.

Appliance DNA: They Gotta Have It

The consumerization of IT is real, because businesspeople are consumers too. They appreciate the fit and finish of appliances at home, and they increasingly feel entitled to have it at work too. Secure and reliable push email on a smartphone is necessary, but it isn't enough. People want great apps and a pleasurable user experience too. The full Microsoft Office product is needed at work, but a PC with a keyboard and mouse, or maybe a touch screen that uses a stylus (or requires really small fingers), to run Office isn't enough either. People want a flawless touch experience available for the times they want to read and take quick notes. Until Microsoft realizes this fully and internalizes it, it will suffer defeats in the consumer market and even setbacks in the business market. Think about how slow the Office upgrade cycle is... now imagine if the next version of Office had a first-class alternate touch UI, and consider the possible acceleration in adoption rates.

Can Microsoft make the appliance switch? Can the appliance mentality become pervasive at the company? Can Microsoft hasten its release cycles dramatically and shed the "some assembly required" paradigm upon which many of its products are based? Let's face it, the chances that Microsoft won't make this transition are significant. But there are also encouraging signs, and they should not be ignored. The appliances we have already discussed, especially Xbox, Zune and Windows Phone 7, are the most obvious in this regard. The fact that SQL Server has an appliance SKU now is a more subtle but perhaps also more significant outcome, because that product sits so smack in the middle of Microsoft's enterprise stack. Bing is encouraging too, especially given its integrated travel, maps and augmented reality capabilities. As Bing gains market share, Microsoft has tangible proof that it can transform and win, even when everyone outside the company, and many within it, would bet otherwise.

That Great Big Appliance in the Sky

Perhaps the most promising (and evolving) proof points toward the appliance mentality, though, are Microsoft's cloud offerings - Azure and BPOS/Office 365. While the cloud does not represent a physical appliance (quite the opposite, in fact), its ability to make acquisition, deployment and use of technology simple for the user is absolutely an embodiment of the appliance mentality and spirit. Azure is primarily a platform-as-a-service offering; it doesn't just provide infrastructure. SQL Azure does likewise for databases. And Office 365 does likewise for SharePoint, Exchange and Lync. You don't administer, tune and manage servers; instead, you create databases or site collections or mailboxes and start using them. Upgrades come automatically, and it seems like releases will come more frequently. Fault tolerance and content distribution are just there. No muss. No fuss. You use these services; you don't have to set them up and think about them. That's how appliances work.

To me, these signs point out that Microsoft has the full capability of transforming itself. But there's a lot of work ahead. Microsoft may say they're "all in" on the cloud, but the majority of the company is still oriented around its old products and models. There needs to be a wholesale cultural transformation in Redmond. It can happen, but product management, program management, the field and the executive ranks must unify in the effort. So must partners, and even customers. New leaders must rise up, and Microsoft must be able to see itself as a winner. If Microsoft does this, it could lock in decades of new success, and become a standard business school case study for doing so. If not, the company will have missed an opportunity, and may see its undoing.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

2007

UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression: The UDF used in the blog does a fantastic task – it scans the entire HTML text and removes all the HTML tags, keeping only valid text data. This is one of the most commonly requested tasks developers face every day.

De-fragmentation of Database at Operating System to Improve Performance: The operating system skips the MDF file while defragmenting the filesystem. This is absolutely fine, and it has no impact on performance. Read the entire blog post for my conversation with our network engineers.

Delay Function – WAITFOR clause – Delay Execution of Commands: How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script.

Find Length of Text Field: To measure the length of TEXT fields, the function is DATALENGTH(textfield). LEN will not work for TEXT fields. As of SQL Server 2005, developers should migrate all TEXT fields to VARCHAR(MAX), as that is the way forward.

Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}: There are three ways to retrieve the current datetime in SQL Server: CURRENT_TIMESTAMP, GETDATE(), and {fn NOW()}.

Explanation and Comparison of NULLIF and ISNULL: An interesting observation is that NULLIF returns NULL if its comparison is successful, whereas ISNULL returns a non-NULL value if its comparison is successful. In one way, they are opposites of each other. Here is my question to you: how would you create an infinite loop using NULLIF and ISNULL – if that is even possible?

2008

Introduction to SERVERPROPERTY and example: SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like server collation, server name, etc.

SQL Server Start Time: We can use a DMV to find out the start time of SQL Server in 2008 and later versions. In this blog post you can see how to do the same.

Find Current Identity of Table: Many times we need to know what the current identity of a column is. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity.

Create Check Constraint on Column: Sometimes we just need to create a simple constraint on a table, but I have noticed developers doing many different things to make a table column follow rules other than just creating a constraint. Constraints are a very useful concept, and every SQL developer should pay good attention to this subject.

2009

List Schema Name and Table Name for Database: This is one of those blog posts where I simply display the script – the kind of post I still love to read and write.

Clustered Index on Separate Drive From Table Location: A table devoid of a clustered index is called a heap; its data is not arranged in any particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. If we put a clustered index on it, the order will be enforced by that index, and the data will be stored in that particular order.

Understanding Table Hints with Examples: Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query.

2010

Data Pages in Buffer Pool – Data Stored in Memory Cache: One of my earlier articles, which I still read many times and point developers to. It is clear from the resultset that when more than one index is used, the data pages related to each of the indexes are stored in the memory cache separately.

TRANSACTION, DML and Schema Locks: Can you create a situation where you can see a schema lock? This is a very simple question; however, during interviews I have noticed over 50 candidates fail to come up with the scenario. In this blog post, I demonstrate a situation where we can see the schema lock in a database.

2011

Solution – Puzzle – Statistics are not updated but are Created Once: In this example I created the following situation: create a table, insert 1,000 records, and check the statistics; now insert 10 times more records (10,000) and check the statistics – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics are TRUE for the database. I then asked two things in the example: 1) Why is this happening? 2) How do we fix this issue?

Selecting Domain from Email Address: This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address.

Solution – Generating Zero Without using Any Numbers in T-SQL: How do you get the digit zero without using any digits? This is indeed a very interesting question, and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read the post for the solution.

2012

Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function: In simple words, SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value: the number of characters in the SOUNDEX values that are the same.

Read Only Files and SQL Server Management Studio (SSMS): I have come across a very interesting feature in SSMS related to "read only" files. I believe it is a little-known feature as well, so I decided to write a blog post about it.

Identifying Column Data Type of uniqueidentifier without Querying System Tables: How do I know whether a table has a uniqueidentifier column, and what its value is, without using any DMVs or system catalogs? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer.

Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue: An interesting question – "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user-created objects, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user-created tables and other objects. Can you help me fix it?"

Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video: Here is an interesting, short 60-second video on how to import a CSV file into a database.

ColumnStore Index – Batch Mode vs Row Mode: Here is the logic behind when a columnstore index uses batch mode and when it uses row mode. A batch typically represents about 1,000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.

Follow up – Usage of $rowguid and $IDENTITY: This is a follow-up to my earlier blog post where I explain where to use $rowguid and $IDENTITY. If you do not know the difference between them, this is a blog post with a script example.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Quaternion Camera Orbiting around a Sphere

    - by jessejuicer
    Background: I'm trying to create a game where the camera is always rotating around a single sphere. I'm using the DirectX D3DX math functions in C++ on Windows.

The Problem: I cannot get the camera position and orientation both working properly at the same time. Either one works, but not both together. Here's the code for my quaternion camera that revolves around a sphere, always looking at the centerpoint of the sphere, as far as I understand it (but which isn't working properly). (I'm only going to present rotation around the X axis here, to simplify this post.)

Whenever the UP key is pressed or held down, the camera should rotate around the X axis while looking at the centerpoint of the sphere (which is at 0,0,0 in the world). So I build a quaternion that represents a small angle of rotation around the X axis like this (where 'deltaAngle' is a small enough number for a slow rotation):

    D3DXVECTOR3 rotAxis;
    D3DXQUATERNION tempQuat;
    tempQuat.x = 0.0f;
    tempQuat.y = 0.0f;
    tempQuat.z = 0.0f;
    tempQuat.w = 1.0f;
    rotAxis.x = 1.0f;
    rotAxis.y = 0.0f;
    rotAxis.z = 0.0f;
    D3DXQuaternionRotationAxis(&tempQuat, &rotAxis, deltaAngle);

...and I accumulate the result into the camera's current orientation quat, like this:

    D3DXQuaternionMultiply(&cameraOrientationQuat, &cameraOrientationQuat, &tempQuat);

...which all works fine. Now I need to build a view matrix to pass to the DirectX SetTransform function. So I build a rotation matrix from the camera orientation quat as follows:

    D3DXMATRIXA16 rotationMatrix;
    D3DXMatrixIdentity(&rotationMatrix);
    D3DXMatrixRotationQuaternion(&rotationMatrix, &cameraOrientationQuat);

...Now (as seen below) if I just transpose that rotationMatrix and plug it into the 3x3 section of the view matrix, then negate the camera's position and plug it into the translation section of the view matrix, the rotation magically works. Perfectly. (Even when I add in rotations for all three axes.) There's no gimbal lock, just a smooth rotation all around in any direction. BUT - this works even though I never change the camera's position. At all. Which sorta blows my mind. I even display the camera position and can watch it stay constant at its starting point (0.0, 0.0, -4000.0). It never moves, but the rotation around the sphere is perfect. I don't understand that. For proper view rotation, the camera position should be revolving around the sphere.

Here's the rest of building the view matrix (I'll talk about the commented code below). Note that the camera starts out at (0.0, 0.0, -4000.0) and m_camDistToTarget is 4000.0:

    /*
    D3DXVECTOR3 vec1;
    D3DXVECTOR4 vec2;
    vec1.x = 0.0f;
    vec1.y = 0.0f;
    vec1.z = -1.0f;
    D3DXVec3Transform(&vec2, &vec1, &rotationMatrix);
    g_cameraActor->pos.x = vec2.x * g_cameraActor->m_camDistToTarget;
    g_cameraActor->pos.y = vec2.y * g_cameraActor->m_camDistToTarget;
    g_cameraActor->pos.z = vec2.z * g_cameraActor->m_camDistToTarget;
    */
    D3DXMatrixTranspose(&g_viewMatrix, &rotationMatrix);
    g_viewMatrix._41 = -g_cameraActor->pos.x;
    g_viewMatrix._42 = -g_cameraActor->pos.y;
    g_viewMatrix._43 = -g_cameraActor->pos.z;
    g_viewMatrix._44 = 1.0f;
    g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );

...(The world matrix is always an identity, and the perspective projection works fine.)

...So, without the commented code being compiled, the rotation works fine. But to be proper, for obvious reasons, the camera position should be rotating around the sphere, which it currently is not. That's what the commented code is supposed to do. And when I add in that chunk of code, and look at all the data as I hold the keys down (using UP, DOWN, LEFT, RIGHT to rotate in different directions), all the values look correct! The camera position is rotating around the sphere just fine, and I can watch that happen visually too. The problem is that the camera orientation does not look at the center of the sphere. It always looks straight forward down the Z axis (toward positive Z) as it revolves around the sphere. Yet the values of both the rotation matrix and the view matrix seem to be behaving correctly. (The view matrix orientation is the same as the rotation matrix, just transposed.)

For instance, if I just hold down the key to spin around the X axis, I can watch the values of the three axes represented in the view matrix (X, Y, and Z axes)... the view X axis stays at (1.0, 0.0, 0.0), and the view Y axis and Z axis both spin around the X axis just fine. All the numbers are changing as they should be... well, almost. As far as I can tell, the position in the view matrix is spinning around the sphere in one direction (like clockwise), and the orientation (the axes in the view matrix) is spinning in the opposite direction (like counter-clockwise). Which I guess explains why the orientation appears to stay straight ahead. I know the position is correct. It revolves properly. It's the orientation that's wrong.

Can anyone see what I am doing wrong? Am I using these functions incorrectly? Or is my algorithm flawed? As usual, I've been combing my code for simple mistakes for many hours. I'm willing to post the actual code, and a video of the behavior, but that will take much more effort. Thought I'd ask this way first.
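A minimal sketch of one possible fix, assuming the bug is in the hand-built translation row: the view matrix is the inverse of the camera's world transform, so when the 3x3 block holds the transposed rotation, the translation row must be the negated position rotated by that transpose (-pos dotted with each camera axis), not simply -pos. Notably, while pos stays pinned at (0.0, 0.0, -4000.0), plain -pos happens to equal the correctly transformed translation, which would explain why the rotation "magically" works with a static position. Letting D3DXMatrixLookAtLH do the bookkeeping avoids the issue entirely (variable names are taken from the code above; the fix itself is an educated guess, not a confirmed answer):

    // Sketch of a possible fix (educated guess, not a confirmed answer).
    // Derive the eye position from the same orientation quaternion the view
    // matrix uses, then let D3DXMatrixLookAtLH build a consistent view matrix
    // aimed at the sphere's center.
    D3DXMATRIXA16 rotationMatrix;
    D3DXMatrixRotationQuaternion(&rotationMatrix, &cameraOrientationQuat);

    // Rotate the initial offset (0, 0, -distance) to get the camera position.
    D3DXVECTOR3 offset(0.0f, 0.0f, -g_cameraActor->m_camDistToTarget);
    D3DXVECTOR3 eye;
    D3DXVec3TransformCoord(&eye, &offset, &rotationMatrix);
    g_cameraActor->pos = eye; // keep the displayed position in sync (assuming pos is a D3DXVECTOR3)

    // Rotate the up vector by the same quaternion so roll is preserved.
    D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
    D3DXVECTOR3 upRotated;
    D3DXVec3TransformNormal(&upRotated, &up, &rotationMatrix);

    // Eye and orientation now come from a single source, so they cannot
    // revolve in opposite directions.
    D3DXVECTOR3 at(0.0f, 0.0f, 0.0f); // centerpoint of the sphere
    D3DXMatrixLookAtLH(&g_viewMatrix, &eye, &at, &upRotated);
    g_direct3DDevice9->SetTransform(D3DTS_VIEW, &g_viewMatrix);

If you would rather keep the hand-built matrix, the equivalent change is to set _41, _42, and _43 to the negated dot products of eye with the camera's right, up, and look vectors, instead of the raw -pos components.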

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential, and some half-done projects frozen in my GitHub. But due to social pressures, I chose a job. The pay is great, but I am only half-passionate about it: a small team of smart folks building a useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week altogether, not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it's happening. It's as if all the excitement has been drained. What can I do about it?

Long version:

School - I was in third standard. Only students from 6th grade up had access to the computer labs. I once peeked into the lab from the little door opening. No hard disks; MS-DOS on 5 1/4 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file I/O, game programming, machine-level support, etc. In 6th standard, I wrote my first game - a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. Got internet soon, and got hooked on the QuickBASIC programming community. Made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests then swayed into "hacking" (computer security). I taught myself some Perl, found it annoying, then learnt PHP and a bit of SQL. I also taught myself Visual Basic one winter and wrote a Pacman clone with DirectX. By the time I was in 10th standard, I had created some evil tools using Visual Basic, PHP and MySQL, and eventually landed an unpaid side job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so was I, still very excited. Things changed soon; the last two years of school were not so great, as I was balancing prep for college, work at the government facility, and studies for school at the same time.

College - College was the opposite of all I had wished it to be. I imagined it as a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers surrounding you, and shit like that. We had to take permission to even introduce some cultural/creative activities into our annual schedule. The labs wouldn't be open on weekends because the lab employees had to take their leave. Yes, a horrible place for someone like me. I still managed to pull off a project with a friend over 2 months. We showed it to people high in the academic hierarchy. They were immensely impressed, and we proposed allowing personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on. I still managed to teach myself new languages, do new projects of my own, do an internship at the same government facility, start a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at the most respected colleges, solve a good deal of programming contest problems, etc. At the same time I was not content with all the restrictions, the great emphasis on rote learning, and the sheer wastage of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out.

During my last days at college, I did an internship at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me, even a good friend, were just skipping time. I thought that maybe in a few weeks he would put some serious effort into the work assigned to him, but all he did was find creative ways to skip work, hide his face from the manager, and engage people in talk if they tried to question his progress. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. On top of that, where I come from, HR doesn't give much value to what you have done in the past 4 years. So towards the end of our internship, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly - "Is this how the corporate world treats someone who has done fruitful, if not extraordinary, work for them for the past 6 months?" Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to stay in touch with any characters of that depressing play.

Probably the busy time I had at college - ignoring friends, ignoring fun, and squeezing every bit of free time for myself - is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring; at the same time I wish to keep it for financial reasons. I feel a bit burnt out and unsatisfied, and at the same time I feel an urge to quit working for someone else and start finishing my frozen side projects (which may be profitable). Though I haven't got much in savings to support myself with food, office, internet bills, etc. I still have my day job, but I don't find it very interesting, and even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past self. I wonder how to get rid of this and reboot myself back to the way I was in school days - excited, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience - and how to publish the same information to both blogs. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices, and advice on this topic.

My two blogs

I have two working blogs: this one here, and a technology and programming blog for the local market. My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are major hits in my MVP career, and they have very good web statistics too.

My local blog

My local blog is about programming, the web, and technology. It has a far wider target audience than this blog here. For example, in my local blog I also write about local events, cool new concept phones, and different websites providing interesting services. But local readers can also find my postings there about how to solve one or another programming problem, and postings about the Microsoft technologies I am playing with. So far my local blog has a lot of readers for a country as small as Estonia. This blog has brought me a lot of cool contacts, and I have had a lot of interesting discussions there about different technical topics.

Why I started this blog?

Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to cover more than one technical topic to find enough readers. At the same time, you are still interested in your main topics, and you want to reach more people who share those interests with you. Practically, one day you will outgrow the local market and go global. This is how this blog was born. Was it worth creating, promoting, and messing with it? Every second I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.

Defining target audiences

One thing you should always do when you have more than one blog is define your target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends, and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process; you can do it the simple way just as effectively. Here is how I defined the target audiences for my blogs:

local blog - the reader of my local blog is an IT professional, software developer, technology innovator, or just someone who is interested in technology;
this blog - the reader of this blog is an experienced professional software developer who works with Microsoft technologies, or a developer who is open-minded and open to new technologies and interesting solutions to development problems.

You can see how my local blog - due to a small market with fewer people - has a wider audience definition, while this blog is heavily targeted at Microsoft technologies and especially software development. On the practical side these decisions have worked out well, I think, because it is very hard to build up a popular general IT blog. On the global level it is better to target a specific niche and find readers who are professionals in your favorite topics. Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.

Publishing content to different blogs

My local blog and this blog have some overlapping topics, like .NET, databases, and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish the same thing on my local blog too? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog as well. But if you have different audiences, then you may need to modify your posting before publishing it. The questions you have to answer are: Is the target audience interested in this topic? Is the target audience expecting a more specific and deeper handling of this topic, or a more general one? Is the problem you are discussing relevant for the target audience? You have to answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

Cross-posting and referencing

It is tempting to save the time that preparing a blog post takes, and when you are done posting on one blog it may seem like a good idea to make a short posting on the other blog with a reference to the first one, where the topic is discussed at greater length. Well, don't do it - all your readers expect good-quality content from you, and jumping from one blog post to another is disruptive for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific handling of a topic. In that case, feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. Estonian is a complex language, and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I keep these two blogs as two different worlds, and if there is a posting that fits both blogs well, I write it for one blog and then answer the previous three questions before posting the same thing to the other blog.

Conclusion

Growing out of your local market is not anything mysterious if you live in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high-quality content. It is something you will learn over time, and you will learn something new every day when you post to your global blog. You may ask: if a global blog is a much more complex thing to do, then is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time, you will get over all the obstacles pretty soon. Just don't forget one thing - content is king, and your readers expect high quality from you.

    Read the article

  • Computer Networks UNISA - Chap 10 &ndash; In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to: understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation; explain the differences between public and private TCP/IP networks; describe protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4; and employ multiple TCP/IP utilities for network discovery and troubleshooting.

Designing TCP/IP-Based Networks

The following sections explain how the network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments.

Subnetting

Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to enhance security, improve performance, or simplify troubleshooting.

The challenges of classful addressing in IPv4 (no subnetting)

The simplest type of IPv4 addressing is known as classful addressing (the Class A, Class B, and Class C network addresses). Classful addressing has the following limitations: a restriction on the number of usable IPv4 addresses (a Class C network would be limited to 254 addresses), and difficulty separating traffic from various parts of a network. Because of these limitations, subnetting was introduced.

IPv4 Subnet Masks

Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address. A 1 in a subnet mask indicates that the corresponding bit in the IPv4 address contains network information (likewise, a 0 indicates host information). Each network class is associated with a default subnet mask: Class A = 255.0.0.0, Class B = 255.255.0.0, Class C = 255.255.255.0.

An example of calculating the network ID for a particular device with a subnet mask is shown below (the sketch that follows this section shows the same arithmetic in code):

IP Address = 199.34.89.127
Subnet Mask = 255.255.255.0
Resultant Network ID = 199.34.89.0

IPv4 Subnetting Techniques

Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation.

Calculating IPv4 Subnets

Read pages 491-494 for an explanation. Important: subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can then be prioritized and segmented.

CIDR (Classless Interdomain Routing)

CIDR is also known as classless routing or supernetting. In CIDR, conventional network class distinctions do not exist; the subnet boundary can move to the left, thereby generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came a new shorthand for denoting the position of subnet boundaries, known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/), followed by the number of bits that are used for the extended network prefix. To take advantage of classless routing, your network's routers must be able to interpret IP addresses that don't adhere to conventional network class parameters. Routers that rely on older routing protocols (e.g., RIP) are not capable of interpreting classless IP addresses.
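To make the mask arithmetic concrete, here is a small worked example in C++ (my own illustrative sketch, not part of the textbook notes; the helpers ip and maskFromPrefix are hypothetical names). It derives the network ID by ANDing the address with its mask, and builds a mask from a CIDR prefix length:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical helper: pack four octets into a 32-bit IPv4 address.
    uint32_t ip(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
        return (uint32_t(a) << 24) | (uint32_t(b) << 16) | (uint32_t(c) << 8) | uint32_t(d);
    }

    // Build a subnet mask from a CIDR prefix length, e.g. 24 -> 255.255.255.0.
    uint32_t maskFromPrefix(int prefix) {
        return prefix == 0 ? 0u : 0xFFFFFFFFu << (32 - prefix);
    }

    // Print a 32-bit address in dotted-decimal form.
    void print(const char* label, uint32_t v) {
        std::printf("%s %u.%u.%u.%u\n", label,
                    (v >> 24) & 0xFFu, (v >> 16) & 0xFFu, (v >> 8) & 0xFFu, v & 0xFFu);
    }

    int main() {
        uint32_t addr = ip(199, 34, 89, 127); // the example address above
        int prefix = 24;                      // /24, i.e. 255.255.255.0
        uint32_t mask = maskFromPrefix(prefix);

        // 1 bits in the mask keep network information; 0 bits are host bits.
        print("network ID:", addr & mask);    // -> 199.34.89.0
        print("broadcast: ", addr | ~mask);   // all host bits set -> 199.34.89.255
        std::printf("usable hosts: %u\n", (1u << (32 - prefix)) - 2); // 254 for a /24
        return 0;
    }

Run against the example address above, it prints 199.34.89.0 for the network ID and 254 usable hosts; changing the prefix from 24 to 23 moves the subnet boundary one bit to the left, supernet-style, and doubles the host space.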
Internet Gateways

Gateways are a combination of software and hardware that enable two different network segments to exchange data. A gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP-based network has a default gateway - the gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets. The internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data's destination. The gateways that make up the internet backbone are called core gateways.

Address Translation

An organization's default gateway can also be used to "hide" the organization's internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restrictions. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the internet, they must have legitimate IP addresses to exchange data. When a client's transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client's private IP address with an Internet-recognized IP address. This process is known as NAT (Network Address Translation); a conceptual sketch of this bookkeeping follows at the end of this article.

TCP/IP Mail Services

All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below.

SMTP (Simple Mail Transfer Protocol)

The protocol responsible for moving messages from one mail server to another over TCP/IP-based networks. SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol. It operates on port 25 on the SMTP server. It is a simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue.

MIME (Multipurpose Internet Mail Extensions)

The standard message format specified by SMTP allows for lines that contain no more than 1000 ASCII characters, meaning that if you relied solely on SMTP you would have very short messages and nothing like pictures included in an email. MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message. MIME identifies each element of a mail message according to content type. MIME does not replace SMTP but works in conjunction with it. Most modern email clients and servers support MIME.

POP (Post Office Protocol)

POP is an application layer protocol used to retrieve messages from a mail server. POP3 relies on TCP and operates over port 110. With POP3, mail is delivered to and stored on a mail server until it is downloaded by a user. The disadvantage of POP3 is that it typically does not allow users to keep their messages on the server; because of this, IMAP is sometimes used instead.

IMAP (Internet Message Access Protocol)

IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3. The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server rather than having to continually download them. Users can retrieve all or only a portion of any mail message; they can review their messages and delete them while the messages remain on the server; they can create sophisticated methods of organizing messages on the server; and they can share a mailbox in a central location. The disadvantages of IMAP are typically related to the fact that it requires more storage space on the server.

Additional TCP/IP Utilities

Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP. The syntax may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities - research their use on your own!

Ipconfig (Windows) & Ifconfig (Linux)
Netstat
Nbtstat
Hostname, Host & Nslookup
Dig (Linux)
Whois (Linux)
Traceroute (Tracert)
Mtr (my traceroute)
Route
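Returning to the address translation section above, here is a conceptual C++ sketch of the bookkeeping a NAT gateway performs (an illustrative assumption of mine, not textbook material; real gateways rewrite live IP datagrams in the kernel, and every name and address here is hypothetical):

    #include <cstdint>
    #include <map>
    #include <utility>

    // Conceptual sketch only: NAT bookkeeping reduced to its essence.
    using Endpoint = std::pair<uint32_t, uint16_t>; // (IP address, port)

    struct NatGateway {
        uint32_t publicAddress = 0;
        uint16_t nextPort = 40000; // arbitrary start of the translation range
        std::map<Endpoint, uint16_t> outbound; // private endpoint -> public port
        std::map<uint16_t, Endpoint> inbound;  // public port -> private endpoint

        // Outbound packet: replace the private source with the public address,
        // remembering the mapping so replies can be routed back.
        Endpoint translateOut(Endpoint privateSrc) {
            auto it = outbound.find(privateSrc);
            if (it == outbound.end()) {
                it = outbound.emplace(privateSrc, nextPort++).first;
                inbound[it->second] = privateSrc;
            }
            return { publicAddress, it->second };
        }

        // Inbound reply: look up which private client the public port maps to.
        Endpoint translateIn(uint16_t publicPort) {
            return inbound.at(publicPort);
        }
    };

    int main() {
        NatGateway gw;
        gw.publicAddress = 0xCB007107u;              // 203.0.113.7, a documentation-range "public" IP
        Endpoint client{ 0xC0A8010Au, 5000 };        // 192.168.1.10:5000, a private client
        Endpoint seen = gw.translateOut(client);     // internet sees 203.0.113.7:40000
        Endpoint back = gw.translateIn(seen.second); // reply routed back to the client
        return back == client ? 0 : 1;
    }

Here a client at 192.168.1.10:5000 appears to the outside world as 203.0.113.7:40000, and a reply addressed to port 40000 is handed back to the private client - the replacement of private addresses with an Internet-recognized one described in the address translation section.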

    Read the article

< Previous Page | 25 26 27 28 29 30 31  | Next Page >