Search Results

Search found 23890 results on 956 pages for 'issue'.

  • Operation times out when trying to SSH from outside the LAN, i.e., from the Internet to the LAN no connection is established

    - by Pelle L
    I run Ubuntu 12.04 and have no success connecting with SSH from the Internet. The router is a TL-MR3420, which is set up to forward requests to one of the NICs on the Ubuntu machine (which has three NICs in total). I can SSH from a client on the local network/LAN, so the forward mechanism in the router seems to work. If I stop the SSH service on the Ubuntu machine and instead start one on the Windows machine, it works like a charm. I do not use the standard port 22, but that shouldn't be an issue as far as I understand, since it works on the same port on the Windows machine. Since my public IP isn't static I use a dynDNS service, but as said earlier, the same setup works from the Windows machine.

    The router is located at 192.168.0.1. The Ubuntu NICs have the following IPs: eth2 192.168.0.100, eth1 192.168.0.101, eth0 192.168.0.102, and I have forwarded the outside requests to 192.168.0.100.

    Regarding firewall settings on the Ubuntu machine, I have disabled ufw, and the command ufw status gives "Status: inactive". I don't know if this is relevant information, but the command iptables --list gives:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I have tried to capture traffic with the help of Wireshark (a tool I'm not too used to using), and it seems a few (3?) requests actually reach the NIC, but then... nothing happens. The syslog does not show any entries during these attempts. Perhaps it is a routing issue, but I have reached my level of competence and am stuck; all help and support to get this sorted out is much appreciated. I'm new to Linux, so please do not assume I have a correct configuration - but as I wrote earlier, if the client that initiates SSH is on the LAN, it all works.

    PS: I have also tried to get VPN (PPP) working from the Internet, with no success - once again, VPN works on the Windows machine - so my best guess is that this is related to how the Ubuntu machine handles (IP) traffic, and not to the TL-MR3420 router or other network issues.
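
    A first diagnostic step, since three NICs sit on the same subnet, is to rule out reverse-path filtering and to confirm the forwarded packets really arrive. A minimal sketch, assuming the forwarded SSH port is 2222 (substitute the real one):

        # Watch for inbound SYNs on the NIC the router forwards to
        sudo tcpdump -ni eth2 tcp port 2222

        # Multi-homed hosts on one subnet often trip rp_filter, which
        # silently drops packets whose reply would leave via a different NIC
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth2.rp_filter
        sudo sysctl -w net.ipv4.conf.all.rp_filter=0   # temporary, for testing only

        # Confirm which NIC owns the default route back to the router
        ip route show

    If disabling rp_filter makes the connection work, the three-NICs-on-one-subnet layout is the likely problem rather than the router.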

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it all as a good load of great advice, the information is confusing me.

    ** Problem **
    I already had "prettified" URLs on one of my sites. I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to see a clue to the page content in the URL.

    ** Solution **
    With this in mind, I am trying it on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, so the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    ** Problems after Solution **
    The problem, as I see it, is that the URL of those blog articles is now certainly very descriptive, but it is also impossible to remember. So this brings me back to the issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.

    ** Open questions **
    I still have some open questions and don't seem to be able to find answers, because the answers tend to contradict one another. 1) How long should a URL ideally be? I've read the magic number of 115 characters and am sticking to that, but am not sure. 2) Is this really good for SEO? One of those blog articles I have redirected, with the ID number in the URL and all, ranked second on Google. I've just found a question whose answer seems consistent with what I think (URL slug and SEO - structure), but see another question with the opposite opinion. 3) To make the question concrete with a specific example: would such a URL risk being penalized? Is it acceptable? Is it too long? Stack Overflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just want to help my users without running afoul of Google's algorithms.
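
    For the 301 part, a minimal mod_rewrite sketch of the numeric-to-slug redirect, assuming Apache and a hypothetical redirect.php that looks the slug up by ID:

        # .htaccess: send old numeric URLs through a lookup script
        RewriteEngine On
        RewriteRule ^my-news/([0-9]+)/?$ /redirect.php?id=$1 [L]

    redirect.php would then emit the permanent redirect, e.g. header('Location: /my-news/' . $slug . '/', true, 301);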

  • Working with Windows and Unix

    - by user554629
    Beware of newline characters. One of the most frequent issues we encounter in Tech Support is the corruption of files that are transferred between Windows and Unix. The transfer can occur at any stage, but it ultimately involves transferring a file with an ftp client running on Windows; it could be ftp or FileZilla.

    Windows uses two characters to mark the end of a line in a text file (CR/LF, carriage return plus line feed). Unix uses a single character (LF). In all situations, it is best to use binary mode transfer for all files, including ascii text files.

    Common problems:

    - Upload a core file from Unix to Windows using ftp in ascii mode. The file is going to be larger on Windows than on Unix: ftp doesn't know whether this is a text file with real line ends, so it takes every ascii LF and transmits two ascii characters, CR/LF. The core file, tar file, library ... will be corrupted when transferred to Oracle.

    - Download a shell script to Windows, and transfer it to Unix using ftp. If the file is edited on Windows, the Unix script's line-end characters will be doubled. Unix doesn't know how to handle that, and will likely tell you the script is not executable. Why? The first line of a shell script (called the "sh-bang") identifies the command interpreter the Unix shell should use for this script. Common examples:

        #!/bin/sh
        #!/bin/ksh
        #!/bin/bash
        #!/bin/perl
        #!/bin/sh^M    # will not be understood
        #!/bin/env ksh # special syntax: find ksh and run it

    dos2unix is a common utility found on most Unix platforms that repairs the issue of Windows line-end characters in Unix script files. I've written my own flavor of this utility for use in Tech Support and build environments; it is a bit easier to use and has some nice side effects:

    - accepts a list of files: dos2unix *.sh
    - repairs the file in place; it doesn't generate a new file you have to name
    - retains the same timestamp; it is the encoding that changed, not the file content

    Here are the versions of dos2unix for each of the environments we work in. They are compressed with gzip, to avoid the ftp ascii transfer trap, and because I am quite limited in the number of files I can upload to this blog: AIX, Linux, Solaris SPARC, Windows.
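
    A minimal sketch of such a utility in shell, for anyone who cannot grab the binaries (it assumes tr and touch -r are available; the real tool may differ):

        #!/bin/sh
        # dos2unix sketch: strip CRs in place for each named file,
        # preserving the file's original modification time.
        for f in "$@"; do
            touch -r "$f" "$f.stamp"                      # remember the timestamp
            tr -d '\r' < "$f" > "$f.tmp" && mv "$f.tmp" "$f"
            touch -r "$f.stamp" "$f"; rm -f "$f.stamp"    # restore the timestamp
        done

    Usage: ./dos2unix.sh *.sh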

  • Employer admits that its developers are underpaid and undervalued. Time to part ways?

    - by Psionic
    My employer recently posted an opening for a C# developer with 3-5 years of experience. The requirements and expectations for the position were fair, up until the criteria for salary determination. It was stated clearly that compensation would depend ONLY on experience with C#, and that years of programming experience with other languages and frameworks would be considered irrelevant and not factored in.

    I brought up my concern with HR that good candidates would see this as a red flag and steer away. I attempted to explain that software development is about much more than specific languages, and that paying someone for their experience in a single language is a very shortsighted approach to hiring good developers (I'm telling this to the HR department of a software company). The response: "We are tired of wasting time interviewing developers who expect 'big salaries' because they have lots of additional programming experience in languages other than what we require." The #1 issue here is that 'big salaries' = market rate.

    After some serious discussion, they essentially admitted that nobody at the company is paid near market rate for their skills, and that there's nothing that can be done about it. The C-suite has the mentality that employees should only be paid for skills proven over years under their watch. Entry-level developers are picked up for less than $38K and may reach $50K after 3 years, which I'm assuming is around what they plan on offering candidates for the C# position. Another interesting discovery (not as relevant): people 'promoted' to higher responsibilities do not get raises. The 'promotion' is considered an adjustment of the individual's role to better suit their 'strengths', which is what they're already being paid for.

    After hearing these hard truths straight from HR, I would assume that most people who are looking out for themselves would quickly begin searching for a new employer that has a better idea of what it's doing in the industry (this company fails in many other ways, but I don't want to write a book).

    Here is my dilemma, however: this is the first official software development position I've held, for barely 1 year now. My previous position of 3 years was with a very small company where I performed many duties, among them software development (not in my official job description, but I tried very hard to make it so). I've identified local openings that I'm currently qualified for, most paying at least 50% more than I'm getting now.

    The question is: is it too soon for a jump? I am getting valuable experience in my current position, with no shortage of exciting projects. The work environment is very comfortable, and I'm told by many that I'm in the spotlight of the C-level guys for the stuff I've been able to accomplish during my short time (for what that's worth). However, there is a clear opportunity cost to staying, knowing now with certainty that I will have to wait 3-5 years only to be capped at what I could potentially be earning elsewhere this year. I am also aware that 'job hopper' is a dangerous label to have, regardless of the reasons.

  • Multi-threaded JOGL Problem

    - by moeabdol
    I'm writing a simple OpenGL application in Java that implements the Monte Carlo method for estimating the value of PI. The method is pretty easy: you draw a circle inscribed in a unit square and then plot random points over the scene. For each point that falls inside the circle, you increment the counter of "in" points. After determining for all the random points whether they are inside the circle or not, you divide the number of in points by the total number of points plotted, multiplied by 4, to get an estimate of PI. It goes something like this: PI = (inPoints / totalPoints) * 4. This works because, mathematically, the ratio of the circle's area to the square's area is PI/4, so multiplying that ratio by 4 gives PI.

    My problem doesn't lie in the algorithm itself; rather, I'm having trouble plotting the points as they are being generated, instead of plotting everything at once when the program finishes executing. I want to give the application a sense of real-time display, where the user sees the points as they are being plotted. I'm a beginner at OpenGL, and I'm pretty sure there is a multi-threading feature built into it. Nonetheless, I tried to manually create my own threads, with each worker thread plotting one point at a time. Following is the pseudo-code:

        /* this part of the code exists in the display() method in MyCanvas.java,
           which extends GLCanvas and implements GLEventListener */
        // main loop
        for (int i = 0; i < number_of_points; i++) {
            RandomGenerator random = new RandomGenerator();
            float x = random.nextFloat();
            float y = random.nextFloat();
            Thread pointThread = new Thread(new PointThread(x, y));
        }
        gl.glFlush();

        /* this part of the code exists in the run() method in PointThread.java,
           which implements Runnable */
        void run() {
            try {
                gl.glPushMatrix();
                gl.glBegin(GL2.GL_POINTS);
                if (pointIsIn)
                    gl.glColor3f(1.0f, 0.0f, 0.0f); // red point
                else
                    gl.glColor3f(0.0f, 0.0f, 1.0f); // blue point
                gl.glVertex3f(x, y, 0.0f); // coordinates
                gl.glEnd();
                gl.glPopMatrix();
            } catch (Exception e) { }
        }

    I'm not sure if my approach to solving this issue is correct. I hope you guys can help me out. Thanks.
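
    Two things stand out in the pseudo-code above: the worker threads are created but never started, and, more fundamentally, OpenGL calls are only valid on the thread that owns the GL context, so issuing glVertex from worker threads cannot work in JOGL. A common pattern is to generate points on one background thread into a thread-safe queue and draw whatever has accumulated inside display(), letting an FPSAnimator repaint continuously. A rough sketch, with names like numberOfPoints hypothetical:

        import java.util.Random;
        import java.util.concurrent.ConcurrentLinkedQueue;

        // Background thread: generate points only; no GL calls here.
        final ConcurrentLinkedQueue<float[]> points = new ConcurrentLinkedQueue<float[]>();
        new Thread(new Runnable() {
            public void run() {
                Random rnd = new Random();
                for (int i = 0; i < numberOfPoints; i++) {
                    float x = rnd.nextFloat(), y = rnd.nextFloat();
                    float in = (x * x + y * y <= 1f) ? 1f : 0f; // quarter-circle test
                    points.add(new float[] { x, y, in });
                }
            }
        }).start();

        // display(), called on the GL thread by an FPSAnimator:
        public void display(GLAutoDrawable drawable) {
            GL2 gl = drawable.getGL().getGL2();
            gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
            gl.glBegin(GL2.GL_POINTS);
            for (float[] p : points) {                   // points accumulate over frames
                if (p[2] > 0f) gl.glColor3f(1f, 0f, 0f); // red: inside
                else           gl.glColor3f(0f, 0f, 1f); // blue: outside
                gl.glVertex2f(p[0], p[1]);
            }
            gl.glEnd();
        }

    Redrawing the whole queue each frame is O(n) but harmless at these point counts, and it keeps all GL work on the one thread that is allowed to do it.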

  • A Simple Entity Tagger

    - by Elton Stoneman
    In the REST world, ETags are your gateway to performance boosts, by letting clients cache responses. In the non-REST world, you may also want to add an ETag to an entity definition inside a traditional service contract - think of a scenario where a consumer persists its own representation of your entity and wants to keep it in sync. Rather than load every entity by ID and check for changes, the consumer can send in a set of linked IDs and ETags, and you can return only the entities where the current ETag differs from the consumer's version.

    If your entity is a projection from various sources, you may not have a persistent ETag, so you need an efficient way to generate an ETag which is deterministic, so that an entity with the same state always generates the same ETag. I have an implementation of a generic ETag generator on GitHub here: EntityTagger code sample. The essence is simple: we take the entity, serialize it, and build a hash from the serialized value. Any change to either the state or the structure of the entity will result in a different hash. To use it, just call SetETag, passing your populated object and a Func<> which acts as an accessor to the ETag property:

        EntityTagger.SetETag(user, x => x.ETag);

    The implementation is all of 80 lines of code, and it's pretty straightforward:

        var eTagProperty = AsPropertyInfo(eTagPropertyAccessor);
        var originalETag = eTagProperty.GetValue(entity, null);
        try
        {
            ResetETag(entity, eTagPropertyAccessor);
            string json;
            var serializer = new DataContractJsonSerializer(entity.GetType());
            using (var stream = new MemoryStream())
            {
                serializer.WriteObject(stream, entity);
                json = Encoding.UTF8.GetString(stream.GetBuffer(), 0, (int)stream.Length);
            }
            var guid = GetDeterministicGuid(json);
            eTagProperty.SetValue(entity, guid.ToString(), null);
            //...

    There are a couple of helper methods to check whether the object has changed since the ETag value was last set, and to reset the ETag. This implementation uses JSON to do the serializing rather than XML. Benefit: it should be marginally more efficient, as you're hashing a much smaller serialized string. Downside: JSON doesn't include namespaces or class names at the root level, so if you have two classes with the exact same structure but different names, then instances which have the same content will have the same ETag. You may want that behaviour, but switch to the XML DataContractSerializer if you think that will be an issue.

    If you can persist the ETag somewhere, it will save you the server processing to load up the entity, but that only applies to scenarios where you can reliably invalidate your ETag (e.g. if you control all the entry points where entity contents can be updated, then you can calculate and persist the new ETag with each update).
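
    The post doesn't show GetDeterministicGuid itself; a plausible minimal version, assuming an MD5 hash (16 bytes, which maps exactly onto a Guid) - the actual GitHub sample may use a different hash:

        // Sketch: derive a stable Guid from a string by hashing it.
        // The same input always yields the same 16-byte digest, hence the same Guid.
        private static Guid GetDeterministicGuid(string input)
        {
            using (var md5 = System.Security.Cryptography.MD5.Create())
            {
                byte[] hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(input));
                return new Guid(hash);
            }
        }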

  • Getting a SecurityToken from a RequestSecurityTokenResponse in WIF

    - by Shawn Cicoria
    When you’re working with WIF and the WSTrustChannelFactory, when you call the Issue operation you can also request a RequestSecurityTokenResponse as an out parameter. However, what can you do with that object? Well, you could keep it around and use it for subsequent calls with the extension method CreateChannelWithIssuedToken - or can you?

        public static T CreateChannelWithIssuedToken<T>(this ChannelFactory<T> factory, SecurityToken issuedToken);

    As you can see from the method signature, it takes a SecurityToken - but that's not present on the RequestSecurityTokenResponse class. However, you can through a little magic get a GenericXmlSecurityToken by means of the following set of extension methods - just call rstr.GetSecurityTokenFromResponse() and you'll get a GenericXmlSecurityToken as a return.

        public static class TokenHelper
        {
            /// <summary>
            /// Takes a RequestSecurityTokenResponse and pulls out the
            /// GenericXmlSecurityToken usable for further WS-Trust calls.
            /// </summary>
            public static GenericXmlSecurityToken GetSecurityTokenFromResponse(this RequestSecurityTokenResponse rstr)
            {
                var lifeTime = rstr.Lifetime;
                var appliesTo = rstr.AppliesTo.Uri;
                var tokenXml = rstr.GetSerializedTokenFromResponse();
                var token = GetTokenFromSerializedToken(tokenXml, appliesTo, lifeTime);
                return token;
            }

            /// <summary>
            /// Provides the token as an XML string.
            /// </summary>
            public static string GetSerializedTokenFromResponse(this RequestSecurityTokenResponse rstr)
            {
                var serializedRst = new WSFederationSerializer().GetResponseAsString(rstr, new WSTrustSerializationContext());
                return serializedRst;
            }

            /// <summary>
            /// Turns the XML representation of the token back into a GenericXmlSecurityToken.
            /// </summary>
            public static GenericXmlSecurityToken GetTokenFromSerializedToken(this string tokenAsXmlString, Uri appliesTo, Lifetime lifetime)
            {
                RequestSecurityTokenResponse rstr2 = new WSFederationSerializer().CreateResponse(
                    new SignInResponseMessage(appliesTo, tokenAsXmlString),
                    new WSTrustSerializationContext());

                return new GenericXmlSecurityToken(
                    rstr2.RequestedSecurityToken.SecurityTokenXml,
                    new BinarySecretSecurityToken(rstr2.RequestedProofToken.ProtectedKey.GetKeyBytes()),
                    lifetime.Created.HasValue ? lifetime.Created.Value : DateTime.MinValue,
                    lifetime.Expires.HasValue ? lifetime.Expires.Value : DateTime.MaxValue,
                    rstr2.RequestedAttachedReference,
                    rstr2.RequestedUnattachedReference,
                    null);
            }
        }
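
    A hedged end-to-end usage sketch (bindings, addresses, and the service contract IMyService are hypothetical, and the federation configuration of the factory is elided): issue once, keep the RSTR, and rebuild a channel later without another round trip to the STS.

        // Issue the token and capture the RSTR alongside it.
        var rst = new RequestSecurityToken(RequestTypes.Issue)
        {
            AppliesTo = new EndpointAddress("https://service.example.com/")
        };
        var trustFactory = new WSTrustChannelFactory(stsBinding, stsAddress);
        var trustChannel = (WSTrustChannel)trustFactory.CreateChannel();
        RequestSecurityTokenResponse rstr;
        trustChannel.Issue(rst, out rstr);

        // Later: recover a usable token from the stored response.
        GenericXmlSecurityToken token = rstr.GetSecurityTokenFromResponse();
        var serviceFactory = new ChannelFactory<IMyService>(serviceBinding, serviceAddress);
        serviceFactory.ConfigureChannelFactory();   // WIF federation plumbing
        IMyService client = serviceFactory.CreateChannelWithIssuedToken(token);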

  • Building Enterprise Smartphone App – Part 3: Key Concerns

    - by Tim Murphy
    This is part 3 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Key Concerns of Smartphones in the Enterprise
    These are the factors that you need to be aware of and address in order to build successful enterprise smartphone applications. Most of them have nothing to do with the application itself, as you will see here.

    Managing Devices
    Managing devices is a factor that is going to affect how much your company will have to spend outside of developing the applications. How will you track the devices within the corporation? How often will you have to replace phones and, as a consequence, upgrade your applications to support the new phones? The devices can represent a significant investment of capital. If these questions are not addressed, you will find a number of hidden costs throughout the life of your solution.

    Purchase or BYOD
    We have seen the trend of Bring Your Own Device (BYOD) lately within the enterprise. How many meetings have you been in where someone is on their personal iPad, iPhone, Android phone or Windows Phone? The issue is whether you can afford to support everyone's choice of device. That is a lot to take on, even if you only support the current release of each platform. Do you go with the most popular device, or do you pick the platform that best matches your current ecosystem and distribute company-owned devices? There is no easy answer here, but you should be able to give some dollar value to both the hardware and development costs related to platform coverage.

    Asset Tracking/Insurance
    Smartphones are devices that are easier to lose or have stolen than laptops and desktops. Not only do you have your normal asset management concerns, but also the assignment of financial responsibility. You will also need to insure them against damage and theft, and add legal documents that spell out the responsibilities of the employees who use these devices.

    Personal vs. Corporate Data
    What happens when you terminate an employee? How do you recover the device? What happens when they have put personal data on the device? These are all situations that can cause possible loss of corporate intellectual property, or legal repercussions from reclaiming a device with personal data on it. Policies need to be put in place that protect the company from being exposed to this type of loss. This can mean significant legal and procedural costs that you need to consider.

    Coming Up
    In the last installment of this series, I will cover application development considerations.

  • Why is my collision detection not accurate?

    - by optimisez
    After trying and trying, I still cannot understand why the character's legs sink into the wall when he lands on it, while there is no clipping issue when I hit the wall from below. How should I fix it so that he stands still on top of the wall?

    The collideWithBox() function below shows that playerDest.Y = boxDest.Y - boxDest.height; is meant to be the position where the character stands still on the wall. In theory the clipping effect shouldn't happen, since hitting the box from below works with the analogous equation playerDest.Y = boxDest.Y + boxDest.height;.

        void collideWithBox()
        {
            if (spriteCollide(playerDest, boxDest) && keyArr[VK_UP])
                //playerDest.Y += 50;
                playerDest.Y = boxDest.Y + boxDest.height;
            else if (spriteCollide(playerDest, boxDest) && !keyArr[VK_UP])
                playerDest.Y = boxDest.Y - boxDest.height;
        }

        void initPlayer()
        {
            // Create texture.
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "player.png", 169, 44, D3DX_DEFAULT, NULL,
                D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &player);
            playerRect.left = playerRect.top = 0;
            playerRect.right = 29;
            playerRect.bottom = 36;
            playerDest.X = 0;
            playerDest.Y = 564;
            playerDest.length = playerRect.right - playerRect.left;
            playerDest.height = playerRect.bottom - playerRect.top;
        }

        void initBox()
        {
            hr = D3DXCreateTextureFromFileEx(d3dDevice, "brock.png", 330, 132, D3DX_DEFAULT, NULL,
                D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT, D3DX_DEFAULT,
                D3DCOLOR_XRGB(255, 255, 255), NULL, NULL, &box);
            boxRect.left = 33;
            boxRect.top = 0;
            boxRect.right = 63;
            boxRect.bottom = 30;
            boxDest.X = boxDest.Y = 300;
            boxDest.length = boxRect.right - boxRect.left;
            boxDest.height = boxRect.bottom - boxRect.top;
        }

        bool spriteCollide(Entity player, Entity target)
        {
            float left1 = player.X, left2 = target.X;
            float right1 = player.X + player.length, right2 = target.X + target.length;
            float top1 = player.Y, top2 = target.Y;
            float bottom1 = player.Y + player.height, bottom2 = target.Y + target.height;

            if (bottom1 < top2) return false;
            if (top1 > bottom2) return false;
            if (right1 < left2) return false;
            if (left1 > right2) return false;
            return true;
        }
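
    One likely culprit: to stand on top of the box, the player's bottom edge (playerDest.Y + playerDest.height) must meet the box top (boxDest.Y), so the snap has to subtract the player's height, not the box's. With a 36-pixel player and a 30-pixel box, the original line leaves the feet 6 pixels inside the box, which matches the described overlap. A hedged correction:

        void collideWithBox()
        {
            if (spriteCollide(playerDest, boxDest))
            {
                if (keyArr[VK_UP])
                    playerDest.Y = boxDest.Y + boxDest.height;    // bumped from below
                else
                    playerDest.Y = boxDest.Y - playerDest.height; // land: feet on box top
            }
        }

    The from-below case only looked correct because the box's own height happens to be the right quantity there; the two expressions only coincide when both sprites are the same height.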

  • Is it reasonable to insist on reproducing every defect before diagnosing and fixing it?

    - by amphibient
    I work for a software product company. We have large enterprise customers who implement our product, and we provide support to them. For example, if there is a defect, we provide patches, etc. In other words, it is a fairly typical setup.

    Recently, a ticket was issued and assigned to me regarding an exception that a customer found in a log file, which has to do with concurrent database access in a clustered implementation of our product. So the specific configuration of this customer may well be critical in the occurrence of this bug. All we got from the customer was their log file.

    The approach I proposed to my team was to attempt to reproduce the bug in a configuration similar to that of the customer and get a comparable log. However, they disagree with my approach, saying that I should not need to reproduce the bug (as that is overly time-consuming and will require simulating a server cluster on VMs), and that I should simply "follow the code" to see where the thread- and/or transaction-unsafe code is, and make the change working off a simple local development setup, which is not a cluster implementation like the environment the occurrence of the bug originates from.

    To me, working from an abstract blueprint (program code) rather than a concrete, tangible, visible manifestation (a runtime reproduction) seems like a difficult working environment (for a person of normal cognitive abilities and attention span), so I wanted to ask a general question: is it reasonable to insist on reproducing every defect and debugging it before diagnosing and fixing it? Or: if I am a senior developer, should I be able to read (multithreaded) code and create a mental picture of what it does in all use-case scenarios, rather than requiring to run the application, test different use-case scenarios hands-on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies?

    In my opinion, any fix submitted in response to an incident ticket should be tested in an environment simulated to be as close to the original environment as possible. How else can you know that it will really remedy the issue? It is like releasing a new model of a vehicle without crash-testing it with a dummy to demonstrate that the airbags indeed work.

    Last but not least, if you agree with me: how should I talk with my team to convince them that my approach is reasonable, conservative and more bulletproof?

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all the dimensions, metrics, hierarchies, etc. to a user, such that they can pick and choose what they want (say, e.g., the dimensions Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format).

    My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiars (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP feels like very old technology to me. Is performance really an issue anymore?

    The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file or database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? (A sketch of this idea follows below.) Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes?

    In general, I feel that OLAP is at the same time too powerful (it supports drill up/down and complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling the requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months minimum, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
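
    As a concreteness check, the convention-driven SQL assembly could start as small as this sketch (table and column names are hypothetical, following the FACT_/DIM_ scheme; in practice the join conditions would come from the config table rather than being hardcoded):

        import java.util.Arrays;
        import java.util.List;

        public class AdHocSqlSketch {
            public static void main(String[] args) {
                // User picked: dimensions Location and Week, metric Product Sales $.
                List<String> dims = Arrays.asList("DIM_LOCATION.STORE_NAME", "DIM_DATE.WEEK");
                String metric = "SUM(FACT_SALES.PRODUCT_SALES) AS PRODUCT_SALES";

                String groupBy = String.join(", ", dims);
                String sql = "SELECT " + groupBy + ", " + metric
                    + " FROM FACT_SALES"
                    + " JOIN DIM_LOCATION ON FACT_SALES.LOCATION_KEY = DIM_LOCATION.LOCATION_KEY"
                    + " JOIN DIM_DATE ON FACT_SALES.DATE_KEY = DIM_DATE.DATE_KEY"
                    + " GROUP BY " + groupBy;
                System.out.println(sql);
            }
        }

    Whether this stays simpler than MDX once hierarchies, drill paths, and non-additive metrics arrive is exactly the open question.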

  • WiFi on Ubuntu 12.04 custom: downloading unbearably slow

    - by Mark
    iwconfig reports 11 Mb/s, yet I've seen speeds as low as <1 KB/s. This is the latest in my laundry list of Ubuntu problems on a dual-boot machine (CyberPowerPC custom, Intel i7-3820, NVIDIA GTX 570). I received it two days ago; Windows 7 is running fine, but I am still having problems with Ubuntu.

    Browsing is intermittent but unacceptable: e.g., I could get to this site last night, but I couldn't post this question. Downloading is unbearably slow; I can't download anything or install any packages. For example, I am trying to install vim, which is inexplicably missing from my 12.04 install (add another one to the problems list), and my download speed reported in the terminal was 241 B/s. Yes, bytes. iwconfig reports 11 Mb/s, which further adds to the confusion:

        User@ubuntu:~$ iwconfig
        lo        no wireless extensions.

        wlan0     IEEE 802.11bgn  ESSID:"linksys"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: 00:18:39:76:2C:A1
                  Bit Rate=11 Mb/s   Tx-Power=20 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=36/70  Signal level=-74 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:54  Invalid misc:18   Missed beacon:0

        eth0      no wireless extensions.

    Any ideas? I see this is a problem for a lot of people, but none of the online solutions have worked for me so far. E.g., one site recommends editing the ath9k.conf file in /etc/modprobe.d, yet this file isn't even in the folder:

        User@ubuntu:/$ cd etc/modprobe.d
        User@ubuntu:/etc/modprobe.d$ ls
        alsa-base.conf               blacklist-oss.conf
        blacklist-ath_pci.conf       blacklist-rare-network.conf
        blacklist.conf               blacklist-watchdog.conf
        blacklist-firewire.conf      dkms.conf
        blacklist-framebuffer.conf   nvidia-current_hybrid.conf
        blacklist-modem.conf         nvidia-graphics-drivers.conf

    I think the NVIDIA GPU might be mucking things up. I had the "blinking cursor" problem when installing in the first place, and then the monitor-out-of-range problem as well. I have my faithful Asus laptop, which is running Ubuntu 12.04 just fine. The only difference is that executing host -t SOA local in the terminal on the new machine gives:

        User@ubuntu:~$ host -t SOA local
        local has SOA record local. nobody.localhost. 42 86400 43200 604800 10800

    whereas the command reports "Host local. not found" on the laptop. Help would be most welcome, as I am in danger of reverting to Windows. I'm seriously considering it. Sorry for the length; I'm trying to show my effort in resolving the issue and include terminal snippets that might be helpful.
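
    Power management already shows "off" in the iwconfig output, and the signal (-74 dBm, quality 36/70) is marginal, so the usual next suspect on Atheros chips is the ath9k hardware-crypto path. The referenced ath9k.conf simply doesn't exist until you create it. A hedged test (first confirm the driver really is ath9k):

        lsmod | grep ath9k                      # is the ath9k driver actually in use?
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k   # reload the driver, or reboot

    If throughput is unchanged, moving the machine closer to the access point to rule out the weak signal is the cheapest next experiment.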

  • git commit -m "CodePlex now supports Git!"

    Finally, yes, CodePlex now supports Git! Git has been one of the top-rated requests from the CodePlex community for some time. Admittedly, when we launched CodePlex, we never expected that at some point we would be running a source control system originally invented by Linus Torvalds for the Linux kernel. Though I would also say nobody would have thought the open source ecosystem would become as important to Microsoft as it is now. Giving CodePlex users what they ask for and supporting their open source efforts has always been important to us, and we have a long list of improvements planned, so stay tuned, as we have more up our sleeves!

    Why Git?
    So why Git? CodePlex already has Mercurial for distributed version control and TFS (which also supports Subversion clients) for centralized version control. The short answer is that the CodePlex community voted, loud and clear, that Git support was critical. Additionally, we just like it: we use Git on our team every day, and making the DVCS workflows more available to the CodePlex community is just the right thing to do.

    Forks and Pull Requests
    One of the capabilities that distributed version control systems such as Mercurial and Git enable is the fork and pull request workflow. Just as with Mercurial, projects configured to use Git support forking the source and submitting contributions back via pull requests. The fork/pull request workflow is a key accelerator for many open source projects, and you will see improvements in our support coming later this year.

    More Choice
    With the addition of Git, CodePlex now has three options when it comes to open source project hosting: projects can select between TFS, Mercurial, and Git. Each developer has their own preferences, and for some, centralized version control makes more sense; for others, DVCS is the only way to go. We're equally committed to supporting both of these technologies for our users. You can get started today by creating a new project, or contribute to an existing project by creating a fork. For help on getting started with Git on CodePlex, see our help documentation. If you would like to switch your project to use Git, please contact us at CodePlex Support with your project information, and we will be happy to help you out.

    We're Listening
    CodePlex is your community, and we want to deliver the experiences you need to have a successful open source project. We want your ideas and feedback to make CodePlex a great development community. The issue tracker on CodePlex is publicly available: add suggestions or vote up existing ones. And you can always find us on Twitter - I'm @mgroves84; follow us to keep up to date with our latest releases: @codeplex.
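
    For a Git project on CodePlex, the fork/pull cycle looks roughly like this (the clone URL format here is illustrative only; each fork's page shows its real URL):

        # Create the fork from the project page, then:
        git clone https://git01.codeplex.com/forks/yourname/yourfork
        cd yourfork
        git checkout -b my-fix
        # ... edit ...
        git commit -am "Describe the change"
        git push origin my-fix
        # then send a pull request from the fork's page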

  • Using extension methods to decrease the surface area of a C# interface

    - by brian_ritchie
    An interface defines a contract to be implemented by one or more classes. One of the keys to a well-designed interface is defining a very specific range of functionality. The profile of the interface should be limited to a single purpose and should have the minimum methods required to implement this functionality. Keeping the interface tight will keep those implementing it from getting lazy and not implementing it properly. I've seen too many overly broad interfaces that aren't fully implemented by developers. Instead, they just throw a NotImplementedException for the methods they didn't implement.

    One way to help with this issue is to use extension methods to move overloaded method definitions outside of the interface. Consider the following example:

        public interface IFileTransfer
        {
            void SendFile(Stream stream, Uri destination);
        }

        public static class IFileTransferExtension
        {
            public static void SendFile(this IFileTransfer transfer,
                string fileName, Uri destination)
            {
                using (var fs = File.OpenRead(fileName))
                {
                    transfer.SendFile(fs, destination);
                }
            }
        }

        public static class TestIFileTransfer
        {
            static void Main()
            {
                IFileTransfer transfer = new FTPFileTransfer("user", "pass");
                transfer.SendFile(filename, new Uri("ftp://ftp.test.com"));
            }
        }

    In this example, you may have a number of overloads that use different mechanisms for specifying the source file. The great part is, you don't need to implement these methods on each of your derived classes. This gives you a better interface and better code reuse.

  • No Sound in 12.04 on Acer TimelineX 3830TG

    - by Fawkes5
    When I first installed Ubuntu 12.04, everything worked fine. All of a sudden my sound has stopped working! This has happened before (like two days ago); the next day it started working again, and that may happen again (I've already rebooted five times).

    My sound card is an HDA Intel PCH; my sound chip is an Intel CougarPoint HDMI. The machine is an Acer TimelineX 3830TG. Here's my lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b4)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff)
        02:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)
        03:00.0 Network controller: Atheros Communications Inc. AR9287 Wireless Network Adapter (PCI-Express) (rev 01)
        04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01)
        05:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)

    I'm running a dual boot with Windows 7; sound always works there. I've tried loading earlier kernels (3.2.0), but that hasn't helped. I've tried playing with alsamixer, unmuting channels and raising the volume: doesn't work. I've tried playing with pavucontrol: doesn't work. Although, strangely, when I play a song I can see a bar moving under Output, implying that something is working; the sound just isn't getting through my speakers. Thus it might be a hardware problem. I've tried uninstalling and reinstalling pulseaudio and alsamixer: doesn't work (maybe it made things worse...). I've heard reverting to kernel 3.0 works, but I'm reluctant to try that (I don't know much about kernels).

    Thank you. Please tell me what other information you need.

    Edit: My audio has started working again after another reboot. However, this time I'm not connected to my external audio on startup. I will see if that is the issue next time I'm at home.
    Edit: Connecting to external audio is not the problem.
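
    With the device visible in pavucontrol but silent, one low-risk experiment is to reload ALSA and, failing that, let the HDA driver re-probe the codec (valid model names are listed in the kernel's HD-Audio-Models.txt; "auto" is a safe first guess):

        sudo alsa force-reload
        # If sound only returns until the next reboot, pin a codec model:
        echo "options snd-hda-intel model=auto" | sudo tee -a /etc/modprobe.d/alsa-base.conf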

  • How do I prevent Ubuntu gradually slowing down to a stop?

    - by user29165
    I really do not know what is causing my desktop environment (D.E.) to gradually slow down. It seems to happen in GNOME 3.2, LXDE, XFCE, and KDE. I am running Ubuntu 11.10, and this problem has been happening since a fresh install.

    I have noticed that if I restart GNOME, or otherwise log back in with any of the D.E.s, the speed resets to normal. I don't seem to have any memory leaks that account for it, and there are no processes using excessive CPU cycles. Given the symptoms, my guess is that some underlying package that interacts with all the D.E.s is the source of the problem.

    Just to clarify: in extreme situations, if I let the slowdown continue without a restart of the D.E., my system finally gets to a point where everything, even my clock, freezes. The time over which GNOME slows down (noticeably after 17 hrs) is much shorter than for LXDE or XFCE, but eventually they freeze as well. I haven't mentioned Unity since I haven't used it enough to verify the problem, but I don't see why it should be different.

    In case this is relevant: I suspend my computer rather than turn it off, because of the greatly reduced load time, and the 17 hrs I stated above is actual up-time, not time where the D.E. is actually active. However, the slowdown does not seem to be affected by how long the computer has been in suspend mode.

    I know it is possible that this problem is due to an interaction between two or more of the applications I use, and as such it may not be reproducible by others. In the end, I am just wondering (a) are other people experiencing this issue, and (b) does anyone have advice on where to start looking for a solution if one is not already known? I am even open to the idea of switching to the beta release of 12.04 if anyone thinks it will solve the problem.

    Edit: I took a video of my latest freeze, which you can watch at http://youtu.be/flKUqUzCmdE - sorry about the audio.
    Edit: Since that video, I checked my memory for errors and tried installing the proprietary video drivers. No memory errors were found, and the proprietary video drivers made the display unusable, so I had to uninstall them. Any thoughts?

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and, at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant but is having a performance issue - more than likely in the middle of the night, when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next best thing.

    Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although Windows can tattle on your brainchild, it won't be the child psychiatrist you need to tell you why it's pulling your server's pigtails and making faces at the teacher. What you need is to plug a debugger into Performance Monitor and have it tell you what's going on with your application at the time. For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will.

    Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM. The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library, and then choose Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file:

        /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt

    Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.
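
    The scheduled task can also be created from the command line; a hedged equivalent of the GUI steps (the StackOMatic path here is hypothetical):

        schtasks /Create /TN StackDumpBaseService /SC ONCE /ST 00:00 ^
            /TR "C:\Tools\StackOMatic.exe /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:C:\Users\administrator\MonitorLog.txt"

    The Data Collector Set's alert action then references the task by name, so the dump runs only when the %Processor Time threshold trips.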

  • How bad does it look to have left a job soon after starting? [closed]

    - by unitedgremlin
    I have a job I would like to leave. On advice from friends and parents I have stayed; their primary concern is that it would look bad on my resume if I left only a few months after joining.

    My concerns with the job are as follows:

    - When I started, it was preferred that I provide and use my own equipment.
    - The company could be out of business in a few months from lack of cash flow.
    - Poor code quality: memory leaks and lack of error handling. The same mistakes continue to be made even though I have raised the issue. It has become evident that co-workers do not understand the memory management rules of the platform and are not interested in learning them. Yet there is still surprise from them when strange bugs continue to crop up. As a result, I don't feel I will learn from my co-workers. Plus, fixing the lingering bugs while trying to keep up with new feature development is like a game of whack-a-mole that never ends.
    - I don't believe in the company's vision or its ability to execute on its ideas anymore. My ideas and suggestions for very small tweaks are quickly dismissed, and so more than half of them have come back as bugs that we end up needing to address. I have been told to wait on fixing bugs in the code until we can talk to the original author. I don't feel I am allowed to take the initiative to just fix or change things and do what I think is best. Everything needs consensus, even for a bug fix, before any work is done. I am adopting a shut-up-and-just-do-what-I-am-told approach to save myself from ulcers.
    - Lots of meetings (I am personally not involved in all of them, which is good), but the sheer number reminds me of days at a big corporation. Why is everyone around me always meeting? It's a small company; I can count everyone on my fingers and toes.
    - I can say with certainty that I have no interest in working with any of them again. This is the first time I have truly worked with a group of so-called "B and C players".

    Ultimately, I think it is my fault for not doing a better job of evaluating the team and company before joining. But I have come up with a better set of questions for probing companies in the future. My questions are:

    - How bad does it look to have left a job soon (a few months) after starting?
    - What would be the best way to explain my concerns and reasons for leaving without badmouthing the company?
    - Should I stick it out to what I believe will be the soon-coming end of the company?

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the software testers I've worked with so far in my career. In fact, I've been thinking a lot about the software tester role in general, and have reached a potentially contentious conclusion: non-developer software testers are less efficient at software testing than developers.

    Now, before everyone gets upset, hear me out. This isn't mere opinion. Software testing and software development require a lot of skills in common:

    - problem solving
    - thinking about corner cases
    - analytical skills
    - the ability to define clear and concise step-by-step scenarios

    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test-maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc.

    So I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform software testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work?

    Hire a bunch of developers and split them into two roles:

    - software developers
    - software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)

    (I've removed the application-support role, as it is probably a separate question altogether.) And, in our case, since we're doing agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects.

    Ideally you would want to reduce the turnover rate per rotation. So maybe you could have two groups and make sure the rotation cycles of the groups are staggered. For example, if each rotation were two sprints long, the two groups would have their rotations one sprint apart. That way there's only a 50% turnover rate per sprint.

    Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to be in these roles. Let's assume I'm starting a new company and I can hire these ideal people.)

    Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

  • To Obtain EPOCH Time Value from a Packed BIT Structure in C [migrated]

    - by xde0037
    This is not a home assignment! We have a binary data file with the following 12-byte record structure, and I need to find an epoch time value that is packed into 42 bits, as described below:

    - Field-1: byte 1, byte 2, + 6 bits from byte 3
    - Time-1: 2 bits from byte 3 + byte 4
    - Time-2: bytes 5, 6, 7, 8
    - Field-2: bytes 9, 10, 11, 12

    For Field-1 and Field-2 I have no issue, as they can be extracted easily. I need the time value as an epoch time (long); it is packed into bytes 5-8 plus bytes 3 and 4 as follows. Bytes 5 to 8 (a 32-bit word) pack time-value bits 0 through 31 (byte 5 has bits 0-7, byte 6 has 8-15, byte 7 has 16-23, byte 8 has 24-31). The remaining 10 bits of the time value are packed into bytes 3 and 4: byte 3 contributes 2 bits (bits 32 and 33) and byte 4 the remaining 8 bits (bits 34 to 41). So the total for the time value is 42 bits, packed as above.

    I need to compute the epoch value coming out of these 42 bits. How do I do it? I have done something like this, but I am not sure it gives me the correct value:

        typedef struct P_HEADER {
            unsigned int tmuNumber : 21;
            unsigned int time1     : 10; // bits 6,7 from byte 3 + 8 bits from byte 4
            unsigned int time2     : 32; // 32 bits: bytes 5,6,7,8
            unsigned int traceKey  : 32;
        } __attribute__((__packed__)) P_HEADER;

    Then in the code:

        P_HEADER *header1;
        // get the input string in hex, etc., then
        // parse the input with the header as:
        header1 = (P_HEADER *)inputBuf;
        // then print header1->time1, header1->time2 ...
        long ttime = header1->time1 | header1->time2; // ?? is this the way to get the value out?

    Any hint or tip will be appreciated. Environment: gcc 4.1, Linux.
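
    Bitfield layout is compiler-specific, and a plain OR of time1 and time2 also drops the shift that puts time1 in the high bits, so explicit shifts and masks over the raw bytes are the deterministic route. A sketch assuming exactly the layout described above; which two bits of byte 3 belong to the time value (low pair here, but possibly the high pair) must be verified against known data:

        #include <stdio.h>
        #include <stdint.h>

        /* Assemble the 42-bit time value from a 12-byte record.
           Bits 0..31 come from bytes 5..8 (byte 5 = bits 0..7, ...),
           bits 32..33 from 2 bits of byte 3, bits 34..41 from byte 4. */
        uint64_t extract_time(const unsigned char rec[12])
        {
            uint64_t t = 0;
            t |= (uint64_t)rec[4];                 /* byte 5: bits 0..7   */
            t |= (uint64_t)rec[5] << 8;            /* byte 6: bits 8..15  */
            t |= (uint64_t)rec[6] << 16;           /* byte 7: bits 16..23 */
            t |= (uint64_t)rec[7] << 24;           /* byte 8: bits 24..31 */
            t |= (uint64_t)(rec[2] & 0x03) << 32;  /* byte 3: bits 32..33 (assumed low pair;
                                                      try (rec[2] >> 6) & 0x03 otherwise)   */
            t |= (uint64_t)rec[3] << 34;           /* byte 4: bits 34..41 */
            return t;                              /* candidate epoch value */
        }

        int main(void)
        {
            unsigned char rec[12] = { 0 };         /* fill from the data file */
            printf("%llu\n", (unsigned long long)extract_time(rec));
            return 0;
        }

    If the bitfield struct is kept, the analogous combination would be ((uint64_t)header1->time1 << 32) | header1->time2 rather than a plain OR; note also that Field-1 as described is 22 bits, while the struct declares tmuNumber as 21.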

  • Zelda-style top-down RPG. How to store tile and collision data?

    - by Delerat
    I'm looking to build a Zelda: LTTP-style top-down RPG. I've read a lot on the subject and am currently going back and forth between a few solutions. I'm using C#, MonoGame, and Tiled.

    For my tile maps, these are the choices I can see in front of me:

    1. Store each tile as its own object in the map array, each one having 3-4 layers, texture/animation, depth, flags, and maybe collision (depending on how I do it). I've read warnings about memory issues going this route, and my biggest map will probably be 160x120 tiles. My average map, however, will be about 40x30. The number of tiles might be cut in half if I decide to double my tile size, which is currently 16x16. This is the most appealing approach for me, as I feel I would know how to save maps, make changes, and separate the map into chunks for collision checks. (A sketch of this approach follows below.)

    2. Store the static parts of my tile map in multiple arrays acting as the different layers, then use entities for anything that isn't static. All of the other tile data, such as collisions, depth, etc., would be stored in their own layers as well, I guess. This way just seems messy to me, though.

    Regardless of which one I choose, I'm also unsure how to organize all the other tile data. I could write a bunch of code that knows which integer represents which tile and its data, but if I changed a tileset in Tiled and exported it again, all of those integers could potentially change, and I'd have to adjust a whole bunch of code.

    My other issue is how to do collision. I want to at least support angled collision that slides you around the corners of objects, like LTTP does, if not more oddball shapes as well. So do I:

    - store collision as a flag for binary collision? Could I get this to support angles? Would it be fine to store collision as an integer and have each number represent a certain angle of collision?
    - store a list of rectangles or other shapes and do collision that way?

    Sorry for the large two-part (three-part?) question. I felt these needed to be asked together, as I believe each choice influences the others.
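
    A minimal sketch of option 1, with all names hypothetical: a small value-type tile plus a collision enum, so angled shapes can be added without changing the map layout, and Tiled GIDs are translated to these stable fields in one import step.

        // Hedged sketch: a compact per-tile struct for a layered top-down map.
        public enum CollisionShape : byte { None, Solid, SlopeNE, SlopeNW, SlopeSE, SlopeSW }

        public struct Tile
        {
            public ushort Ground;            // tileset index, base layer
            public ushort Detail;            // tileset index, overlay layer (0 = empty)
            public byte   Depth;             // draw-order band
            public CollisionShape Collision; // how entities collide with this tile
        }

        public sealed class TileMap
        {
            public readonly Tile[,] Tiles;
            public TileMap(int width, int height) { Tiles = new Tile[width, height]; }
        }

    At roughly 6-8 bytes per tile, even the 160x120 map is on the order of 150 KB, so the memory warnings don't really apply at this scale; and because Tiled GIDs are mapped to these fields in a single importer, re-exporting a tileset only touches that one mapping, not the game code.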

  • External 1TB WD USB 3.0 HDD is not detected; working perfectly fine in Windows

    - by Yathi
    My 1TB USB 3.0 drive was working fine earlier in Ubuntu as well as Windows. But lately it is not being detected in Ubuntu at all, while it still works fine in Windows. I did update my Ubuntu to 12.10, but I am not sure if that caused the issue.

    When I connect my HDD and run dmesg | tail:

        [ 47.804676] usb 4-3: >Device not responding to set address.
        [ 48.008575] usb 4-3: >Device not responding to set address.
        [ 48.212421] usb 4-3: >device not accepting address 9, error -71
        [ 48.324451] usb 4-3: >Device not responding to set address.
        [ 48.528340] usb 4-3: >Device not responding to set address.
        [ 48.732165] usb 4-3: >device not accepting address 10, error -71
        [ 48.844138] usb 4-3: >Device not responding to set address.
        [ 49.048179] usb 4-3: >Device not responding to set address.
        [ 49.251881] usb 4-3: >device not accepting address 11, error -71
        [ 49.251907] hub 4-0:1.0: >unable to enumerate USB device on port 3

    The output of sudo fdisk -l is:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00030cde

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048  1332981759   666489856   83  Linux
        /dev/sda2      1332981760  1953523711   310270976    5  Extended
        /dev/sda5      1332983808  1349365759     8190976   82  Linux swap / Solaris
        /dev/sda6      1349367808  1953523711   302077952    7  HPFS/NTFS/exFAT

        Disk /dev/sdb: 120.0 GB, 120034123776 bytes
        255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000a2519

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048   103368703    51683328    7  HPFS/NTFS/exFAT
        /dev/sdb2       103368704   154568703    25600000   83  Linux
        /dev/sdb3       154568704   234440703    39936000    7  HPFS/NTFS/exFAT

    /dev/sda and /dev/sdb are my two internal HDDs, but the external one, which should be /dev/sdc, is not shown even though it is connected and the LED on the HDD is glowing. Someone had suggested adding blacklist uas to /etc/modprobe.d/blacklist.conf. I tried that as well, but it is still not working. Can someone help me out?

  • jQuery Validation [migrated]

    - by user41354
    I'm trying to get my form to validate... basically it's working, but a little bit too well. I have two text boxes: one is a start date, the other an end date, in the format mm/dd/yyyy.

    - If the start date is greater than the end date, there is an error.
    - If the end date is less than the start date, there is an error.
    - If the start date is less than today's date, there is an error.

    The only problem is that when I correct the error, the error warning is still there. Here is my code:

        dates.change(function () {
            var testDate = $(this).val();
            var otherDate = dates.not(this).val();
            var now = new Date();
            now.setHours(0, 0, 0, 0);

            // Past dates
            if (testDate != '' && new Date(testDate) < now) {
                addError($(this));
                $('.flightDateError').text('* Dates cannot be earlier than today.');
                isValid = false;
                return;
            }

            // Required text
            if ($(this).hasClass("FromCal") && testDate == '') {
                addError($(this));
                $('.flightDateError').text('* Required');
                isValid = false;
                return;
            }

            // Validate date
            if (!isValidDate(testDate)) {
                // $(this).addClass('validation_error_input');
                addError($(this));
                $('.flightDateError').text('* Invalid Date');
                isValid = false;
                return;
            } else {
                // $(this).removeClass('validation_error_input');
                removeError($(this));
                if (!dates.not(this).hasClass('validation_error_input'))
                    $('.flightDateError').text(' ');
            }

            // Validate date ranges
            if ($(this).val() != '' && dates.not(this).val != '') {
                if ($(this).hasClass("FromCal")) {
                    if (new Date(testDate) > new Date(otherDate)) {
                        addError($(this));
                        $('.flightDateError').text('* Start date must be earlier than end date.');
                        isValid = false;
                        return;
                    }
                }
                else {
                    if (new Date(testDate) < new Date(otherDate)) {
                        addError($(this));
                        $('.flightDateError').text('* End date must be later than start date.');
                        return;
                    }
                }
            }
        });

    The main issue is the "Validate date ranges" block, I believe. testDate is the start date; otherDate is the end date. Thanks in advance, J
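
    Two concrete problems stand out in that range block. First, dates.not(this).val is missing its parentheses, so the guard compares a function reference (which is never equal to '') rather than the field's value. Second, unlike the date-validity branch, the range branch never clears the error once the dates become consistent. A hedged rework of just that block, reusing the helpers already in the snippet:

        // Validate date ranges: note .val() with parentheses, and the
        // explicit clear once the range becomes valid again.
        if ($(this).val() != '' && dates.not(this).val() != '') {
            var isStart = $(this).hasClass("FromCal");
            var start = isStart ? testDate : otherDate;
            var end   = isStart ? otherDate : testDate;
            if (new Date(start) > new Date(end)) {
                addError($(this));
                $('.flightDateError').text(isStart
                    ? '* Start date must be earlier than end date.'
                    : '* End date must be later than start date.');
                isValid = false;
                return;
            }
            removeError($(this));               // range is valid: clear stale error
            if (!dates.not(this).hasClass('validation_error_input'))
                $('.flightDateError').text('');
        }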

  • Windows Backup failed with error 0x807800C5

    - by Alexey Ivanov
    I set up backup on my Windows 8 laptop with Windows 7 File Recovery (known as Backup and Restore in Windows 7). Backing up files runs successfully, but if I try to create a system image, it fails with error 0x807800C5.

    Error details on the dialog:

        The mounted backup volume is inaccessible.

    Error details in the system log:

        There was a failure in preparing the backup image of one of the volumes in the backup set.

    I save the backup to a network location, a WD My Book Live.

    Edit: I tried some of the steps suggested in the various threads about this issue:

    - Cleaned up the backup location: removed MediaID.bin in the backup location, and removed the <ComputerName> folder from WindowsImageBackup. Restarting the backup resulted in the same error; however, the error dialog shows a slightly different message: "The specified backup disk cannot be found."
    - Performed a System File Check by running sfc /scannow. It showed no errors, but running the backup failed nevertheless.

    I tried searching Google for the error code, but I've found no solution so far.

    Update: I submitted a technical support request to Microsoft. The first suggestion was a clean boot, but it didn't help. I pointed out that I had tried all the methods from the same problem on MS Answers, and nothing had helped.
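
    As a cross-check that the share itself accepts a system image, the same backup can be attempted from an elevated prompt with wbadmin, which often reports a more specific error than the wizard (the share path here is hypothetical):

        wbadmin start backup -backupTarget:\\mybooklive\backup -include:C: -allCritical -quiet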

  • How to use tscon on Windows 7?

    - by Radek
    I need to run overnight automation testing using RFT and IE on a Windows 7 virtual machine. I have found that restarting the Windows box before the testing starts helps. I am moving the production environment from Windows XP to Windows 7.

    RFT used to complain when running RFT scripts:

        CRFCN0557E: Activation failed when running under a Terminal Services environment.
        This may be caused by using a minimized terminal window - try playing back without
        minimizing the terminal window (it does not need to be full-screen).

    Running tscon.exe 0 /dest:console prior to starting any RFT script fixes the error on Windows XP, but not on Windows 7. I did some research and spent hours trying to fix it, but nothing helped. There is no screen saver turned on on Windows 7. I tried running both of the following, but neither helped:

        tscon.exe 0 /dest:console
        tscon.exe 1 /dest:console

    On Windows 7, tscon returns:

        {ErrorPrintf(): LoadString failed, Error 15105, (0x00003B01)}
        Error [15105]: The resource loader cache doesn't have loaded MUI entry.
        Error [0]: The operation completed successfully.

    On Windows XP, tscon returns:

        Could not connect sessionID 0 to sessionname console, Error code 7045
        Error [7045]: The requested session access is denied.

    I just double-checked that running tscon.exe 0 /dest:console on Windows XP solves the issue, so I cannot understand the output of the tscon command. Any idea how I can run RFT scripts after I restart the Windows box automatically, preferably without involving any other computer? I was even thinking of using the old Windows XP machine to open a remote desktop session just to make RFT happy; I hope there is a better solution than that.
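
    On Windows 7 the console is no longer session 0, which is why the XP-era "tscon 0" recipe fails; a commonly used workaround for GUI automation is to redirect the current RDP session by name from an elevated batch file, run just before kicking off RFT. A hedged sketch:

        @echo off
        REM Must run elevated, from inside the RDP session itself.
        REM "query session" lists sessions; %sessionname% expands to e.g. RDP-Tcp#0.
        query session
        tscon.exe %sessionname% /dest:console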
