Search Results

Search found 13586 results on 544 pages for 'trusted domain'.


  • Newsletter sent with Drupal goes to spam folder [closed]

    - by HerrSerker
    Possible Duplicate: How could I prevent my mail from being recognized as spam?

    I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the spam folder in Yahoo and GMail mailboxes, but not in web.de, Hotmail and GMX mailboxes.

    Here is what I have in the mail header (for Yahoo in this example):

        Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90)
          by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
        Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de)
          by sXXXXXXXXX.online.de with esmtp (Exim 4.72)
          (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8
          for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
        Received: from 83.136.130.41 (IP may be forged by CGI script)
          by infongdXXXXX.rtr.kundenserver.de with HTTP
          id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
        From: SENDER <[email protected]>
        To: "[email protected]" <[email protected]>
        Date: Fri, 15 Jun 2012 11:45:20 +0200
        Subject: This is the subject of the newsletter
        Thread-Topic: This is the subject of the newsletter
        Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
        List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
        X-MS-Has-Attach:
        X-Auto-Response-Suppress: All
        X-MS-TNEF-Correlator:
        x-originating-ip: [12.345.678.90]
        authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com;
          domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
        errors-to: "SENDER" <[email protected]>
        received-spf: none (sXXXXXXXXX.online.de: domain of [email protected]
          does not designate permitted sender hosts)
        x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
        x-sender-info: <[email protected]>
        content-length: 13762
        Content-Type: multipart/alternative;
          boundary="_000_7471797868716571796675707173696675806577726778666766687_"
        MIME-Version: 1.0

    I cannot see any direct spam filter message in this. But I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After I searched a bit, it seems that this is a special 'feature' of 1und1 mail servers.

    Here are my questions: Is it possible that, if I get rid of the 'IP may be forged' part, the mail is no longer regarded as spam? If so, does anyone know how I can get rid of it in Drupal?
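    For what it's worth, the "received-spf: none ... does not designate permitted sender hosts" line above is usually a stronger spam signal than the forged-IP remark. A minimal sketch of an SPF record that would address it, assuming the domain's DNS zone is editable (the IP below is the obfuscated sending server from the header, not a real address):

        ; hypothetical zone entry for example.com: authorize the 1und1 host to send
        example.com.  IN  TXT  "v=spf1 a mx ip4:12.345.678.90 ~all"

    With DKIM signing added on top (both domainkeys and dkim report "neutral (no sig)" in the header), the forged-IP line typically stops mattering to the big providers.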


  • Throwing exception from a property when my object state is invalid

    - by Rumi P.
    Microsoft guidelines say: "Avoid throwing exceptions from property getters", and I normally follow that. But my application uses Linq2SQL, and there is the case where my object can be in an invalid state because somebody or something wrote nonsense into the database. Consider this toy example:

        [Table(Name="Rectangle")]
        public class Rectangle
        {
            [Column(Name="ID", IsPrimaryKey = true, IsDbGenerated = true)]
            public int ID {get; set;}

            [Column(Name="firstSide")]
            public double firstSide {get; set;}

            [Column(Name="secondSide")]
            public double secondSide {get; set;}

            public double sideRatio
            {
                get { return firstSide/secondSide; }
            }
        }

    Here, I could write code which ensures that my application never writes a Rectangle with a zero-length side into the database. But no matter how bulletproof I make my own code, somebody could open the database with a different application and create an invalid Rectangle, especially one with a 0 for secondSide. (For this example, please forget that it is possible to design the database in a way such that writing a side length of zero into the rectangle table is impossible; my domain model is very complex and there are constraints on model state which cannot be expressed in a relational database.)

    So, the solution I am gravitating to is to change the getter to:

        get
        {
            if (firstSide > 0 && secondSide > 0)
                return firstSide/secondSide;
            else
                throw new System.InvalidOperationException("All rectangle sides should have a positive length");
        }

    The reasoning behind not throwing exceptions from properties is that programmers should be able to use them without having to take precautions about catching and handling them. But in this case, I think that it is OK to continue to use this property without such precautions:

    - If the exception is thrown because my application wrote a non-positive rectangle side into the database, then this is a serious bug. It cannot and shouldn't be handled in the application, but there should be code which prevents it. It is good that the exception is visibly thrown, because that way the bug is caught.
    - If the exception is thrown because a different application changed the data in the database, then handling it is outside of the scope of my application. So I can't do anything about it if I catch it.

    Is this a good enough reasoning to get over the "avoid" part of the guideline and throw the exception? Or should I turn it into a method after all? Note that in the real code, the properties which can have an invalid state feel less like the result of a calculation, so they are "natural" properties, not methods.
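    A minimal sketch of the method alternative the author mentions at the end, using the Try-pattern so callers are forced to acknowledge that the stored state may be invalid (the method name is illustrative, not from the original post):

        // Hypothetical alternative to the throwing getter, added inside Rectangle:
        // returns false instead of throwing when the persisted state is invalid.
        public bool TryGetSideRatio(out double ratio)
        {
            if (firstSide > 0 && secondSide > 0)
            {
                ratio = firstSide / secondSide;
                return true;
            }
            ratio = double.NaN;
            return false;
        }

    The trade-off is the same one the question describes: the Try-pattern makes corrupt data a routine condition callers can silently ignore, whereas the throwing getter makes it loudly visible.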


  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There could only be two reasons for this. Either the developer or DBA didn't know the difference between good and bad practices, or concealed the debt. Neither reflects well on their professional competence.

    Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you're guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on.

    It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It's far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you've inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It's often hard to justify this 'debt paying' time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor.

    There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later.

    However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor that one tracks a bug.

    What's the best way to do this? Ideally, we'd have some kind of objective assessment of the level of technical debt in a software project, although that smacks of Science Fiction even as I write it. I'd be interested to hear of any methods you've used, but I'm sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager...

    Cheers, Tony.


  • Phone number in meta description: bad or good for local rankings (NAP)?

    - by bybe
    Once again I'm at it with increasing people's local rankings, and I've learnt so much about local rankings in the past 2 weeks it feels like my brain is gonna pop. Anyway, the question is fairly simple for someone who engages in local rankings, and I appreciate the answer may involve a little guesswork, but isn't SEO mostly guessing anyway?

    From what I've read and learned, Google works off a system called NAP for local rankings (with many other factors, but this question is purely based on NAP). For people who care about local rankings, NAP stands for Name of Business / Address of Business / Phone Number for Business. Now, from what I've read, you don't need the whole NAP to be on one website; a P or just an N can help towards your local rankings. It's believed that NAP rewards more than just P and N, for example, but knowing Google they might have a diversity checker, which is my concern, as you'll see in a moment. Now of course sites are weighted differently depending on where your business is posted; it's certainly going to be more credible if your NAP details are in your national phone book than, say, on a blog site, so take this into consideration too.

    Pure guess (not part of the question, but it makes a good read on my belief): my guesswork would make me believe that the formula looks something like (N)+(A)+(P)x(T), where:

    - (N)ame is 1 or 0 to indicate present or not
    - (A)ddress is 1 or 0 to indicate present or not
    - (P)hone is 1 or 0 to indicate present or not
    - (T)rust is 1-100 to indicate level of trust

    So a phone number appearing on YouTube might look something like 0+0+1x95 = 95, and a NAP appearing in your national phone book might look something like 1+1+1x100 = 300. Please note that I'm not saying this is the sole factor, and I'm sure it's way more complex than this, with other factors on the page and off the page (reviews, links, clicks) and so on, but it's still a contributor.

    The question: my question is fairly simple, and I'd imagine hard to impossible to answer definitively, but maybe someone has seen official wording elsewhere on this: is it bad to include the address or phone number in the meta description? The reason I ask is that one of my competitors has these elements in their meta descriptions and their local rankings are absolutely superb. The problem I have with this is scraper sites like 'Similar Too', 'Seo Rankings' and thousands of the other scraper networks that scrape sites and then make URLs with your site information; they are mostly limited to your meta description, which means that your phone number, address and sometimes even your company name (if the domain is exact) will appear as AP, and even NAP, on thousands of websites.

    So, is it a bad strategy to include phone number and address in the meta description? Everything I've read would suggest it's good, with the downside of maybe lowering the quality of the description for click-throughs, but top rankings would increase those tenfold anyhow.


  • Bridging 10GbE with 12.04 - bridging works but the bridging computer has no internet access

    - by Donal
    I have been trying to get 12.04 bridging working with two 10GbE cards. I have two 10GbE cards in a Linux box being used only for this bridge: one with two 10GBase-T ports and another with a single CX4 port. Two client computers are connected with 10GBase-T cards, and the CX4 card connects to a ProCurve switch.

    I can get the bridging happening mostly the way that I want: the clients receive DHCP information from the DHCP server (not the bridging machine) and can connect to and properly see the rest of the network. Speeds are OK; not amazing, but working on that is another matter.

    My problem is that the bridging machine has no internet access, meaning I can't update anything or apt-get anything. It can ping all other machines on the local network. I've tried the helpful hints from https://help.ubuntu.com/community/NetworkConnectionBridge ("Enabling Internet Use on the Bridging Computer") and get the following: RTNETLINK answers: File exists. And dhclient br0 does nothing for me :( I think, if it is anything, it is a multiple-route problem, as both br0 and eth4 have IP addresses, even though I have only set it up so that br0 has one.

    Bridge setup details, /etc/network/interfaces (eth2 and eth3 are the 10GBase-T ports; eth4 is the CX4 connection):

        auto br0
        iface br0 inet static
            address 192.168.0.246
            netmask 255.255.255.0
            gateway 192.168.0.1
            broadcast 192.168.0.255
            dns-nameservers 192.168.0.1
            dns-search example.com
            dns-domain example.com
            pre-up ip link set eth2 down
            pre-up ip link set eth3 down
            pre-up ip link set eth4 down
            pre-up brctl addbr br0
            pre-up brctl addif br0 eth4 eth3 eth2
            pre-up ip addr flush dev eth3
            pre-up ip addr flush dev eth2
            pre-up ip addr flush dev eth4
            post-down ip link set eth4 down
            post-down ip link set eth2 down
            post-down ip link set eth3 down
            post-down ip link set br0 down
            post-down brctl delif br0 eth2 eth3 eth4
            post-down brctl delbr br0

    ifconfig -a:

        br0    Link encap:Ethernet  HWaddr 00:15:17:22:20:34
               inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
               inet6 addr: fe80::215:17ff:fe22:2034/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:4957 errors:0 dropped:0 overruns:0 frame:0
               TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:596320 (596.3 KB)  TX bytes:139952 (139.9 KB)

        eth4   Link encap:Ethernet  HWaddr 00:60:dd:47:7c:05
               inet addr:192.168.0.57  Bcast:192.168.0.255  Mask:255.255.255.0
               inet6 addr: fe80::260:ddff:fe47:7c05/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
               RX packets:15391 errors:0 dropped:51 overruns:0 frame:0
               TX packets:1207 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:5916769 (5.9 MB)  TX bytes:154312 (154.3 KB)
               Interrupt:70

    route -n:

        Kernel IP routing table
        Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
        0.0.0.0       192.168.0.1   0.0.0.0          UG     0       0    0    eth4
        0.0.0.0       192.168.0.1   0.0.0.0          UG     100     0    0    br0
        169.254.0.0   0.0.0.0       255.255.0.0      U      1000    0    0    br0
        192.168.0.0   0.0.0.0       255.255.255.0    U      0       0    0    br0
        192.168.0.0   0.0.0.0       255.255.255.0    U      1       0    0    eth4
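    Since br0 and eth4 both carry an address and a default route in the output above, one hedged, non-persistent test of the multiple-route hypothesis is to strip eth4 back to a bare bridge port and see whether internet access returns (standard iproute2 commands; addresses taken from the route table above):

        # eth4 should carry no address of its own once it is enslaved to br0
        sudo ip addr flush dev eth4
        # drop the duplicate default route that points out eth4
        sudo ip route del default via 192.168.0.1 dev eth4
        # br0's default route should now be the only one
        ip route show

    If that restores access, the remaining step is finding out what re-addresses eth4 at boot (e.g. a stray dhclient) and making the flush stick.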


  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high quality code. A month ago I wrapped up my evaluation of the previous version of NDepend.

    The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq

    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    This same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike("I")
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window. The Queries and Rules Edit window replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor. However, it has a few annoyances. The error indicator is a red block, which has the tendency of obscuring your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly.

    CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now possible, making it easy to eliminate generated code from the analysis.

    Should you Buy?

    I recommend NDepend overall. It has some rough points for me that I have detailed in my earlier evaluation (starting here). But it's definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0; or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes
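    A footnote on CQLinq's expressiveness: a hypothetical rule in the same style as the example above, sketching the kind of metric-based query the CQL subset handled less naturally (the NbLinesOfCode property is assumed from NDepend's metric naming and is not verified against the 4.0 API):

        // hypothetical CQLinq rule: flag methods longer than 30 lines of code
        warnif count > 0
        from m in Methods
        where m.NbLinesOfCode > 30
        select m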


  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D) and was unable to do it. After reading Wikipedia, I still don't have the precise answer. According to Wikipedia (slightly modified):

        There are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services.

    The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still: how does one determine whether something is new knowledge or existing knowledge which is just rediscovered?

    Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That's still not clear enough. How does one qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase?

    For example, does it belong to development or R&D to:

    - Develop an engine which abstracts the access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) which need to access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - Design a new communication protocol to allow faster replication of data between two data centers of the company?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
    - Enhance an existing application by adding gestures on tactile screens, after studies and testing show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
    - Create a Domain-Specific Language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?


  • A Bite With No Teeth – Demystifying Non-Compete Clauses

    - by D'Arcy Lussier
    *DISCLAIMER: I am not a lawyer and this post in no way should be considered legal advice. I'm also in Canada, so references made are to Canadian court cases.

    I received a signed letter the other day, a reminder from my previous employer about some clauses associated with my employment and entry into an employee stock purchase program. So since this is in effect for the next 12 months, I guess I'm not starting that new job tomorrow. I'm kidding, of course. How outrageous, how presumptuous, pompous, and arrogant that a company, any company, would actually place these conditions upon an employee. And yet, this is not uncommon. Especially in the IT industry, we see similar wording in our employment agreements time and again. But are these legal? Is there any teeth behind the threat of the bite? Luckily, the answer seems to be 'No'. I want to highlight two cases that support this.

    The first is Lyons v. Multari. In a nutshell: a dentist hires a younger dentist to be an associate. In their short, handwritten agreement, a non-compete clause was written stating "Protective Covenant. 3 yrs. - 5mi" (meaning you can't set up shop within 5 miles for 3 years). Well, the young dentist left and did start an oral surgery office within 5 miles and within 3 years. Off to court they go! The initial judge sided with the older dentist, but on appeal it was overturned. Feel free to read the transcript of the decision here, but let me highlight one portion from section [19]:

        The general rule in most common law jurisdictions is that non-competition clauses in employment contracts are void.

    The sections following [19] explain further, and discuss Elsley v. J.G. Collins Insurance Agency Ltd. and its impact on Canadian law in this regard.

    The second case is Winnipeg Livestock Sales Ltd. v. Plewman. Desmond Plewman is an auctioneer, and worked at Winnipeg Livestock Sales. Part of his employment agreement was that he could not work for a competitor for 18 months if he left the company. Well, he left, and took up an important role in a competing company. The case went to court and, as with Lyons v. Multari, the initial judge found in favour of the plaintiffs. Also as in the first case, that was overturned on appeal. Again, read through the transcript of the decision, but consider section [28]:

        In other words, even though Plewman has a great deal of skill as an auctioneer, Winnipeg Livestock has no proprietary interest in his professional skill and experience, even if they were acquired during his time working for Winnipeg Livestock. Thus, Winnipeg Livestock has the burden of establishing that it has a legitimate proprietary interest requiring protection. On this key question there is little evidence before the Court. The record discloses that part of Plewman's job was to "mingle with the … crowd" and to telephone customers and prospective customers about future prospects for the sale of livestock. It may seem reasonable to assume that Winnipeg Livestock has a legitimate proprietary interest in its customer connections; but there is no evidence to indicate that there is any significant degree of "customer loyalty" in the business, as opposed to customers making choices based on other considerations such as cost, availability and the like.

    So are there any incidents where a non-compete can actually be valid? Yes, and these are considered "exceptional" cases, meaning that the situation meets certain circumstances. Michael Carabash has a great blog series discussing the above mentioned cases as well as the difference between a non-compete and a non-solicit agreement. He talks about the exceptional criteria:

        In summary, the authorities reveal that the following circumstances will generally be relevant in determining whether a case is an "exceptional" one so that a general non-competition clause will be found to be reasonable:
        - The length of service with the employer.
        - The amount of personal service to clients.
        - Whether the employee dealt with clients exclusively, or on a sustained or recurring basis.
        - Whether the knowledge about the client which the employee gained was of a confidential nature, or involved an intimate knowledge of the client's particular needs, preferences or idiosyncrasies.
        - Whether the nature of the employee's work meant that the employee had influence over clients in the sense that the clients relied upon the employee's advice, or trusted the employee.
        - If competition by the employee has already occurred, whether there is evidence that clients have switched their custom to him, especially without direct solicitation.
        - The nature of the business with respect to whether personal knowledge of the clients' confidential matters is required.
        - The nature of the business with respect to the strength of customer loyalty, how clients are "won" and kept, and whether the clientele is a recurring one.
        - The community involved and whether there were clientele yet to be exploited by anyone.

    I close this blog post with a final quote, one from Zvulony & Co's blog post on this subject. Again, all of this is not official legal advice, but I think we can see what all these sources are pointing towards. To answer my earlier question: there's no teeth behind the threat of the bite.

        In light of this list, and the decisions in Lyons and Orlan, it is reasonably certain that in most employment situations a non-competition clause will be ineffective in protecting an employer from a departing employee who wishes to compete in the same business. The Courts have been relatively consistent in their position that if a non-solicitation clause can protect an employer's interests, then a non-competition clause is probably unreasonable. Employers (or their solicitors) should avoid the inclination to draft restrictive covenants in broad, catch-all language. Or in other words, when drafting a restrictive covenant - take only what you need!

    D


  • iwlwifi on Lenovo Z570 disabled by hardware switch

    - by Kevin Gallagher
    It was working fine with Windows 7. The hardware switch is not disabled; I've toggled it back and forth dozens of times. The wifi light never turns on and it always lists as hardware disabled. I have the latest updates installed. I've been searching for solutions, but none of them seem to work for me. I've tried removing acer-wmi. I've tried setting 11n_disable=1. I've tried resetting the BIOS. I've tried using rfkill to unblock (it only removes the soft block). I've rebooted dozens of times. The wifi light turns off as soon as GRUB loads.

    Edit: I have a USB Edimax wireless NIC. It shows as hardware disabled as well (although rfkill lists it as unblocked). If I unload iwlwifi, the USB NIC works fine.

    uname -a:

        Linux xxx-Ideapad-Z570 3.2.0-55-generic #85-Ubuntu SMP Wed Oct 2 12:29:27 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    rfkill list:

        19: phy18: Wireless LAN
            Soft blocked: no
            Hard blocked: yes

    dmesg:

        [43463.022996] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:
        [43463.023002] Copyright(c) 2003-2011 Intel Corporation
        [43463.023107] iwlwifi 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [43463.023190] iwlwifi 0000:03:00.0: setting latency timer to 64
        [43463.023253] iwlwifi 0000:03:00.0: pci_resource_len = 0x00002000
        [43463.023257] iwlwifi 0000:03:00.0: pci_resource_base = ffffc900057c8000
        [43463.023261] iwlwifi 0000:03:00.0: HW Revision ID = 0x0
        [43463.023797] iwlwifi 0000:03:00.0: irq 43 for MSI/MSI-X
        [43463.024013] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C
        [43463.024250] iwlwifi 0000:03:00.0: L1 Enabled; Disabling L0S
        [43463.045496] iwlwifi 0000:03:00.0: device EEPROM VER=0x15d, CALIB=0x6
        [43463.045501] iwlwifi 0000:03:00.0: Device SKU: 0X50
        [43463.045504] iwlwifi 0000:03:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3
        [43463.045542] iwlwifi 0000:03:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
        [43463.045744] iwlwifi 0000:03:00.0: RF_KILL bit toggled to disable radio.
        [43463.047652] iwlwifi 0000:03:00.0: loaded firmware version 39.31.5.1 build 35138
        [43463.047823] Registered led device: phy18-led
        [43463.047895] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
        [43463.048037] ieee80211 phy18: Selected rate control algorithm 'iwl-agn-rs'
        [43463.055533] ADDRCONF(NETDEV_UP): wlan0: link is not ready

    nm-tool:

        State: connected (global)

        - Device: wlan0 ----------------------------------------------------------------
          Type:              802.11 WiFi
          Driver:            iwlwifi
          State:             unavailable
          Default:           no
          HW Address:        74:E5:0B:4A:9F:C2

          Capabilities:

          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes

          Wireless Access Points

    lshw -C network:

        *-network DISABLED
            description: Wireless interface
            product: Centrino Wireless-N 1000 [Condor Peak]
            vendor: Intel Corporation
            physical id: 0
            bus info: pci@0000:03:00.0
            logical name: wlan0
            version: 00
            serial: 74:e5:0b:4a:9f:c2
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-55-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg
            resources: irq:43 memory:f1500000-f1501fff

    lspci:

        03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak]
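    One avenue the post doesn't mention, offered as a hedged suggestion: on Ideapad models the ideapad_laptop platform module (the counterpart of the acer-wmi module already tried) owns the wifi rfkill switch and has been known to report a phantom hard block. Unloading it is a quick, reversible test:

        # test whether the platform driver is asserting the phantom hard block
        sudo modprobe -r ideapad_laptop
        rfkill list    # "Hard blocked" should now read: no
        # if that helps, make it permanent:
        # echo "blacklist ideapad_laptop" | sudo tee /etc/modprobe.d/blacklist-ideapad.conf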


  • The Buzz at the JavaOne Bookstore

    - by Janice J. Heiss
    I found my way to the JavaOne bookstore, a hub of activity. Who says brick and mortar bookstores are dead? I asked what was hot and got two answers: Hadoop in Practice by Alex Holmes was doing well. And Scala for the Impatient by noted Java Champion Cay Horstmann also seemed to be a fast seller.

    Hadoop in Practice

    Hadoop is a framework that organizes large clusters of computers around a problem. It is touted as especially effective for large amounts of data, and is used by such companies as Facebook, Yahoo, Apple, eBay and LinkedIn. Hadoop in Practice collects nearly 100 Hadoop examples and presents them in a problem/solution format with step by step explanations of solutions and designs. It's very much a participatory book intended to make developers more at home with Hadoop.

    The author, Alex Holmes, is a senior software engineer with more than 15 years of experience developing large-scale distributed Java systems. For the last four years, he has gained expertise in Hadoop solving Big Data problems across a number of projects. He has presented at JavaOne and Jazoon and is currently a technical lead at VeriSign. At this year's JavaOne, he is presenting a session with VeriSign colleague Karthik Shyamsunder called "Java: A Perfect Platform for Data Science", where they will explain how the Java platform has emerged as a perfect platform for practicing data science, and also talk about such technologies as Hadoop, Hive, Pig, HBase, Cassandra, and Mahout.

    Scala for the Impatient

    San Jose State University computer science professor and Java Champion Cay Horstmann is the principal author of the highly regarded Core Java. Scala for the Impatient is a basic, practical introduction to Scala for experienced programmers. Horstmann has a presentation summarizing the themes of his book on his website. On the final page he offers an enticing summary of his conclusions:

        * Widespread dissatisfaction with Java + XML + IDEs
              -- Don't make me eat Elephant again
        * A separate language for every problem domain is not efficient
              -- It takes time to master the idioms
        * "JavaScript Everywhere" isn't going to scale
        * Trend is towards languages with more expressive power, less boilerplate
        * Will Scala be the "one ring to rule them"?
        * Maybe
              -- If it succeeds in industry
              -- If student-friendly subsets and tools are created

    The popularity of both books echoed comments by IBM Distinguished Engineer Jason McGee, who closed his part of the Sunday JavaOne keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them: JavaScript, JRuby, Scala, Python and so forth. Java developers increasingly must know the strengths and weaknesses of such languages going forward.


  • Advice for how to handle company pride

    - by user17971
    We have this "amazing" little product built with the latest development methodologies and components with all the bells and whistles. I took over this product maybe 6 months ago and have struggled with it from day one. It is supposedly state of the art because of all its amazing structure: dependency injection and inversion of control from the Unity framework, Hibernate-style persistence, and a domain-driven .NET MVVM/XAML application design to make it streamlined and modular. Even so, I knew from the moment I saw the monolith that it was going to be an uphill struggle for me: a lot of little code-bits scattered all around in neatly organized paradigms. Debugging is difficult, tracing the code is difficult, writing new code is difficult, and although some modifications are surprisingly easy, that doesn't outweigh the problems I have with the code by a long shot.

    When I took over the project I was told that the new management console was ready for delivery and all I had to do was compile it and drop it. This was the beginning of an uphill struggle: our customer didn't agree at all that this was the functionality they had asked for, so I had to modify the program to their specifications. Since the project has pretty much been overdue since I took over, it has always been important that we didn't add or change much of the original system; I could only modify the existing bits. Fast forward to today, when I have finally addressed all their comments and issues with the program, but now I think the users have opened their eyes (even though they saw this program many times) to the fact that they will be going backwards with this new system: that it will be much worse than the tool they have today (and for a long time, since I'm the only resource on the project: project manager, tester, developer, integration specialist, etc.).

    My problem is that I lost faith in this system quite early due to the nature of the program. Although I have made many changes and improvements to the system, I wholeheartedly sympathize with the poor users who are going to start using it. It's not nearly doing all the things it should do. I had a conversation internally with my boss where I told him what I thought about it: that if I were the customer, I wouldn't have spent money developing it.

    So what do I do now? The system is ready, on a staging system, and nobody likes it; it's too slow and boring and does maybe 50% of what they need it to do. Despite how much energy I've put in and how much I've worked around the clock on this project, I wouldn't mind scrapping the system, but we've spent much money (well, my salary) developing it, and my company wants us to be proud of everything we do and advocate it. How will I tackle the contractor when he asks for advice? Surely I can tell him, "this is what we agreed upon based on your use case scenarios", and be done with it? How will I inform my boss about this progress? He knows how I feel about it, but I always get the feeling he lets my criticism pass him by as just hot air, gone tomorrow.


  • determine collision angle on a rotating body

    - by jorb
    Update: new diagram and updated description.

    I have a contact listener set up to try and determine the side on which a collision happened, relative to the body's rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain between the atan2 function, HTML canvas coordinates, and the Box2D rotation information. I know I have to account for this somehow... Screenshot below.

    Questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints?

    I have the following JavaScript so far:

        Ship.prototype.onCollide = function (other_ent, cx, cy) {
            var pos = this.body.GetPosition();

            // collision position relative to body
            var d_cx = pos.x - cx;
            var d_cy = pos.y - cy;

            // length of initial vector
            var len = Math.sqrt(Math.pow(pos.x - cx, 2) + Math.pow(pos.y - cy, 2));

            // body angle - can over-rotate, hence mod 2*Pi
            var ang = this.body.GetAngle() % (Math.PI * 2);

            // vector representing body's angle - same magnitude as the first
            var b_vx = len * Math.cos(ang);
            var b_vy = len * Math.sin(ang);

            // dot product of the two vectors
            var dot_prod = d_cx * b_vx + d_cy * b_vy;

            // new calculation of difference in angle - NOT WORKING!
            var d_ang = Math.acos(dot_prod);

            var side;
            if (Math.abs(d_ang) < Math.PI / 2) side = "front";
            else side = "back";

            console.log("length", len);
            console.log("pos:", pos.x, pos.y);
            console.log("offs:", d_cx, d_cy);
            console.log("body vec", b_vx, b_vy);
            console.log("body angle:", ang);
            console.log("dot product", dot_prod);
            console.log("result:", d_ang);
            console.log("side", side);
            console.log("------------------------");
        }
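    A hedged note on the "NOT WORKING" line above: Math.acos expects a cosine, i.e. the dot product of unit vectors, so the raw dot product must be divided by the product of the magnitudes. Both vectors here are built with magnitude len, which suggests:

        // normalize before acos; both vectors have magnitude `len` by construction
        // (guard against len === 0 in real code)
        var cos_ang = dot_prod / (len * len);
        // clamp against floating-point drift outside [-1, 1]
        cos_ang = Math.max(-1, Math.min(1, cos_ang));
        var d_ang = Math.acos(cos_ang); // always in [0, PI], so no atan2 domain mismatch

    This sidesteps the atan2/canvas/Box2D domain question entirely, since acos of a normalized dot product is coordinate-convention agnostic.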


  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question; I am a newbie to cloud programming.)

    I am interested in designing (and developing) a (free software) program in C or C++, probably with most of it being meta-programmed, i.e. part of the C code being generated. I am still in the thinking/designing phase, and I might perhaps give up.

    For reference, I am the main architect and implementor of GCC MELT, a domain specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT-to-C/C++ translator being written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack.)

    I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least a massive GCC-compiled free software project like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL databases).

    So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understood that it should provide some web service, probably through a RESTful service, so it should use an HTTP server library like onion. And OpenStack is able to start (e.g. a dozen of) such services. But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, and so on.

    Do you have any references or examples of C / C++ programming for the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?
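    Not an OpenStack answer, but one property every cloud-friendly design seems to circle back to is statelessness: if the daemon keeps no local state between requests, the orchestrator can clone or kill instances freely. A minimal sketch of that shape in plain C, using hand-rolled HTTP over POSIX sockets rather than onion (whose exact API I won't vouch for here); error handling is omitted for brevity:

        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void) {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(8080);
            bind(srv, (struct sockaddr *)&addr, sizeof addr);
            listen(srv, 16);
            const char reply[] =
                "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n"
                "Content-Length: 3\r\n\r\nok\n";
            for (;;) {                    /* each request is independent: no local  */
                int c = accept(srv, 0, 0);/* state survives it, so instances can be */
                char buf[1024];           /* replicated or replaced at will         */
                read(c, buf, sizeof buf); /* discard request; always answer 200     */
                write(c, reply, sizeof reply - 1);
                close(c);
            }
        }

    Anything durable (the NoSQL data in the picture above) then lives behind a separate service, which is what lets an orchestrator treat the daemon itself as disposable.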


  • How to interview a natural scientist for a dev position?

    - by Silas
    I have already done some interviews for my company, mostly of computer scientists for dev positions, but also some testers and project managers. Now I have to fill a vacancy in our research group within the R&D department. (Side note: "research" means that we try to solve problems in our professional domain/market niche using software, in research projects together with universities, other companies, research centres and end user organisations. It's not computer science research; we're not going to solve the P=NP problem.)

    Now we have invited a guy holding an MSc in chemistry (with a lot of physics in his CV, too), who never had any computer science lessons. I already talked with him for about half an hour at a local university's career days, and there's no doubt the guy is smart. Also his marks are excellent and he graduated with distinction. For his BSc he needed to teach himself programming in Mathematica and told me believably that he liked programming a lot. He also solved some physical chemistry problem that I probably don't understand using his own software, implemented in Mathematica, for his MSc thesis. It includes a GUI and a notable size of 8,000 LoC.

    He seems to be very attracted by what we're doing in our research group and, to be honest, it's quite difficult for an SME like us to get good people. I am also very interested in hiring him, since he could assist me in writing project proposals, reports, doing presentations and so on. He would probably fit our team, too.

    The only question left is: how can I check whether he will pick up the programming skills he needs to do software implementation in our projects, since this will be a significant part of the job? Of course I will ask him what it is that fascinates him about programming. I'll also ask how he proceeded when writing his natural science software and how he structured it. I'll ask about how he managed to obtain the skills and information about software development he needed. But is there something more I could ask? Something more concrete, perhaps? Should I ask him to explain his Mathematica solution?

    To be clear: I'm not looking for knowledge of a particular language or technology stack. We're a .NET shop in product development, but I want to have a free choice for our research projects. So I'm interested in the meta-competence of being able to learn whatever is actually needed. I hope this question is answerable and not open-ended, since I'd really like to know whether there is a standard way to check for the ability to pick up further programming skills on the job. If something is not clear to you, please leave a comment and let me improve my question.


  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared host to another with Hostgator. GWT notified me about many 403 access denied pages. I've checked with Firebug too, and even though the browser is displaying the full page correctly, the HTTP return code is 403. I've checked the home page, and it is correctly returning a 200 response. The same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I've copied the files and database as-is. I've even cleared all the caches, but no luck. There is only one change: previously the site was the primary domain, but now it's an add-on one. What could be the issue?

    This is how Googlebot fetched the page.

    Fetch as Google:

        URL: http://MYSITE.COM/-----------------REMOVED.html
        Date: Thursday, June 20, 2013 at 10:32:14 PM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 3899

        HTTP/1.1 403 Forbidden
        Date: Fri, 21 Jun 2013 05:32:15 GMT
        Server: Apache
        P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
        Expires: Mon, 1 Jan 2001 00:00:00 GMT
        Cache-Control: post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/
        Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT
        Keep-Alive: timeout=5, max=75
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" >
        <head>
          <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" />
          <meta http-equiv="content-type" content="text/html; charset=utf-8" />
          <meta name="robots" content="index, follow" />
          <meta name="keywords" content="" />
        <<<<<<TRIMMED>>>>>>>>>>>>>>
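    A hedged first check for 403s after a shared-hosting move: add-on domains often need their file modes reset, since copying "as is" can preserve permissions the new server's suEXEC or mod_security setup rejects even while it still serves the page body. The conventional reset, run from the add-on domain's document root:

        # directories 755, files 644: the modes shared hosts normally expect
        find . -type d -exec chmod 755 {} \;
        find . -type f -exec chmod 644 {} \;

    If the modes are already correct, the next suspects would be mod_security rules or .htaccess directives inherited from the primary domain's configuration.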


  • Sharing on Github

    - by Alan
    Over the past couple weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but I don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to Github, choosing public domain licensing. I would like it to be super simple for users to make use of: just FTP it up and go.

    That being said, do I need to make sure I remove things like the jQuery file and other GPL/MIT licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code, and all of it is open source; it would just be nice if users could download everything at once, while of course not trying to represent that I am the license holder of the dependencies.

    Inside my files are also some snippets. Do those have to be externalized with installation instructions, or can they be posted as-is? Here is an example; my nav.php file is 115 lines long and I have these at the top:

        <script type="text/javascript" src="./js/ddaccordion.js">
        /***********************************************
         * Accordion Content script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com)
         * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts
         * This notice must stay intact for legal use
         ***********************************************/
        </script>

        <link href="css/admin.css" rel="stylesheet">

        <script type="text/javascript">
        ddaccordion.init({
            headerclass: "submenuheader",  // Shared CSS class name of headers group
            contentclass: "submenu",       // Shared CSS class name of contents group
            revealtype: "click",           // Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover"
            mouseoverdelay: 200,           // if revealtype="mouseover", set delay in milliseconds before header expands onMouseover
            collapseprev: false,           // Collapse previous content (so only one open at any time)? true/false
            defaultexpanded: [],           // index of content(s) open by default [index1, index2, etc] [] denotes no content
            onemustopen: false,            // Specify whether at least one header should be open always (so never all headers closed)
            animatedefault: false,         // Should contents open by default be animated into view?
            persiststate: true,            // persist state of opened contents within browser session?
            toggleclass: ["", ""],         // Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"]
            togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], // Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs)
            animatespeed: "fast",          // speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow"
            oninit: function (headers, expandedindices) {
                // custom code to run when headers have initialized
                // do nothing
            },
            onopenclose: function (header, index, state, isuseractivated) {
                // custom code to run whenever a header is opened or closed
                // do nothing
            }
        })
        </script>


  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off-the-shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to only migrate some of the widgets from the legacy system, based on the date of last sale activity.

    Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human-readable id that is searchable, and a brief description (which may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines: a screen for marketing data, a screen for sales history, etc.

    UML ahead! I am probably using the wrong kind of arrows here, so please forgive me. The current solution, which is not in production yet, is that both systems are queried and the controller merges the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire; 15 search results typically run about 60k of unintelligible SOAP-wrapped XML. So I would prefer to avoid querying this system directly.

    These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed the alternative of a small search database fed by those events. However, I am being told that 'adding another database' will create more maintenance down the road. I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code.

    I want to get a feel for which system is more maintainable in the long run. I personally have not had the burden of maintaining any large system. I want something more than my gut. Specifically, I'd like to know whether having more, specialized physical databases is more or less maintainable than having fewer, larger physical databases.


  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration

    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims

    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered
            // create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User Registration

    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy, eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to re-sign-in.

    Authorization

    All pages that should only be reachable by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View();
        }

    HTH
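    A footnote on the [RegisteredUsersOnly] attribute, which the excerpt references but doesn't show. A minimal sketch of how it might look, assuming ASP.NET MVC's AuthorizeAttribute and a claim type defined alongside the sample's constants (Constants.ClaimTypes.IsRegistered is assumed by analogy with Constants.ClaimTypes.Id above, not taken from the download):

        // hypothetical implementation; requires System.Linq, System.Web.Mvc
        // and Microsoft.IdentityModel.Claims
        public class RegisteredUsersOnlyAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                var principal = httpContext.User as IClaimsPrincipal;
                if (principal == null || !principal.Identity.IsAuthenticated)
                    return false;

                // registered users carry the application-defined claim
                return principal.Identities[0].Claims
                    .Any(c => c.ClaimType == Constants.ClaimTypes.IsRegistered);
            }
        }

    Unregistered (but authenticated) users then fall through to the passive redirect described under Configuration.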


  • How quickly to leave contract-to-hire gig where you don't want to be hired? [closed]

    - by nono
    So you move to a big new city with tons of software development opportunity, having taken a six month contract-to-hire job. The company treats you really well and has a good team and work environment. However, the recruiter assured you when offering the gig that it would be a good position in which you can advance your learning from more senior developers (a primary concern of yours), but you're starting to realize that a job recruiter isn't going to understand that the team in question isn't very up on modern software practices (you start to sympathize with this guy and read his post over and over again: http://stackoverflow.com/questions/1586166/career-killer-nhibernate-oop-design-patterns-domain-driven-design-test-driv), and that much of the company's software is very old and very, very poorly architected. The company (like so many others) seems to be only concerned with continually extending the software without investing in any structural improvements.

    You're absolutely dismayed at how long it takes your team (yourself included) to fulfill simple feature requests (maybe 500-1000% longer than with better-designed software that you've worked on in the past), but no one else there seems to think anything of it. You find that the work and the company's business are intensely uninteresting to you, but due to the convoluted design of their various software systems, fulfilling the work will require as much mental engagement as any other development position. You feel a bit naive about not having asked the right questions during your interview process, and for not having anticipated that your team at your former podunk company might possibly be light-years ahead of any team in Big Shiny City, but you know you don't want to stay at this place, and (were it not for your personal, after-hours studying and personal programming efforts) fear that you might actually give a worse interview after completing your 6 months than you did when you started at the place.

    You read about how hard of a time local companies are having filling their positions with qualified software development candidates. You read all sorts of fabulous-sounding job postings online and feel like you're really missing out. In spite of the comfortable environment, you feel like you would willingly accept a somewhat more demanding or aggressive lifestyle to feel like you are learning and progressing and producing something meaningful.

    My questions are: how quickly do you leave, and how do you go about giving a polite reason for departing? The contract is written to allow them to "can" you and to allow you to leave with 2 weeks notice. Do you ethically owe the 6 months? Upon taking the position, the company told you they were not interested in candidates who were intending to only stay for 6 months and then leave (you were not intending to bail after 6 months, at that time), so perhaps they might be fine if you split now, knowing that you don't want to stick around for the full-time hire?


  • Upgrading Oracle Enterprise Manager: 12c to 12c Release 2

    - by jorge_neidisch
    1 - Download OEM 12c R2. It can be downloaded here: http://www.oracle.com/technetwork/oem/grid-control/downloads/index.html (note: it is a set of three huge zips).

    2 - Unzip the archives.

    3 - Create a directory (as the OEM owner user) where the upgraded Middleware should be installed. For instance:

        $ mkdir /u01/app/oracle/Middleware12cR2

    4 - Back up the OMS (Middleware home and inventory), Management Repository and Software Library. See: http://docs.oracle.com/cd/E24628_01/doc.121/e24473/ha_backup_recover.htm#EMADM10740

    5 - Ensure that the Management tables don't have snapshots:

        SQL> select master, log_table from all_mview_logs where log_owner='<EM_REPOS_USER>';

    If there are snapshots, drop them:

        SQL> drop snapshot log on <master>;

    6 - Copy the emkey from the existing OMS:

        $ <OMS_HOME>/bin/emctl config emkey -copy_to_repos [-sysman_pwd <sysman_pwd>]

    To verify whether the emkey is copied, run the following command:

        $ <OMS_HOME>/bin/emctl status emkey

    If the emkey is copied, then you will see the following message:

        The EMKey is configured properly, but is not secure.
        Secure the EMKey by running "emctl config emkey -remove_from_repos".

    7 - Stop the OMS and the Agent:

        $ <OMS_HOME>/bin/emctl stop oms
        $ <AGENT_HOME>/bin/emctl stop agent

    8 - From the unzipped directory, run:

        $ ./runInstaller

    8a - Follow the wizard: Email / MOS; Software Updates: disable or leave empty.
    8b - Follow the wizard: Installation type: Upgrade -> One System Upgrade.
    8c - Installation Details: Middleware home location: enter the directory created in step 3.
    8d - Enter the DB Connection Details. Credentials for SYS and SYSMAN.
    8e - A dialog comes up: Stop the Job Gathering: click 'Yes'.
    8f - A warning comes up: click 'OK'.
    8g - Select the plugins to deploy along with the upgrade process.
    8h - Extend WebLogic: enter the password (recommended: the same password as for the SYSMAN user). A new directory will be created; recommended: /u01/app/oracle/Middleware12cR2/gc_inst
    8i - Let the upgrade proceed by clicking 'Install'.
    8j - Run the following script (as root) and finish the installation:

        $ /u01/app/oracle/Middleware12cR2/oms/allroot.sh

    9 - Start the Agent:

        $ <AGENT_HOME>/bin/emctl start agent

    Note that the $AGENT_HOME might be located in the old Middleware directory:

        $ /u01/app/oracle/Middleware/agent/agent_inst/bin/emctl start agent

    10 - Go to the EM UI. Select the WebLogic Target and choose the option "Refresh WebLogic Domain" from the menu.

    11 - Update the Agents: Setup -> Manage Cloud Control -> Upgrade Agents -> Add (+). Note that the agents may take long to show up.

    ... and that's it! Or that should be it!

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000, "A Survey of Software Reuse Repositories":

    Defense Software Repository System (DSRS)

    The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects...

    ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts...

    The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ...

    [14] DSRS - Defense Technology for Adaptable, Reliable Systems. URL: http://ssed1.ims.disa.mil/srp/dsrspage.html
    [15] STARS - Software Technology for Adaptable, Reliable Systems. URL: http://www.stars.ballston.paramax.com/index.html
    [16] D. E. Perry and S. S. Popovitch, "Inquire: Predicate-based use and reuse," in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993.

    Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

    Read the article

  • JGridView

    - by Geertjan
    JGrid was announced last week, so I wanted to integrate it into a NetBeans Platform app. I.e., I'd like to use Nodes instead of the DefaultListModel that is supported natively, so that I can integrate with the Properties Window, for example. Here's how:

        import de.jgrid.JGrid;
        import java.beans.PropertyVetoException;
        import javax.swing.DefaultListModel;
        import javax.swing.JScrollPane;
        import javax.swing.event.ListSelectionEvent;
        import javax.swing.event.ListSelectionListener;
        import org.book.domain.Book;
        import org.openide.explorer.ExplorerManager;
        import org.openide.nodes.Node;
        import org.openide.util.Exceptions;

        public class JGridView extends JScrollPane {

            @Override
            public void addNotify() {
                super.addNotify();
                final ExplorerManager em = ExplorerManager.find(this);
                if (em != null) {
                    final JGrid grid = new JGrid();
                    Node root = em.getRootContext();
                    final Node[] nodes = root.getChildren().getNodes();
                    final Book[] books = new Book[nodes.length];
                    for (int i = 0; i < nodes.length; i++) {
                        Node node = nodes[i];
                        books[i] = node.getLookup().lookup(Book.class);
                    }
                    grid.getCellRendererManager().setDefaultRenderer(new OpenLibraryGridRenderer());
                    grid.setModel(new DefaultListModel() {
                        @Override
                        public int getSize() {
                            return books.length;
                        }
                        @Override
                        public Object getElementAt(int i) {
                            return books[i];
                        }
                    });
                    grid.setUI(new BookshelfUI());
                    grid.getSelectionModel().addListSelectionListener(new ListSelectionListener() {
                        @Override
                        public void valueChanged(ListSelectionEvent e) {
                            // Somehow compare the selected item
                            // with the list of books and find a matching book:
                            int selectedIndex = grid.getSelectedIndex();
                            for (int i = 0; i < nodes.length; i++) {
                                String nodeName = books[i].getTitel();
                                if (String.valueOf(selectedIndex).equals(nodeName)) {
                                    try {
                                        em.setSelectedNodes(new Node[]{nodes[i]});
                                    } catch (PropertyVetoException ex) {
                                        Exceptions.printStackTrace(ex);
                                    }
                                }
                            }
                        }
                    });
                    setViewportView(grid);
                }
            }
        }

    Above, you see references to OpenLibraryGridRenderer and BookshelfUI, both of which are part of the "JGrid-Bookshelf" sample in the JGrid download. The above is specific to Book objects, i.e., that's one of the samples that comes with the JGrid download. I need to make the above more general, so that any kind of object can be handled without requiring changes to the JGridView. Once you have the above, it's easy to integrate it into a TopComponent, just like any other NetBeans explorer view.
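    A minimal sketch of one way the view could be generalized: put the Node objects themselves in the model and match selection by index, so no domain class (Book) or sample-specific renderer/UI is hard-wired. The class name below is illustrative, and the JGrid calls are assumed to behave as in the code above:

        import de.jgrid.JGrid;
        import java.beans.PropertyVetoException;
        import javax.swing.AbstractListModel;
        import javax.swing.JScrollPane;
        import javax.swing.event.ListSelectionEvent;
        import javax.swing.event.ListSelectionListener;
        import org.openide.explorer.ExplorerManager;
        import org.openide.nodes.Node;
        import org.openide.util.Exceptions;

        public class GenericGridView extends JScrollPane {

            @Override
            public void addNotify() {
                super.addNotify();
                final ExplorerManager em = ExplorerManager.find(this);
                if (em == null) {
                    return;
                }
                final JGrid grid = new JGrid();
                final Node[] nodes = em.getRootContext().getChildren().getNodes();

                // Model over the nodes themselves; a custom cell renderer can
                // pull any domain object out of each node's Lookup.
                grid.setModel(new AbstractListModel() {
                    @Override
                    public int getSize() {
                        return nodes.length;
                    }
                    @Override
                    public Object getElementAt(int i) {
                        return nodes[i];
                    }
                });

                // Match selection by index instead of comparing titles.
                grid.getSelectionModel().addListSelectionListener(new ListSelectionListener() {
                    @Override
                    public void valueChanged(ListSelectionEvent e) {
                        int i = grid.getSelectedIndex();
                        if (i >= 0 && i < nodes.length) {
                            try {
                                em.setSelectedNodes(new Node[]{nodes[i]});
                            } catch (PropertyVetoException ex) {
                                Exceptions.printStackTrace(ex);
                            }
                        }
                    }
                });
                setViewportView(grid);
            }
        }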

    Read the article

  • How to run WordPress and a Java web app on Tomcat on the same server?

    - by Chantz
    I have to run a WordPress site served via Apache2 and a Java-based webapp using Tomcat on the same server. When users come to example.com or example.com/public-pages they need to be served from WordPress, but when they come to example.com/private-pages they need to be served from Tomcat. I have asked this question on Server Fault, where they suggested using a different port, a different IP, or a sub-domain. I want to go for the different-port solution, since it will mean I need to buy only one SSL certificate. I tried the reverse proxy method by having the following in my default-ssl.conf:

    <VirtualHost _default_:443>
        ServerAdmin webmaster@localhost
        ServerName localhost:443
        DocumentRoot /var/www
        <Directory /var/www>
            # For WordPress
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyRequests Off
        ProxyPass /private-pages ajp://localhost:8009/
        ProxyPassReverse /private-pages ajp://localhost:8009/
        SSLEngine on
        SSLProxyEngine On
        SSLCertificateFile /etc/apache2/ssl/apache.crt
        SSLCertificateKeyFile /etc/apache2/ssl/apache.key
    </VirtualHost>

    As you have noticed, I am using mod_proxy_ajp in Apache2 for this, and my Tomcat is listening on port 8009 and serving content. So now when I go to example.com/private-pages I see the content from my Tomcat. But two issues are happening:

    1. All my static resources are getting 404s, so none of my images, CSS, or JS are loading. I see that the browser requests the resources using the URL example.com/css/*. This clearly cannot work: those requests are handled by Apache against the WordPress document root instead of being proxied to Tomcat on 8009, and there are no such resources in the WordPress directory.

    2. If I go to example.com/private-pages/abcd I am somehow kicked to the WordPress site (which obviously displays a 404 page).

    I can understand why #1 is happening, but have no clue why #2 is happening. Regardless, if there is another clean solution for resolving this, I would appreciate y'all's help.
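    One common fix for #1, as a sketch rather than a drop-in answer: deploy the webapp in Tomcat under the same context path as the proxy prefix (e.g. rename the WAR to private-pages.war, a hypothetical name), so the app generates its links under /private-pages and the prefixes line up end to end:

        # default-ssl.conf: keep the public prefix and the backend context aligned.
        # Assumes the app is deployed at context /private-pages in Tomcat.
        ProxyPass        /private-pages ajp://localhost:8009/private-pages
        ProxyPassReverse /private-pages ajp://localhost:8009/private-pages

    With the current mapping (/private-pages -> ajp://localhost:8009/), the app believes it lives at the server root and emits root-relative asset URLs such as /css/..., which Apache then serves from the WordPress document root. Aligning the two paths removes that mismatch, and may also explain #2 if the app issues redirects to root-relative URLs that fall back into WordPress.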

    Read the article

  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code; the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html; for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP/MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled. However, every few weeks my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes me to notice. Here's the offending line in apache.conf:

    <Directory "/">
        AllowOverride All
        Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch
    </Directory>

    The addition of SymLinksIfOwnerMatch disables the sites in a strange way: the HTML is generated correctly, but all CSS/JS/images in the HTML fail to load, and clicking any link redirects to /. I have no idea why. I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present:

    <IfModule mod_rewrite.c>
    # www.example.com -> example.com
    RewriteCond %{HTTPS} !=on
    RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
    RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

    # Remove query strings from static resources
    RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L]
    RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L]
    RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L]

    # Block access to hidden files and directories
    RewriteCond %{SCRIPT_FILENAME} -d [OR]
    RewriteCond %{SCRIPT_FILENAME} -f
    RewriteRule "(^|/)\." - [F]

    # SLIR ... reroute images to image processor
    RewriteCond %{REQUEST_URI} ^/images/.*$
    RewriteRule ^.*$ - [L]

    # ignore rules if URL is a file
    RewriteCond %{REQUEST_FILENAME} !-f

    # ignore rules if URL is not php
    #RewriteCond %{REQUEST_URI} !\.php$

    # catch-all for routing
    RewriteRule . index.php [L]
    </IfModule>

    I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great, except when SymLinksIfOwnerMatch gets added back into apache.conf. Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755, but that doesn't seem to be enough. My Unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not. Anyway, I'm entering unknown territory. Can anyone recommend the right way to configure a website for multiple sites using one codebase that doesn't rely on apache.conf?
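    One approach worth trying, as a sketch under stated assumptions: cPanel preserves custom directives placed in its global include files across httpd.conf rebuilds, so the override could live there instead of in apache.conf itself. The path below is the conventional EasyApache location and should be verified on your install:

        # /usr/local/apache/conf/includes/post_virtualhost_global.conf
        # Loaded after the generated configuration, so this Directory block
        # re-applies the Options list without SymLinksIfOwnerMatch after
        # each cPanel rebuild.
        <Directory "/">
            AllowOverride All
            Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes
        </Directory>

    Whether the later <Directory "/"> block fully overrides the earlier Options list depends on Apache's merge order, so test after the next rebuild rather than assuming it stuck.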

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is quite different from other public-API questions like this one: Should a website use its own public API? (Second, sorry for my English.) You can find the question summarized at the bottom.

    What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can build on my website's data with a much better approach (of course with some restrictions). Almost everything would be available through the public API. Because of this, I was thinking about making the whole website AJAX driven. Some parts of the API would be limited to my website (domain), like login and registration. On the client side there would be only an INTERFACE, which would use the public and private API to do its work. The website would be CLIENT SIDE ONLY; that is, it would only use AJAX to call the API.

    How do I imagine this? The website would be like a mobile application: the application sends a request to a web server, which returns JSON; the application parses it and uses it to advance (e.g., login).

    My thoughts:

    Pros:
    - The whole website is built in JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth (I hope so).
    - Anyone can use my website's data to make their own cool things. (Is this a con or a pro?)
    - The public API is always in use, so I can see if there are any errors.

    Cons:
    - Without JavaScript the website is unusable.
    - Bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by limiting it with some PHP code and logging (see the sketch below).
    - Probably much more work.

    So the question in a few words: should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (E.g. Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)
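    For the rate-limiting point in the cons list, a minimal sketch of what that PHP code could look like, assuming the APCu extension is available (the function name and limits are illustrative, not from the original post):

        <?php
        // Allow at most $limit API requests per client IP per minute.
        function allowRequest($clientIp, $limit = 60)
        {
            $key = 'rate:' . $clientIp . ':' . date('YmdHi'); // one bucket per minute
            apcu_add($key, 0, 120);   // create the bucket with a 2-minute TTL if absent
            $count = apcu_inc($key);  // increment the counter for this bucket
            return $count !== false && $count <= $limit;
        }

        if (!allowRequest($_SERVER['REMOTE_ADDR'])) {
            http_response_code(429); // Too Many Requests
            error_log('Rate limit exceeded for ' . $_SERVER['REMOTE_ADDR']);
            exit;
        }

    The per-minute key means no cleanup job is needed: each bucket simply expires. For multiple web servers, the same idea works with a shared store such as Redis or Memcached instead of APCu.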

    Read the article

< Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >